Elevate Exploratory Testing with Thinking Hat and Persona-Based Strategies

Payoda Technology Inc
11 min read · Oct 29, 2021


Success, in most cases, is determined by how much effort goes into planning and preparation. Take, for example, a film on which millions are spent. A movie producer’s ROI depends a lot on whether the final budget of the film is more or less equal to what was originally planned. If expenses overshoot, the damage cascades through different levels and brings down the net profit. In such cases, even if the movie does well at the box office, the producer earns far less than originally hoped. Pre-production exists to keep the budget under control. Locations, schedules, set work, call dates for each cast member, a bound script, scene papers, scene rehearsals, a buffer for weather interruptions, alternative scenes to shoot if the planned ones cannot be; all of these and more are planned during the pre-production phase. The more effort and attention to detail in pre-production, the lower the chances of something going wrong during production.

The parallel between exploratory testing and the pre-production phase

[Image: a software developer using a MacBook for testing. Photo by Christina @ wocintechchat.com on Unsplash]

At a recent international testing conference, the top reason managers from different companies gave for needing testing was that it is “insurance for their reputation”. Whatever the reason, there is always a practical limit on what you can spend on testing; if the cost exceeds the plan, your net profit suffers. If teams continue to believe that exploratory testing cannot be planned and is supposed to be unstructured, they could be in a lot of trouble. Exploratory testing with no proper goal or agenda may still turn up a certain number of defects, but the value of the defects found will be disproportionate to the effort invested. Defect slippage is also higher when exploratory testing is unstructured.

We can draw a parallel between the exploratory testing exercise and the pre-production phase in films. Exploratory testing is the phase where testers learn and analyze the different functionalities of the system under test on the basis of their own intuition, insights, and past experience of testing similar applications or modules. By doing exploratory testing, you learn about the product, the market it is going to cater to, and the customer base it will attract. You take an empathetic approach, think about how an end user would perceive the application’s features and instructions, and observe its strengths and weaknesses. Exploratory testing is much more challenging than its scripted counterpart because it requires the tester to actively look, read, think, and analyze carefully in order to unravel critical information and issues. Exploratory testing is more a cognitive, intuitive approach than a technique, which is what makes it valuable in finding major discrepancies.

Where does exploratory testing go wrong?

Most often, while doing exploratory testing, testers tend to lose direction and wander off, and they risk wasting a great deal of time trying to find defects by repeatedly testing the same feature. If you have a team of testers, think of the effort wasted if many of them test the same functionality over and over, without a strategy, clinging to the hope that they will discover bugs to justify the effort. Hope is a good thing, but hope alone is a dangerous thing here. You need a strategy in place that increases the likelihood of finding defects and raising valuable questions.

Session-Based Exploratory Testing

Exploratory testing becomes much more potent when spliced together with a few techniques. Session-based testing is one such technique. Its premise is that the human brain can focus on a specific topic only for a limited period of time, in the same spirit as the Pomodoro Technique. According to the testing expert Jonathan Bach, splitting your exploratory testing phase into shorter windows of around 90 minutes, each a block of time without interruptions of any kind, makes the exercise far more effective by letting the tester focus solely on the functionality at hand. The most important component of the session is the session charter, which clearly defines the goal and provides a brief agenda. After a session is done, the tester and the session owner produce a report of the findings.
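To make the idea concrete, here is a minimal sketch of what a session charter might capture, written as a Python data structure; the field names and the sample mission are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SessionCharter:
    """Illustrative charter for one session of exploratory testing."""
    mission: str                  # the goal of the session
    areas: List[str]              # features or modules in scope
    duration_minutes: int = 90    # one uninterrupted block, per Bach
    tester: str = ""
    findings: List[str] = field(default_factory=list)  # filled during the session

charter = SessionCharter(
    mission="Explore the registration form for input-handling defects",
    areas=["registration", "email validation"],
    tester="Jane",
)
charter.findings.append("Form accepts a 500-character first name silently")
print(f"{charter.mission} ({charter.duration_minutes} min)")
```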

We can infuse a few other ideas into session-based exploratory testing to make it even more effective. One such idea is Edward de Bono’s “Six Thinking Hats” strategy.

Implementation of the Thinking Hat Strategy for Testing

Edward de Bono was a modern-day polymath who introduced the concept of lateral thinking. His “Six Thinking Hats” strategy has been applied successfully across various fields. In testing, the Thinking Hats strategy encourages the tester’s thought process to be more thorough, cohesive, and focused. It is ideally suited to exploratory testing, where the approach has to be multi-faceted and creative while never ignoring the obvious. It can be applied by a single tester or a team of testers, but the test manager should ensure that everyone in the team is wearing the same hat at any given point in time.

The Blue Hat

It is the first hat to be worn, as it plans the rest of the session. The blue hat session allocates time for all the other hats and looks at the bigger picture. The nature of the application under test, the experience of the testers, their knowledge of the product, the phase of the project, the amount of previous testing done, and the purpose of performing exploratory testing are the factors that decide the time allotted for each hat. After all the other hats have been worn, the Blue Hat can be worn again for final decision-making, reporting, and prioritizing findings.
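As an illustration, a Blue Hat plan can be as simple as a weighted split of the session. The weights below are assumptions for a hypothetical registration session, not a recommendation.

```python
# Hypothetical Blue Hat time split for a 90-minute session.
SESSION_MINUTES = 90

hat_weights = {
    "White": 0.15,   # facts and data gathering
    "Red": 0.15,     # subjective look and feel
    "Yellow": 0.20,  # happy-path coverage
    "Black": 0.30,   # negative and destructive tests
    "Green": 0.10,   # unconventional ideas
    "Blue": 0.10,    # wrap-up, reporting, prioritizing
}

plan = {hat: round(SESSION_MINUTES * weight) for hat, weight in hat_weights.items()}
for hat, minutes in plan.items():
    print(f"{hat} Hat: {minutes} min")
```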

The White Hat

It directs testers to work only with what is known: the facts of the application under test, ignoring all assumptions. With careful observation of the application, it’s quite possible to use this phase to unearth previously unknown information. Testers should capture all related data and facts to create data-driven tests. If there are any doubts regarding the requirements, they need to be clarified with the SMEs or the BA.
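The captured facts translate naturally into data-driven tests. Below is a minimal pytest sketch; the field rules are stand-ins for whatever the requirements actually document, and validate_field is a toy function in place of a call to the real system under test.

```python
import re
import pytest

RULES = {  # illustrative "known facts" about the registration form
    "email": lambda v: re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", v) is not None,
    "username": lambda v: 1 <= len(v) <= 30,
}

def validate_field(name: str, value: str) -> bool:
    """Stand-in for the application's validation logic."""
    return RULES[name](value)

@pytest.mark.parametrize("name,value,expected", [
    ("email", "user@example.com", True),
    ("email", "user@", False),
    ("username", "a" * 30, True),   # documented max length: 30
    ("username", "a" * 31, False),  # one over the limit
])
def test_field_follows_documented_rules(name, value, expected):
    assert validate_field(name, value) is expected
```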

The Red Hat

This hat gives importance to each tester’s individual feel for the product. Testers can note what they think is impressive and what isn’t, without being judged and without having to explain. Findings from this stage are based on subjective perception. Accessibility, usability, and UI testing come to the fore here. Testers should make use of style guidelines and standards if they are available. Bring to attention anything you think might annoy the end user, such as response times and UI misalignments. Checking the application for grammatical errors, spelling errors, cosmetic issues, behavior across multiple windows, and tab navigation in forms is all part of the Red Hat phase.
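Some of these observations can be made repeatable. As one example, here is a Selenium sketch that checks whether Tab moves through the registration form in the expected visual order; the element ids and URL are assumptions about the application under test.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

EXPECTED_TAB_ORDER = ["first-name", "last-name", "email", "password", "submit"]

driver = webdriver.Chrome()
driver.get("https://example.com/register")  # placeholder URL

# Focus the first field, then Tab through and verify each stop.
driver.find_element(By.ID, EXPECTED_TAB_ORDER[0]).click()
for expected_id in EXPECTED_TAB_ORDER[1:]:
    driver.switch_to.active_element.send_keys(Keys.TAB)
    actual = driver.switch_to.active_element.get_attribute("id")
    assert actual == expected_id, f"Tab moved to '{actual}', expected '{expected_id}'"

driver.quit()
```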

The Yellow Hat

In this phase, testers focus only on the positives, meaning they test functionality exactly as it is prescribed to be used. This phase is optimistic and does not consider the potential pitfalls along the way.
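A Yellow Hat pass for the registration example might look like the following Selenium sketch: one walk through the prescribed flow with fully valid data. The ids, URL, and success message are assumptions.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/register")  # placeholder URL

# Fill every field with valid data and follow the prescribed flow.
driver.find_element(By.ID, "email").send_keys("new.user@example.com")
driver.find_element(By.ID, "password").send_keys("S3cure-and-valid!")
driver.find_element(By.ID, "submit").click()

# The optimistic expectation: success, with no error states along the way.
assert "Registration successful" in driver.page_source

driver.quit()
```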

The Black Hat

Heisenberg’s hat from Breaking Bad should come to mind: it’s all about breaking bad when you wear the Black Hat. Think of all the things that could break the application and test it erratically. Provide invalid data and see how the application responds; test on different browsers, resolutions, and view modes; test for SQL injection and other UI-based security hacks; test with old browsers and machines with smaller screen sizes. Bombard the application with every negative scenario you can think of. The idea of this phase is to expose the loopholes that prevent the user from achieving the desired functionality.
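For instance, a handful of Black Hat inputs for the registration example can be scripted as a parametrized test; the endpoint, field names, and expected status codes are assumptions for illustration.

```python
import pytest
import requests

REGISTER_URL = "https://example.com/api/register"  # placeholder endpoint

HOSTILE_INPUTS = [
    "' OR '1'='1",                  # classic SQL injection probe
    "<script>alert(1)</script>",    # script injection probe
    "A" * 10_000,                   # oversized payload
    "\u0000",                       # null byte
    "",                             # empty value
]

@pytest.mark.parametrize("payload", HOSTILE_INPUTS)
def test_registration_rejects_hostile_email(payload):
    resp = requests.post(REGISTER_URL, json={"email": payload, "password": "x"})
    # Expect a controlled client error, never a 5xx crash or a silent 200.
    assert 400 <= resp.status_code < 500
```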

The Green Hat

This hat empowers testers with the freedom to think out of the box and come up with creative scenarios, ideas, and solutions that may not have been attempted while wearing the other hats. Exploring opportunities to make the system more intuitive is another goal of the Green Hat. Think of unconventional ways of testing the product, because what you consider unconventional might be the usual way for some users. Navigating to a page directly by URL, hitting the browser Back button after logging in, submitting a form in different ways (by clicking a button or by pressing Enter), and checking that prompt feedback is shown when the user tries to do something that isn’t allowed are some examples of the scenarios to test here.
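Two of those scenarios, deep-linking by URL and hitting Back mid-form, can be sketched with Selenium as below; the URLs and field id are assumptions.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()

# 1. Navigate straight to a mid-flow page by URL instead of clicking through;
#    the application should redirect or cope gracefully.
driver.get("https://example.com/register/step-2")  # placeholder deep link
assert "error" not in driver.title.lower()

# 2. Enter data, wander off to another page, then hit the browser Back button;
#    observe whether the form state survives.
driver.get("https://example.com/register")  # placeholder URL
driver.find_element(By.ID, "email").send_keys("green@example.com")
driver.get("https://example.com/terms")
driver.back()
value = driver.find_element(By.ID, "email").get_attribute("value")
print("Email field after Back:", value or "<lost>")

driver.quit()
```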

Things to be remembered while implementing the Thinking Hat strategy

While implementing the Thinking Hat strategy, it is important to put thought into splitting the time for each of the hats. The time per hat will vary with the application, but the allotment should never favor one particular thinking strategy and ignore another. Take, for example, a registration functionality. If, in a total session time of 90 minutes, you allow an hour for the Black Hat and just 6 minutes for each of the other hats, you might get several findings from the negative scenarios, but chances are you will miss an obvious positive scenario that has an issue.

It is also important that testers keep each hat separate and do not overlap them. For example, while wearing the Yellow Hat, no thoughts about negative scenarios should creep in. This keeps the mind focused on the task at hand and prevents deviation. The idea is to test thoroughly from all angles, one at a time; if the hats overlap, chaos ensues and the whole process collapses.

Having a standard Thinking Hat planning template for exploratory testing ensures you do not miss anything. Below is a minimal sketch of such a template, seeded with a few registration scenarios for each hat.
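One lightweight way to keep such a template is a simple mapping from hat to checklist; the entries here condense the registration scenarios from the sections above into one line each.

```python
# Illustrative Thinking Hat planning template for a registration feature.
CHARTER_TEMPLATE = {
    "Blue":   ["Allocate time per hat; consolidate and prioritize findings"],
    "White":  ["Confirm documented field rules with the SMEs or BA",
               "Capture real data for data-driven tests"],
    "Red":    ["Scan for spelling, alignment, and tab-order annoyances"],
    "Yellow": ["Register end to end with fully valid data"],
    "Black":  ["Invalid inputs, injection probes, old browsers, small screens"],
    "Green":  ["Deep-link by URL, Back after login, submit via Enter key"],
}

for hat, scenarios in CHARTER_TEMPLATE.items():
    print(f"{hat} Hat:")
    for scenario in scenarios:
        print(f"  - {scenario}")
```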

Persona Based Testing Strategy

The Thinking Hat and persona-based testing strategies are close relatives. Both focus the tester’s thinking in a specific direction at any given point in time and then move on to the next. For illustration, we will use the same example of a typical registration functionality here too.

Matthew, the Manager

Matthew often multitasks when he works with the application. He wants the registration done as soon as possible to get into the system. He doesn’t look into the finer details or the negative scenarios.

  • Registers the quickest way possible.
  • Uses shortcuts like copy and paste, and Tab to navigate through fields.
  • Doesn’t fill in fields that aren’t mandatory.
  • Makes mistakes in the type of data each field accepts, and enters already-registered details again.
  • Expects fast responses.
  • Often leaves the form half-filled because he gets called away to another meeting, comes back later, and checks the state of the form.

Eric, the Eccentric

Eric is unusual in his approach. He does things that a normal user most often wouldn’t. He has time on his hands and is patient enough to see how his actions affect the application.

  • Enters invalid inputs, leaves mandatory fields empty, enters huge sets of characters, and checks how the form reacts.
  • Uses multiple tabs to process simultaneous registrations with the same credentials (see the sketch after this list).
  • Accesses the registration form from an unusual browser or device.
  • Interrupts the flow by disconnecting the internet connection, by refreshing, or by hitting the browser Back button after clicking the registration button with valid inputs.
  • Performs all the negative scenarios that come to his mind and evaluates how the system behaves.
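Eric’s “multiple tabs” behavior lends itself to a quick automation sketch: submit the same registration twice concurrently and check that only one attempt wins. The endpoint, payload, and success status code (assumed 201 here) are illustrative.

```python
import threading
import requests

REGISTER_URL = "https://example.com/api/register"  # placeholder endpoint
payload = {"email": "eric@example.com", "password": "x"}
results = []

def register():
    # Each thread plays the part of one browser tab submitting the form.
    results.append(requests.post(REGISTER_URL, json=payload).status_code)

threads = [threading.Thread(target=register) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# At most one attempt should succeed; two successes indicate a race condition.
assert results.count(201) <= 1, f"Duplicate registration succeeded: {results}"
```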

Cuthbert, the Curious

Cuthbert is similar to Eric in some ways because he too has time on his hands. But he doesn’t approach the application with the intent of breaking it. Instead, he is interested in learning more about each upgrade and dives deeper to see what it has to offer.

  • Works with the parts of the application that others rarely do.
  • Tries out different workflows and button/key actions to achieve the same result.
  • Tries out any new feature to its fullest extent.
  • Takes a different approach when entering values in the registration form (boundary values, decimals, negative values, and so on) to see how the application responds.
  • Registers with an email address that is already registered.

Carrey, the Careful

Carrey uses the application regularly and sticks to his routine workflow every time. He has the knack of noticing even the minute changes that have gone in with each upgrade.

  • Uses the most popular features of the application.
  • Is watchful and notices even a small change in the routine workflow he uses, be it functional or UI.
  • Tolerates slow response times.
  • Enters data in all the fields, be it optional or mandatory.
  • Doesn’t test negative scenarios; always enters valid data.

Sneaky Pete

Pete likes to break systems. He is an expert at unearthing security loopholes and will check whether the application has taken proper precautions to safeguard data against such hacks. A couple of his checks are sketched after the list.

  • Manipulates authorization by modifying URLs and tries to access pages that shouldn’t be viewable for a particular user role.
  • Uses SQL, LDAP, and JS tag injection to hack the system using the input fields.
  • Tries invalid values and makes the system show multiple errors at the same time.
  • Checks if passwords are encrypted during transmission.
  • Tries to hijack the session. Checks if inactivity timeouts are properly implemented.
  • Checks if the URL contains session id.
  • Tries to synthetically load the application server and observes its impact.
  • Checks to see if directory listing is enabled on the server.
  • Checks to see if the application reveals error handling information such as stack trace to the end-user.
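Two of those checks, session ids leaking into the URL and error pages revealing internals, can be probed with plain HTTP requests as in this sketch; the base URL and the error markers are assumptions.

```python
import requests

BASE_URL = "https://example.com"  # placeholder

session = requests.Session()
resp = session.get(f"{BASE_URL}/login")

# 1. Session identifiers belong in cookies, never in the URL.
assert "sessionid" not in resp.url.lower(), "Session id exposed in the URL"

# 2. Force an error and make sure internals are not leaked to the end user.
err = session.get(f"{BASE_URL}/nonexistent-page-to-trigger-error-handling")
for marker in ("Traceback", "java.lang.", "ORA-", "stack trace"):
    assert marker not in err.text, f"Error page leaks internals: {marker}"
```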

George, the Globetrotter

George travels around the world and uses the application for specific tasks.

  • Accesses the application outside the business hours of the region where it’s hosted.
  • Uses different network providers, very often has poor network connectivity.
  • Uses a variety of browsers, machines, keyboard layouts, and devices.
  • Accesses the application from different time zones.

Ebert, the Elder

Ebert is in his 80s and has had very little exposure to software applications. He is slow in his usage and doesn’t always know how to navigate to the next page, so he looks for visual and textual cues to help him.

  • Slow response times do not bother him.
  • Accesses applications from an old browser version or a browser that isn’t used by many.
  • Reads through each page looking for instructions or visual cues. If the application isn’t intuitive, he gets stuck.
  • His operating system and machine are outdated.
  • Very often clicks on the back button to refer to the information entered on the previous page.
  • Zooms in on the webpage if he cannot see things clearly.

Renee, the Teenager

Renee is a social media enthusiast. She is hooked on her mobile phone for most of the day and uses a variety of apps. She is very particular about the UI and usability of the apps she uses.

  • Has very little patience and gets frustrated if the app isn’t fast enough.
  • Switches back and forth between several different applications.
  • Receives several interruptions such as voice calls, video calls & message pop-ups while using the mobile app.
  • Expects the data entered during her last interaction with the mobile app to be intact when she returns from another app.
  • Expects uploaded images to be of great quality, expects uploads to be quick.
  • Switches to other competing apps or is quick to provide a scathing review if the UI or UX isn’t great with subsequent upgrades.

Conclusion

As you can see from the personas covered above, we have managed to account for every type of testing that the Thinking Hat strategy offers. Testing is, at its most elemental, a craft. It can be dovetailed with different techniques to produce the same desired result. But before you choose a technique, ensure that it fits the bill for your organization’s process, the application under test, and the size, experience, and mentality of your testing team. Speak with us to learn more about how Payoda can help you with this service.

Blog Authored by: Mohan Bharathi
