Does your Test Automation code smell? Here’s how you can launder it!

Payoda Technology Inc
9 min read · Jul 16, 2021

Beyond the measurable benefits of effort and cost reduction, automated testing serves a psychological purpose: it gives a sense of comfort and joy to the members of the project and to the stakeholders. The comfort of continuous testing keeps everyone relaxed because, with each line of code modified or new feature built in, the automated tests confirm whether or not there are impacts. If there are, you quickly analyze and fix them. If there aren't, you gain confidence and continue to build on the code. Automated tests provide a sense of joy because everything you want to test is being tested exactly as you want it to be, without any manual effort.

But many a time, your test automation code is a mess. It has reached a point where you no longer know where anything is. The framework and the tests are so complex and badly orchestrated that any change you make to the code leaves you anxious that it might break something else, somewhere else. And the worst part is that you are not really sure how it got to this point.

You start well but then everything goes haywire!

When you started writing the automation tests, everything seemed fine. What you didn't realize is that it was an illusion. The tests seem fine in the early stages because the code back then was minimal. But as the application logic expands, the number of test methods and the dependencies between them go up. That's when you realize that there are foundation-level mistakes in your automation source code and that your tests are strewn with code smells. A major reason could be that the initial framework template you followed wasn't scalable or maintainable. Or maybe, each time you had to change the existing code, you went for the quickest fix possible and added code that's redundant, that isn't applicable to all environments, or that impacts other tests in some way or another.

What are code smells?

A code smell is an implementation that violates the fundamental principles of framework design or of the coding language, and it causes increased development effort or flaky, unstable tests, because the tests aren't objective but subjective to the data, the environment, or the state of the application at the time the code was written. Code that smells might work, but it is bound to pull the rug from under your feet at some point. Smelly code solves fewer problems than it creates, and when that's the case, an increase in automation code development effort is inevitable.

Here are some of the most common code smells and how to remedy them. It's important that you watch for these right from the get-go.

1. Lengthy Classes

A typical automation test script launches the browser, opens the URL of the application under test (AUT), logs in, performs a set of actions on the application, verifies the state of the application after those actions, and then logs out. But what if all of these separate sets of actions lived in a single lengthy class? That's a bad code smell. These are different sets of actions, each with a unique responsibility, and they must be treated as such. When all the actions are clubbed into a single class, it is difficult even for the author to make sense of what the class contains, let alone a new test engineer coming in to assist or take over. Such a code smell affects readability and maintenance and can result in redundancy. So it's important that each set of actions is maintained in a class of its own, as in the sketch below. This classification makes it easier to locate each part of the code, identify redundancies, and most of all, make maintenance a whole lot easier.
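As a minimal sketch of that separation (the class names and element ids here are hypothetical, not from any specific application), each responsibility gets a small page class of its own:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// One responsibility per class: LoginPage owns only the login actions.
class LoginPage {
    private final WebDriver driver;

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    void loginAs(String user, String password) {
        driver.findElement(By.id("username")).sendKeys(user);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("login_button")).click();
    }
}

// Search, verification, and logout actions would each get a similar class,
// so a change to the login flow touches exactly one file.

A test then strings these classes together instead of carrying all of their logic itself.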

2. Lengthy Methods

Lengthy methods are twins of the lengthy classes code smell and hence have a similar odor. They throw up the same problems: lack of readability, proneness to redundancy, and difficulty of maintenance. When you start writing a test method, you do not intend for it to be so long or to cover so many responsibilities. But as time goes by and features get built in, the method grows, and you neglect to separate the concerns. Eventually you arrive at a point where you lose sight of what the method's responsibility was to begin with. The method's name and original intention might have been to test one particular thing, but as changes get included, it ends up testing a lot of other things as well. The remediation is the same as for lengthy classes: separate the concerns into individual methods, as sketched below, and call them whenever and wherever needed.
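As a sketch (assuming TestNG, with hypothetical step names), the long method shrinks into a readable composition of single-purpose steps:

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.testng.annotations.Test;

public class SearchTest {
    private WebDriver driver;

    // The test reads as a sequence of intentions, not a wall of code.
    @Test
    public void searchShowsMatchingResults() {
        login("testuser", "secret");
        searchFor("Mohan");
        verifyResultContains("Mohan");
        logout();
    }

    // Each step is small, named, and reusable by other tests.
    private void searchFor(String term) {
        driver.findElement(By.id("firstname_search")).sendKeys(term);
        driver.findElement(By.id("search_submit")).click();
    }

    private void login(String user, String pass) { /* login steps */ }
    private void verifyResultContains(String text) { /* verification steps */ }
    private void logout() { /* logout steps */ }
}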

3. Code Duplication

Code duplication is one of the most commonly spotted code smells, because you write duplicate code knowing that it's a duplicate, thinking that you can remember the other spots where the same code lives and, if changes are ever needed, easily change the code in all those places. But things don't turn out to be so easy. As the test suite proliferates and the duplicate code multiplies, it eventually becomes a nightmare when a code change comes in and you are tasked with hunting down all the places where the change needs to be made. Miss one spot, and that becomes a bug in your test code. This unnecessarily increases development effort and is also capable of introducing false negatives. To resolve this code smell, classify all of the common code into methods of their own, as shown below. That way, the code exists in a single method with an appropriate name and can be located easily when changes are needed. A change in this method will reflect in all the classes that call it.
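A minimal sketch of that extraction (the navigation steps and names are hypothetical): the once-duplicated code lives in exactly one helper that every test calls.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

class TestBase {
    protected WebDriver driver;

    // Previously copy-pasted into several tests; now a single method.
    protected void openOrdersReport() {
        driver.findElement(By.id("menu_reports")).click();
        driver.findElement(By.id("report_orders")).click();
    }
}

class OrdersReportTest extends TestBase {
    void verifyTotalsColumn() {
        openOrdersReport(); // reused, not re-written
        // ... report-specific checks ...
    }
}

If the navigation flow ever changes, the fix is made once in openOrdersReport() and every caller picks it up.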

4. Unstable Locator Strategies

A crucial part of your automation code is the locators you feed to your tool to find and interact with the elements on the page. Having unstable or flaky locators is one of the worst code smells and has a serious impact on development effort, because the locators need to be modified every time the page structure changes. Locators that aren't durable, such as the absolute XPath shown below, are fragile and require a relook every time a change is made to the page.

/html/body/app-root/div/div/div/section/label/app-search-form/form/span[1]

Instead, use locators that are relative and depend on unique attribute values such as the id, class, type, or value assigned to the element you want to locate. You can also go for the ever-reliable CSS selector. A relative XPath would look like the one shown below:

//*[@id='firstname_search']

With the above locator strategy, unless there is a direct change to this specific element, the element remains identifiable with the same locator irrespective of whatever other changes have gone into the page structure.
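A short sketch of the difference in Selenium Java (the attribute values here are hypothetical):

import org.openqa.selenium.By;

// Stable choices: anchored to unique, meaningful attributes.
By byId = By.id("firstname_search");
By byCss = By.cssSelector("input[name='firstname']");
By byRelativeXPath = By.xpath("//*[@id='firstname_search']");

// Fragile choice: an absolute path that breaks when any ancestor changes.
// By byAbsolute = By.xpath("/html/body/app-root/div/div/div/section/label/app-search-form/form/span[1]");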

5. Bad Exposure

Classes, methods, and data members shouldn't be exposed to other classes unless there is a definite reason to do so. If all the classes in the internal framework code are visible to the classes in the test layer, that's a code smell. When class internals, such as the web elements used in a page object class or the methods that return web elements, are exposed to all, the test code is enabled to access those DOM elements directly, which isn't its concern at all. The remedy is to apply appropriate access modifiers: keep the web elements in your page classes, your base class internals, and the methods returning web elements private or protected.
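A minimal sketch of that encapsulation (the page and element names are hypothetical): the raw element stays private, and public behavior is the only surface the tests see.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

class ProfilePage {
    private final WebDriver driver;

    ProfilePage(WebDriver driver) {
        this.driver = driver;
    }

    // private: the test layer can never reach the raw DOM element.
    private WebElement saveButton() {
        return driver.findElement(By.id("profile_save"));
    }

    // public behavior: all the test layer needs to know about.
    public void saveChanges() {
        saveButton().click();
    }
}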

6. Inefficient Wait Strategy

Automation test code acts on the application several times faster than we humans do. When we click a button in the application, we wait for the application to load, respond, and show us the next page. But when the automation code clicks the same button, the next line of code is ready to execute before the application can show the next page. This is where we need waits, which instruct the code to pause execution until the desired application response is attained. However, adding a wait that asks the script to pause for a fixed period of time (x seconds) before each action is a code smell. Different environments behave in different ways. Your local environment might be slower, so you add a wait of 10 seconds. But in a staging environment whose performance is close to production, the application responds much faster, and waiting the same 10 seconds there is unnecessary. A collection of such hard-coded waits tremendously increases suite execution time, because you end up using the longest wait required by the slowest environment everywhere.

Thread.sleep(30000); // hardcoded wait of 30 seconds (Thread.sleep takes milliseconds)

This hardcoded wait mandates the script to pause for 30 seconds. It is susceptible to waiting longer than it has to in environments where the application performs quicker, and it fails the test script in environments where the application performs slower.

Using conditional waits is the remedy for this code smell. With a WebDriverWait reference used in association with the ExpectedConditions class, we can instruct the script to wait until a specific condition is satisfied, or until the stipulated period of time elapses and the test fails, whichever happens earlier.

WebDriverWait explicitWait = new WebDriverWait(driver, 30);
explicitWait.until(ExpectedConditions.visibilityOfElementLocated(By.xpath("xpath of the element1 to be included here")));
driver.findElement(By.xpath("xpath of the element1 to be included here")).click();

Unlike the hardcoded wait (Thread.sleep), the explicit wait described above doesn't force the script to wait the full 30 seconds. If the condition is satisfied within, say, 5 seconds of the previous action, then the next line of code that clicks on the element is executed right away. With this technique, we can be sure that the script waits only for the time necessary for the element to become actionable, nothing more, nothing less. Following are some of the conditions that the ExpectedConditions class provides:

  • alertIsPresent()
  • elementToBeClickable()
  • elementSelectionStateToBe()
  • elementToBeSelected()
  • frameToBeAvailableAndSwitchToIt()
  • invisibilityOfElementLocated()
  • presenceOfElementLocated()
  • invisibilityOfElementWithText()
  • presenceOfAllElementsLocatedBy()
  • textToBePresentInElement()
  • textToBePresentInElementLocated()
  • textToBePresentInElementValue()
  • titleIs()
  • titleContains()
  • visibilityOf()
  • visibilityOfAllElements()
  • visibilityOfElementLocated()
  • visibilityOfAllElementsLocatedBy()

Another alternative to the hardcoded wait is the fluent wait in Selenium, where we can define the maximum time to wait for an element along with the frequency at which the presence of the web element is checked. FluentWait implements the Wait interface, and the syntax for Selenium v3.11 and above is shown below.

Wait<WebDriver> wait = new FluentWait<WebDriver>(driver)
        .withTimeout(Duration.ofSeconds(30))
        .pollingEvery(Duration.ofSeconds(5))
        .ignoring(NoSuchElementException.class);

WebElement ele = wait.until(new Function<WebDriver, WebElement>() {
    public WebElement apply(WebDriver driver) {
        return driver.findElement(By.id("ele"));
    }
});

The above code polls every 5 seconds to check whether the element with the id "ele" is available, ignoring any NoSuchElementException thrown in between. It repeats the process for up to 30 seconds before timing out and throwing a TimeoutException.

7. Inappropriate Points of Failure

We spoke about how the test code should be separated out and should not be concerned with the details of how the framework code performs actions on the browser. In the same way, it is important that the framework code isn't loaded with the responsibility of failing a test. Placing assertions in your framework code and giving it the power to fail your test is a code smell: it reduces the reusability of your framework code across both positive and negative scenarios. To remedy this code smell, remove all assertions from your framework code. Use the framework code to evaluate state or perform an action, but always make the judgment call of pass or fail from within the test code.
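A minimal sketch of that split, assuming TestNG assertions (the method names are hypothetical): the framework reports state, and the test makes the judgment call.

// Framework code (in a page class): evaluates state, never asserts.
public boolean isErrorBannerShown() {
    return !driver.findElements(By.cssSelector(".error-banner")).isEmpty();
}

// Test code: the pass/fail decision lives here, so the same framework
// method serves both positive and negative scenarios.
@Test
public void invalidLoginShowsError() {
    loginPage.loginAs("bad_user", "bad_pass");
    Assert.assertTrue(loginPage.isErrorBannerShown(), "Expected an error banner after invalid login");
}

@Test
public void validLoginShowsNoError() {
    loginPage.loginAs("good_user", "good_pass");
    Assert.assertFalse(loginPage.isErrorBannerShown(), "No error banner expected after valid login");
}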

8. Hardcoded Data

It is not recommended to have the data that's passed to your application under test hardcoded directly into the test code. That's a definite code smell, because different environments work with different sets of data, and if all your data lives inside the test code, you would have to hunt down each and every place to change it when you are tasked with executing your scripts against a different database or environment. Keep the data in a separate datasheet or properties file, as sketched below, so that you can swap the entire file according to the environment in which the test suite needs to be executed.
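A minimal sketch, assuming one Java properties file per environment (the file and key names are hypothetical):

# staging.properties
base.url=https://staging.example.com
search.term=Mohan

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

public class TestData {
    private static final Properties props = new Properties();

    static {
        // Pick the file via a system property, e.g. -Denv=staging
        String env = System.getProperty("env", "staging");
        try (FileInputStream in = new FileInputStream(env + ".properties")) {
            props.load(in);
        } catch (IOException e) {
            throw new RuntimeException("Could not load test data for: " + env, e);
        }
    }

    public static String get(String key) {
        return props.getProperty(key);
    }
}

A test then reads TestData.get("base.url") instead of a hardcoded value, and switching environments means swapping a file, not editing tests.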

Smelly test code dampens the joy of automation and introduces doubt by keeping your test suite in a precarious state. Spend some time tidying up all your code smells and get rid of them once and for all, so that you gain confidence in the stability of your test code every time you change it, irrespective of the scale of the change. Refresh your test automation code, rejuvenate it, and experience the joy of automation.

Author: Mohan Bharathi Srinivasan

