At Mercury, we do a lot of work with startups, where functionality constantly changes and code is rewritten from sprint to sprint. If you write an automated test and the team then decides to change the software, your test may quickly become obsolete. Instead of saving time and money, as expected, you end up spending additional effort constantly fixing your tests.

Ever-developing products

Our team had the same experience with our own product. Initially we thought that manual testing of a large project took too long and that automation was needed. But then we came to realize that our QA team was struggling to keep up with a product that changed from sprint to sprint, that is, literally every two weeks.

At one point we were migrating an application from Xamarin to native code, and then a part of that native code was rewritten in React Native. The team spent many hours rewriting everything for the new interface.

As a result, we had to give up on all UI tests: they take too much time to maintain, they always lag behind the rest of the project, critical issues slip through too often, and the tests themselves crash because the tooling is far from perfect.

Automation that we use

From our experience, we have identified five types of automation that work for us on quickly changing projects.

API tests

We test client/server interaction. As a rule, the protocol is stable, and changes on the front end or the back end do not affect it. We simulate client requests and, in rare cases, server responses.

For client requests, we usually use Postman, along with several in-house solutions built on reliable libraries and frameworks. We use Python in combination with Requests, or Ruby with REST Client.
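As a rough illustration, here is the shape of such a check written with Python and Requests; the endpoint and response fields are hypothetical:

    import requests

    BASE_URL = "https://api.example.com"  # hypothetical API under test

    def test_get_user_profile():
        """Simulate a client request and verify the contract, not the UI."""
        response = requests.get(f"{BASE_URL}/users/42", timeout=10)

        # The protocol is stable, so these assertions rarely need rewriting.
        assert response.status_code == 200
        body = response.json()
        assert "id" in body and "email" in body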

To simulate server responses, we usually use intercepting proxies such as Fiddler, Burp Suite, or Charles Proxy. We run our API tests automatically several times a day to check that production is working correctly.
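The same idea can also live inside the test code itself: instead of a proxy, a stubbing library fakes the server reply in-process. A minimal sketch, assuming the Python responses library and a made-up endpoint:

    import requests
    import responses

    @responses.activate
    def test_client_survives_server_error():
        # Register a canned reply; no real server or proxy is involved.
        responses.add(
            responses.GET,
            "https://api.example.com/users/42",  # hypothetical endpoint
            json={"error": "internal"},
            status=500,
        )

        response = requests.get("https://api.example.com/users/42")
        assert response.status_code == 500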

Testing critical functionality

Any new build carries a high risk of errors. Here we use automated smoke testing to quickly identify and fix bugs at an early stage, without wasting valuable time on software that we know is unstable.
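One lightweight way to keep such checks separate from the rest of the suite is a dedicated test marker; here is a sketch using pytest, with made-up endpoints:

    import pytest
    import requests

    BASE_URL = "https://app.example.com"  # hypothetical build under test

    @pytest.mark.smoke  # register the "smoke" marker in pytest.ini
    def test_service_is_up():
        # The bare minimum: the new build responds at all.
        assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200

    @pytest.mark.smoke
    def test_login_is_reachable():
        # A critical path exists, even if we don't assert on its content yet.
        response = requests.post(f"{BASE_URL}/login", json={}, timeout=5)
        assert response.status_code != 404

Running pytest -m smoke then gives a quick pass/fail verdict on a fresh build before any deeper testing starts.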

Load testing

We are currently experimenting with automated load tests. It is still too early to draw conclusions, but this could potentially help developers detect performance degradation as early as the code-writing stage, and save QA engineers the time they spend on regression load testing.
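As one possible shape for such an experiment, here is a minimal scenario for Locust, a popular Python load-testing tool; the endpoints and task weights are placeholders:

    from locust import HttpUser, task, between

    class ApiUser(HttpUser):
        # Each simulated user pauses 1-3 seconds between requests.
        wait_time = between(1, 3)

        @task(3)
        def list_items(self):
            self.client.get("/items")  # hypothetical read-heavy endpoint

        @task(1)
        def create_item(self):
            self.client.post("/items", json={"name": "load-test"})

Running locust -f loadtest.py --host https://api.example.com and comparing response-time percentiles between builds is one way to catch degradation before it reaches production.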

End-to-end test scripts for stable functionality

Even in ever-changing projects, there eventually comes a time when some of the application's functionality becomes relatively stable. When that happens, we cover it with automated tests, and they really do save us a lot of time.
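For example, once a login flow has not changed for months, even a short script pays for itself; a hypothetical check with Selenium might look like this (the URL and selectors are illustrative):

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_login_flow():
        driver = webdriver.Chrome()
        try:
            driver.get("https://app.example.com/login")  # hypothetical page
            driver.find_element(By.NAME, "email").send_keys("qa@example.com")
            driver.find_element(By.NAME, "password").send_keys("secret")
            driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()

            # Stable functionality means this selector hasn't changed in months.
            assert driver.find_element(By.ID, "dashboard").is_displayed()
        finally:
            driver.quit()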

Unit testing

Developers review functional test cases and check the methods inside the code. This is the most rigorous approach, but the temptation is always to keep writing functional code instead of writing a test for each piece of it. We can rarely allocate enough development hours to implement unit tests, which is why we don't use them as often as we would like.
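The idea itself is simple; a made-up example of the kind of method-level check we mean:

    import pytest

    # Hypothetical production code under test.
    def apply_discount(price: float, percent: float) -> float:
        """Return the price after applying a percentage discount."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return round(price * (1 - percent / 100), 2)

    def test_apply_discount():
        assert apply_discount(100.0, 25) == 75.0

    def test_apply_discount_rejects_bad_percent():
        with pytest.raises(ValueError):
            apply_discount(100.0, 150)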

In all other cases we usually fall back on manual testing: QA engineers think through a case, test it, and we get results right away.

When automation makes sense

In the case of a startup, automated testing usually works as follows:

1.   First, we come up with a test case

2.   Then we write the test

3.   Next, we fix the test

4.   Back to step 1.

If a project changes quickly, tests become obsolete even quicker. Often, maintaining them simply does not justify the effort required from the team. The process eats up too much valuable time, so every time we ask ourselves: wouldn't it be easier to test this manually?

Automated testing is a must for an enterprise product: a complex product with many repetitive actions, where finding bugs manually is impossible. Such companies usually budget extra time for process automation; their product functionality is also more stable, so tests do not need to be rewritten every fortnight.

In our case, every project is different, so before deciding whether automation makes sense, we ask ourselves whether it has more pros than cons for at least some part of the functionality. If the answer is yes, we automate.

In reality, automated testing is still a developing area, and available tools are limited:

  • not all languages can be used for automation
  • the core of a test framework often has to be built and patched by hand
  • tools for analyzing results are far from perfect.

Nevertheless, as we said at the beginning, the future belongs to automation. Even now, QA engineers without advanced automation skills prepare test scenarios, while test automation engineers and developers build the automation core.

The industry is developing rapidly, and in about five years most manual UI testing will most likely be replaced by simple human-readable scenarios. That is what we hope to see happen.

To sum it up

Using automated testing on any project requires a combination of several things: a well-chosen testing strategy, comprehensive analysis, and a flexible approach. Here at Mercury, we have worked on many projects, but we have yet to come up with a universal “rule”.

However, we have identified several scenarios where automation most often pays off, even with frequently changing products:

  • we test client/server interaction
  • we do smoke testing before releasing a new version
  • we try to automate load tests
  • we test areas with stable functionality
  • we like unit tests, though we rarely get to write as many as we would like.

In writing this article, we wanted to share our experience and to take a shot at one of the most difficult questions in software testing. And even though we have not reached a definitive conclusion, we hope that our findings will help our team come up with a smart approach to automation in ever-developing products.