Editor’s note: If you experience software quality issues or struggle with release deadlines in your Agile project, read on to get some valuable ideas on making your test process a better fit for Agile. If you still have trouble adjusting your test process to the Agile specifics, consider leveraging our experience in software testing – ScienceSoft's QA team will do it for you.
Drawing on 34 years of software testing practice, ScienceSoft explains the specifics of software testing in Agile and lists the best practices our QA team employs in Agile projects to let our customers enjoy quality software released on time.
The Agile testing process specifics
The key distinction of software testing in Agile stems from the difference between the QA process in linear and Agile software development models.
In projects managed according to linear methodologies, such as Waterfall, the stages of the software development life cycle (SDLC) are organized sequentially, and each stage is executed once along the course of the project. Testing begins only after the development phase is finished, and no new features are introduced during test execution. Therefore, in Waterfall projects, regression appears only as a result of bug fixes.
In projects managed according to any of the Agile methodologies, the SDLC stages are considerably shorter and executed multiple times throughout the project. With each new iteration, new functionality is introduced, which sharply increases the chance of regression.
Thus, the Waterfall and Agile testing processes differ in the volume of regression testing. And the key to optimizing testing in Agile is optimizing regression testing.
The method of optimizing regression testing
To reduce the time it takes to perform regression testing while making sure that critical software functionality is covered, we prioritize regression testing activities with software risks in mind.
For that, we analyze the complexity, business priority, and usage frequency of different software modules, and identify the likelihood and impact of risks each module is prone to.
Based on the obtained information, we prioritize the software modules and single out the high-risk ones. We group the test cases covering the high-risk modules into a separate test suite – a partial regression test suite – and run it at the end of each iteration.
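To illustrate, the risk-based selection described above can be sketched in a few lines of code. Note that the module names, the likelihood/impact scores, and the risk threshold below are hypothetical examples, not data from a real project:

```python
# Sketch of risk-based prioritization: score each module by
# likelihood x impact, then keep only high-risk modules for the
# partial regression test suite. All values here are hypothetical.

# (module, likelihood 1-5, impact 1-5) -- in practice derived from
# complexity, business priority, and usage frequency
modules = [
    ("payments",      4, 5),
    ("user-profile",  2, 2),
    ("reporting",     3, 4),
    ("notifications", 1, 2),
]

RISK_THRESHOLD = 10  # hypothetical cut-off for "high risk"

def risk_score(likelihood: int, impact: int) -> int:
    """Simple multiplicative risk score."""
    return likelihood * impact

def partial_regression_suite(modules, threshold=RISK_THRESHOLD):
    """Return (module, score) pairs above the threshold, highest risk first."""
    scored = [(name, risk_score(lk, im)) for name, lk, im in modules]
    high_risk = [entry for entry in scored if entry[1] >= threshold]
    return sorted(high_risk, key=lambda entry: entry[1], reverse=True)

if __name__ == "__main__":
    for name, score in partial_regression_suite(modules):
        print(f"{name}: risk {score}")
```

With the sample data above, only the "payments" and "reporting" modules clear the threshold, so only their test cases would enter the partial regression suite.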
In addition to executing a partial regression test suite at the end of each iteration, we run a full regression test suite before major releases or on a previously set schedule. The full regression test suite comprises all the test cases created throughout the project.
Unlike a test process with no risk-based prioritization, where a test team runs a full regression test suite at the end of each sprint, the risk-based approach allows us to ensure the quality of high-risk software modules with fewer test cases, which take considerably less time to execute.
Still, this testing approach has an important downside, as it leaves coverage gaps. Luckily, there are a number of methods you can employ to fill them in.
How to close test coverage gaps?
- Introducing test automation at the unit level
To maintain the coverage of low-risk software modules not included in the partial regression test suite, we have the development team validate such modules at the unit level. Unit tests do not take much time to write and take milliseconds to execute. Moreover, with a continuous integration pipeline in place, the execution of unit tests can be triggered automatically upon each new code commit. To make sure that unit tests are trustworthy and reflect the recent changes introduced to the application, we include unit tests in the general application code review practice.
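For example, a unit-level check for a low-risk module might look like the sketch below. The helper under test, `format_patient_id`, is a hypothetical function invented for illustration; a test runner such as pytest would pick up the `test_*` functions automatically on each CI run:

```python
# Unit-level checks for a hypothetical low-risk helper.
# In a CI pipeline, these run automatically on every code commit.

def format_patient_id(raw: str) -> str:
    """Normalize a patient ID: strip whitespace, uppercase, zero-pad the number."""
    prefix, _, number = raw.strip().upper().partition("-")
    return f"{prefix}-{int(number):06d}"

def test_pads_short_numbers():
    assert format_patient_id("hix-42") == "HIX-000042"

def test_strips_whitespace():
    assert format_patient_id("  hix-123456 ") == "HIX-123456"
```

Because such tests exercise one function in isolation, they stay fast enough to run on every commit without slowing the pipeline down.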
- Extending test automation to the API and UI levels
Test automation can help you fill in coverage gaps without delaying releases. In the context of a given sprint, test automation can reduce the time it takes to perform regression testing from days to hours. For that, we dedicate time to designing and developing test scripts, setting up a corresponding test environment, and maintaining both the environment and the test script library. Before opting for test automation, we make sure the required effort is lower than the effort of providing the same coverage by executing the regression test suite manually.
To optimize regression testing even further, we do not wait for the application’s UI to be ready and start running automated tests at the application’s API level. Doing so helps us keep our customers’ development teams informed of software defects early in the development cycle.
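An API-level check of this kind can be sketched as follows. To keep the example self-contained, a local stub server stands in for the real application's API; the `/api/v1/health` endpoint and its response format are hypothetical:

```python
# Sketch of an automated API-level test. A local stub server replaces
# the real application API so the example is runnable as-is; the
# endpoint and payload are hypothetical.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class StubApi(BaseHTTPRequestHandler):
    """Stands in for the application under test."""
    def do_GET(self):
        if self.path == "/api/v1/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # silence per-request logging

def check_health(base_url: str) -> dict:
    """The kind of assertion an automated API-level test would make."""
    with urlopen(f"{base_url}/api/v1/health") as resp:
        assert resp.status == 200
        return json.loads(resp.read())

if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), StubApi)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    base = f"http://127.0.0.1:{server.server_port}"
    assert check_health(base) == {"status": "ok"}
    server.shutdown()
    print("API health check passed")
```

Because such tests run against the API rather than the UI, they can start as soon as the endpoints exist, which is what makes early defect reporting possible.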
An example from our practice
With the above practices applied to our projects, we help our customers deliver reliable software in quick iterations. For instance, when testing a health information exchange application being developed according to Scrum, ScienceSoft’s test team assessed the risk level of the application modules and, together with the customer’s development team, prioritized the testing activities accordingly. We also employed automated API testing to check the interoperability of different software modules. Taking a risk-based approach to test prioritization and employing test automation allowed the customer to release new application builds every 2 to 4 weeks, and a month after the release, no critical bugs had been found in production.
Enjoy quality software and accelerated releases!
Setting up and executing Agile testing activities in an optimal way calls for substantial QA expertise. If you doubt you can pull off software testing in your Agile project on your own, we are ready to assist and take over the execution of your testing activities. To save management effort on your side, we can take care of your testing on a managed-services basis.