Editor’s note: If you struggle to ensure high software quality or meet release deadlines, chances are things went wrong at the very beginning. A poorly designed QA strategy can hamper software quality and limit the productivity of a test team. ScienceSoft’s QA consultant Victor Sachuk summarizes the cornerstones of our approach to designing a QA strategy and explains how to make a QA strategy a springboard to testing success. Read on and don’t hesitate to turn to our QA consulting services for in-depth recommendations.
At ScienceSoft, we use a QA strategy as a blueprint that guides all software testing activities within a project, and I believe that a structured QA strategy is one of the key factors influencing project success. For instance, in one of ScienceSoft’s projects, a successfully designed QA strategy helped us achieve a 25% reduction in testing time, which let our customer release quality software and meet a strict deadline.
Below, I summarize the essence of the approach we follow when designing QA strategies for customers who outsource QA to ScienceSoft.
6 pillars of ScienceSoft’s approach to QA strategy design
The key idea behind our approach is tailoring a QA strategy to the peculiarities of a specific application, project, and organization. To realize this approach, ScienceSoft’s QA team performs the following activities:
A number of factors shape a QA strategy, and before getting down to planning as such, we dedicate effort to analyzing them:
Applications used in different domains require different depths of and approaches to testing. When analyzing the specifics of the domain, we at ScienceSoft primarily consider the peculiarities of the user flow in the domain, the strictness of the domain’s requirements for software quality, and the need for the app to comply with domain-specific regulations.
For instance, when testing hospital applications, which usually require complete test coverage, have complex user flows, and need to comply with HIPAA, we ensure that all application requirements are covered with test cases, perform in-depth testing of high-risk areas, and check the application for compliance with relevant HIPAA safeguards.
Organizational and project specifics
When designing a QA strategy, ScienceSoft’s QA consultants make sure that a QA team does not face roadblocks when following it. For that, we analyze the processes adopted within a customer’s organization (e.g., project initiation and management) and project (e.g., knowledge transfer and risk management), and make sure to seamlessly weave the QA strategy into these processes.
Release plan specifics
As my colleague Andrei Mikhailau, Software Testing Director at ScienceSoft, described in his guide to the QA process, testing in Agile has certain specifics compared to testing in linear methodologies, and we reflect these specifics in a QA strategy. Contrary to the belief that Agile projects do not need a testing strategy because they value working software over documentation, in our Agile projects we do not eliminate the testing strategy but adapt it to Agile specifics, for instance, by prioritizing software’s functional modules based on risk and combining partial and full regression testing.
We analyze functional and non-functional requirements for software and other newly created and already available design documentation to define the scope and levels of testing, perform test prioritization, and come up with an optimal test team structure.
Clearly defined acceptance criteria help us understand exactly what behavior a particular feature must demonstrate and ensure that we share a common understanding of the requirements with our customers. When clarifying the acceptance criteria, I make sure that each criterion is testable and has clear pass and fail scenarios.
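To illustrate what "testable with clear pass and fail scenarios" means in practice, here is a minimal sketch. The feature, function name, and 24-hour expiry rule are all hypothetical, chosen only to show how a criterion can be expressed as a check with an explicit pass case and an explicit fail case:

```python
# Hypothetical acceptance criterion for a "password reset" feature:
# "A password-reset link expires 24 hours after it is issued."
def is_reset_link_valid(issued_at_hours_ago: float) -> bool:
    """Return True while the link is still within its 24-hour validity window."""
    return issued_at_hours_ago < 24

# Pass scenario: a link used 1 hour after issue is accepted.
assert is_reset_link_valid(1) is True
# Fail scenario: a link used 25 hours after issue is rejected.
assert is_reset_link_valid(25) is False
```

A criterion written this way leaves no room for interpretation: either the behavior matches the rule, or the test fails.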
At the beginning of cooperation, a customer and ScienceSoft’s test team agree on the conditions to be met for us to start testing activities. For instance, we establish a deadline for the customer’s development team to implement certain functionality. For the sake of transparency and reliability of the delivered services, we also agree on the test deliverables to be issued for the customer throughout the project.
We single out test groups and define the approach to test prioritization and test records maintenance.
To facilitate the creation and maintenance of test design documentation and foster test prioritization, we divide software functionality into areas based on their functional similarity. For instance, testing ecommerce platforms for our customers, we single out such functional areas as sign in/sign up, website search, user account management, product catalogue management, and others.
Defects in some software modules can pose risks to users or block an application release, while in other modules defects barely influence software quality, and test cases covering them can be considered cosmetic. In ScienceSoft’s software testing projects, we prioritize the execution of test cases covering high-risk modules and state this in the QA strategy.
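Risk-based prioritization of this kind can be sketched as a simple ordering exercise. The test case names, the 1-to-5 scales, and the impact-times-likelihood heuristic below are illustrative assumptions, not ScienceSoft's internal scoring model:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    failure_impact: int      # 1 (cosmetic) .. 5 (harms users / blocks release)
    failure_likelihood: int  # 1 (stable module) .. 5 (frequently changed module)

    @property
    def risk(self) -> int:
        # A common risk heuristic: impact multiplied by likelihood.
        return self.failure_impact * self.failure_likelihood

suite = [
    TestCase("checkout_payment", failure_impact=5, failure_likelihood=4),
    TestCase("footer_link_color", failure_impact=1, failure_likelihood=1),
    TestCase("user_login", failure_impact=5, failure_likelihood=2),
]

# Execute high-risk cases first; cosmetic ones can wait.
ordered = sorted(suite, key=lambda tc: tc.risk, reverse=True)
print([tc.name for tc in ordered])
# -> ['checkout_payment', 'user_login', 'footer_link_color']
```

The point is not the particular formula but that the execution order is derived from explicit, documented risk scores rather than from each tester's intuition.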
Test record maintenance
When executing test cases, ScienceSoft’s test teams keep track of the execution results in the form of test records. We make sure that the test records contain information about who executed a particular test case, when it was executed, how long the execution took, and what its result was. We define the information to be included in the test records, and the directory where they will be stored, in the test strategy.
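A test record of the kind described above boils down to a small, fixed set of fields. The field names and status values below are a hypothetical sketch of such a record, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TestRecord:
    test_case_id: str      # which test case was executed
    executed_by: str       # who executed it
    started_at: datetime   # when execution started
    duration: timedelta    # how long it took
    result: str            # e.g., "passed", "failed", "blocked"

# Example record for one execution of a hypothetical test case.
record = TestRecord(
    test_case_id="TC-0042",
    executed_by="j.doe",
    started_at=datetime(2024, 5, 3, 10, 15),
    duration=timedelta(minutes=7),
    result="passed",
)
```

Fixing the fields up front, in the strategy, is what makes the records comparable across testers and usable for later reporting.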
ScienceSoft’s QA consultants define the test levels at which testing will be performed, as well as the approaches to regression testing and to test status collection and reporting.
To ensure optimal test coverage, we identify the share of testing to be performed at unit, integration, and system levels. To prevent defects from entering later project stages, we prefer taking a larger share of testing activities closer to the beginning of the delivery pipeline, encouraging development teams to perform unit testing, and taking over at the integration and system levels.
Approach to regression testing
At ScienceSoft, we treat regression testing as an important part of the testing scope, as it helps ensure that defect fixes neither break nor alter existing functionality. In reality, however, a test team is often pressed for time and has to search for ways to optimize regression testing. In ScienceSoft’s projects, for instance, we may opt for test automation to speed up the testing process or choose to combine partial and full regression testing based on risk.
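The decision between partial and full regression can itself be made risk-driven. The following is a minimal sketch under assumed inputs: the module names, the 1-to-5 risk scores, and the threshold value are all hypothetical, and a real policy would likely weigh more factors:

```python
# Risk score (1..5) at or above which a change triggers the full regression suite.
FULL_REGRESSION_THRESHOLD = 4

def select_regression_scope(changed_modules: dict[str, int]) -> str:
    """Map {module name: risk score} for the latest change set to a scope.

    Returns "full" if any touched module is high-risk, else "partial".
    """
    if any(risk >= FULL_REGRESSION_THRESHOLD for risk in changed_modules.values()):
        return "full"
    return "partial"

# A change touching the high-risk payments module triggers full regression.
assert select_regression_scope({"payments": 5, "footer": 1}) == "full"
# A change touching only low-risk modules gets partial regression.
assert select_regression_scope({"footer": 1, "about_page": 2}) == "partial"
```

Encoding the rule this explicitly keeps the regression scope decision consistent between sprints instead of being renegotiated under deadline pressure.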
Test status collection and reporting
In ScienceSoft’s projects, we collect input from individual testers about the status of software testing activities in a project and provide it to the customer according to an agreed schedule. We state in the test strategy how often test result reports will be provided to the customer.
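Aggregating per-tester input into one customer-facing status line can be as simple as counting results. The tester names, statuses, and report wording below are purely illustrative assumptions:

```python
from collections import Counter

# Hypothetical per-tester execution results collected for one reporting period.
tester_results = [
    {"tester": "a.smith", "statuses": ["passed", "passed", "failed"]},
    {"tester": "j.doe", "statuses": ["passed", "blocked"]},
]

# Merge all individual results into one project-level tally.
summary = Counter()
for entry in tester_results:
    summary.update(entry["statuses"])

total = sum(summary.values())
print(f"{summary['passed']}/{total} passed, "
      f"{summary['failed']} failed, {summary['blocked']} blocked")
# -> 3/5 passed, 1 failed, 1 blocked
```

However the aggregation is done, the strategy should fix the reporting cadence and format so that every report the customer receives is comparable with the previous one.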
We identify the environment(s) needed for testing, making sure that each environment, or their combination, is cost-effective to maintain and provides accurate test results. I consider it optimal to use a development environment for integration testing and verification of fixed bugs, a staging environment for system testing, and a production environment for acceptance testing.
At this point, I also identify the roles and responsibilities of the QA team members to be involved in a project.
The approach put in practice
Based on my experience in one of ScienceSoft’s projects, I’ll show how the elements of the described approach work in practice. The customer, a provider of security audit and compliance services, commissioned us to develop and implement a testing strategy for test process optimization. The customer had to comply with a tight project schedule and needed to optimize collaboration among their several QA teams.
To optimize a customer’s QA strategy, I suggested:
- Splitting the testing scope into modules and assigning a dedicated test team to each. This helped the customer avoid communication inefficiencies among their several QA teams.
- Performing software risk analysis and prioritizing testing of high-risk software modules, which had to be tested earlier in the development cycle by test engineers with deeper domain expertise.
- Arranging a dedicated test environment and setting up a test data library, which improved testing reliability.
- Automating regression testing based on the manual test cases that consistently detected bugs, which significantly increased testing velocity.
As a result, our customer reduced testing time by 25%, improved communication among their several QA teams, and released a high-quality application within the tight deadline.
A neat QA strategy for outstanding quality outcomes
I am convinced that a QA strategy is necessary to build a structured QA process, and I advise you to plan upcoming QA activities in advance. If you want to make sure that your QA strategy is feasible or need help in designing one, my colleagues at ScienceSoft and I will be glad to help, just let me know.