Editor’s note: For quality assurance not to impede your software development project and for defects not to leak into production, ScienceSoft recommends planning it out carefully, starting as early as the software requirements gathering stage. Backed by his 20 years in QA consulting and software testing, in this article, Andrei guides you through the typical stages of a robust QA process, explains how to set it up, and shows how to adjust it to Agile. If you need expert help with tailoring QA to your software or project specifics, consider ScienceSoft’s QA consulting services.
Process-oriented and focused on preventing software defects, software quality assurance (QA) reaches beyond mere bug detection. Though the QA process includes software testing, or quality control, as one of its elements, quality assurance differs from quality control in terms of character, methods, and the SDLC stage at which the activity starts.
Software quality assurance should start as early as the requirements gathering stage of the software development life cycle and comprise the following activities:
The cost of fixing a defect found during testing can be up to 15 times higher than the cost of preventing it at the requirements stage. To keep defects from reaching that point, QA professionals take part in the analysis and clarification of functional and non-functional software requirements and make sure the requirements are clear, consistent, complete, traceable, and testable. Thus, they prevent possible software defects and facilitate upcoming test design activities.
QA professionals use the knowledge gained at the requirements analysis stage as a basis for test planning. According to IEEE 29119-3, a test plan should contain a test strategy and cover the testing scope, the project budget and deadlines, the types and levels of testing the application requires, bug tracking and reporting procedures, resources and their responsibilities, and other factors.
At this stage, QA specialists design test cases or checklists covering software requirements. Test cases outline the conditions, test data (also prepared at the test design stage), and test steps needed to validate particular functionality, and state the expected result. To gain familiarity with an application and come up with an optimal approach to test design, test engineers may start test design activities with a certain amount of exploratory testing.
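To make the structure of such a test case concrete, here is a minimal sketch of how one might be captured as a record at the test design stage. The field names, IDs, and sample data below are hypothetical illustrations, not a standard or a ScienceSoft artifact.

```python
from dataclasses import dataclass, field

# Illustrative test case record: note the traceability link back to a
# requirement, the prepared test data, the steps, and the expected result.
@dataclass
class TestCase:
    case_id: str
    requirement_id: str                      # traceability to the requirement
    preconditions: list = field(default_factory=list)
    test_data: dict = field(default_factory=dict)
    steps: list = field(default_factory=list)
    expected_result: str = ""

login_case = TestCase(
    case_id="TC-001",
    requirement_id="REQ-AUTH-01",
    preconditions=["A user account exists"],
    test_data={"username": "demo", "password": "secret"},
    steps=["Open the login page", "Enter credentials", "Click 'Sign in'"],
    expected_result="The dashboard page is displayed",
)
print(login_case.case_id, "->", login_case.requirement_id)
```

Keeping the requirement ID on every test case is what makes the suite traceable: coverage gaps become visible by simply comparing requirement IDs against the designed cases.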
When test automation is in scope, test automation engineers create test automation scenarios at the test design stage as well.
Also, the test environment is prepared for test case execution. The test environment should closely mimic the production environment in terms of hardware, software and network configurations, operating system settings, available databases, and other characteristics.
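A simple way to keep the test environment aligned with production is to compare their configurations programmatically and flag drift. The sketch below assumes both configurations are available as key-value pairs; the keys and versions are hypothetical examples.

```python
# Illustrative configuration-parity check between production and a test
# environment; real projects would pull these values from infrastructure
# tooling rather than hard-code them.
production = {"os": "Ubuntu 22.04", "db": "PostgreSQL 15", "app_version": "2.4.1"}
test_env   = {"os": "Ubuntu 22.04", "db": "PostgreSQL 14", "app_version": "2.4.1"}

def config_drift(reference, candidate):
    """Return the keys where the candidate environment differs from the reference."""
    return {key: (reference[key], candidate.get(key))
            for key in reference if candidate.get(key) != reference[key]}

drift = config_drift(production, test_env)
print(drift)  # {'db': ('PostgreSQL 15', 'PostgreSQL 14')}
```

Running such a check before test execution catches mismatches (here, a database version gap) that would otherwise make test results unrepresentative of production behavior.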
Test execution and defect reporting
Test execution starts at the unit level, where the development team performs unit testing. Then, the test team takes over at the API and UI levels. Manual test engineers execute the designed test cases and log the defects they find in a defect tracking system, while test automation engineers use a selected framework (e.g., Selenium, Appium, Protractor) to execute automated test scripts and generate test reports.
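The shape of automated test execution can be shown without a browser. The sketch below uses Python’s standard unittest module; in a real project the test bodies would drive the application under test (e.g., via Selenium), and the runner’s summary would feed the test report. The function under test and all names here are invented for illustration.

```python
import unittest

def apply_discount(price, percent):
    """Toy function standing in for application behavior under test."""
    return round(price * (100 - percent) / 100, 2)

class DiscountTests(unittest.TestCase):
    # Each method corresponds to a designed test case: fixed inputs,
    # a single action, and an assertion on the expected result.
    def test_ten_percent_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(DiscountTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(f"ran={result.testsRun} failures={len(result.failures)}")
```

The `result` object carries the raw material of a test report: how many cases ran, which failed, and with what tracebacks, which is what defect reports are filed from.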
Retesting and regression testing
Once the found defects are fixed, test engineers retest the functionality in question and perform regression testing to make sure that the bug fixes neither broke related functionality nor made it deviate from what is specified in the requirements.
Once the development team issues a release notification (containing the list of implemented features, fixed defects, known issues, and limitations), the test team identifies the software functionality affected by the introduced changes and determines the test suites required to cover the scope of the deployed build. The test team performs smoke testing to make sure the build is stable and, once it passes, executes the identified test suites, issuing a test result report when finished.
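The suite-selection step above can be sketched as a lookup from changed components to the suites that cover them. The component names, suite names, and mapping below are hypothetical; in practice this knowledge comes from the release notification and the team’s coverage matrix.

```python
# Hypothetical mapping from application components to the test suites
# that cover them, maintained alongside the test design artifacts.
SUITES_BY_COMPONENT = {
    "auth":     ["smoke", "login_suite", "security_suite"],
    "checkout": ["smoke", "payment_suite", "regression_cart"],
    "search":   ["smoke", "search_suite"],
}

def suites_for_build(changed_components):
    """Collect the test suites covering the components touched by a release."""
    selected = set()
    for component in changed_components:
        selected.update(SUITES_BY_COMPONENT.get(component, []))
    # Smoke testing always runs first to confirm the build is stable.
    return ["smoke"] + sorted(selected - {"smoke"})

print(suites_for_build(["auth", "search"]))
# ['smoke', 'login_suite', 'search_suite', 'security_suite']
```

Placing the smoke suite unconditionally at the head of the list mirrors the process described above: if the build is not stable, there is no point executing the rest of the scope.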
To set up an efficient QA process for product and enterprise software, a QA consultant from an internal testing center of excellence (TCoE) or a QA outsourcing provider should take the following steps:
- Audit the existing QA process
QA professionals should thoroughly assess project, quality, change, knowledge, risk, and incident management processes; investigate project documentation; and interview key project stakeholders to identify QA-related problems at the project and company levels. Such QA maturity models as TMMi and TPI may be used to structure the acquired knowledge, determine the current level of QA process maturity, identify areas for QA process improvement, and reveal whether the QA process needs redesign.
- Design a new QA process
If the audit revealed the need for QA process redesign, QA professionals work out solutions to the identified problems, model a new QA process, and design a roadmap for its implementation. In the roadmap, they describe the new process specifics, define quality metrics, and consider risks that may arise in response to changes. QA experts also outline the roles and responsibilities of the QA team members and plan required training for QA teams to be able to support the new processes.
- Implement a new QA process
The new QA process is rolled out according to the implementation roadmap.
- Monitor the effectiveness of the introduced changes
To make sure that the introduced changes facilitate the delivery of high-quality software and that the revamped QA process stays up to date with the organization’s business processes and needs in the long term, QA teams need to continuously assess the QA process and plan and implement the required improvements.
To see how these activities are performed in real life and how they actually help companies develop high-quality software, you can explore the relevant QA consulting case from our portfolio.
Projects following Agile methodologies have specifics in terms of QA process organization. Unexpected changes of requirements put pressure on QA teams, and frequent releases reduce the time available for software quality assurance and testing. To make sure that the QA process for product and enterprise software stays efficient in Agile, it makes sense to adopt the following practices:
- Adapt test design to Agile
In Agile projects, the share of exploratory testing versus test case-based testing increases. Also, test cases are frequently replaced with checklists, i.e., high-level lists of items to be tested and the criteria against which an application should be verified.
- Optimize regression testing
Considering that in Agile projects a regression test suite is executed far more often, full regression testing becomes time-consuming. To optimize regression testing, test teams can combine partial and full regression testing, prioritize regression testing activities based on defect risk levels, and automate the regression test suite.
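Risk-based prioritization can be as simple as ordering the suite so that higher-risk areas run first and critical defects surface early in the run. The risk levels, test names, and weights below are illustrative assumptions, not a prescribed scheme.

```python
# Illustrative risk-based ordering of a regression suite: each test case
# is tagged with the risk level of the area it covers.
RISK_LEVELS = {"critical": 3, "high": 2, "medium": 1, "low": 0}

regression_suite = [
    {"name": "test_profile_avatar", "risk": "low"},
    {"name": "test_payment_refund", "risk": "critical"},
    {"name": "test_search_filters", "risk": "medium"},
    {"name": "test_login_lockout",  "risk": "high"},
]

prioritized = sorted(regression_suite,
                     key=lambda tc: RISK_LEVELS[tc["risk"]],
                     reverse=True)
print([tc["name"] for tc in prioritized])
# ['test_payment_refund', 'test_login_lockout', 'test_search_filters', 'test_profile_avatar']
```

When the iteration leaves no time for a full run, the same ordering also gives a natural cut-off point for partial regression testing: execute from the top until the time budget is spent.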
- Increase the share of test automation
Automating frequently executed test cases that do not change much with each iteration helps reduce testing time further and increase the quality of software testing.
Setting up an effective QA process requires substantial QA expertise and can be time- and effort-intensive. To quickly start benefiting from a full-fledged QA process, you can turn to ScienceSoft’s QA consultants, who have 8–15 years of experience and can assist you with QA process setup and audit.
A QA vendor with 19 years in the domain, we can help you establish a robust QA strategy or improve your current QA processes.