QA Consulting and Testing of Web Applications for Insurance Claim Estimation
The Customer is a US company offering software products for insurance claim estimation. The Customer cooperates with major US insurance companies and global leaders in car manufacturing.
The Customer began developing a software ecosystem comprising three applications with a common database. Each application was intended for a specific type of user:
- A progressive web application for car owners.
- A web application for insurance companies.
- A web application for estimating companies.
The Customer’s QA processes were not mature enough to keep pace with the Scrum methodology according to which the applications were developed. The biggest problem was insufficiently detailed requirements for the applications’ functionality. The Customer needed to adapt their QA processes to Scrum-based development and improve the project requirements.
Moreover, the Customer needed to validate that the complex functionality of the three applications worked as intended and that the applications’ database handled vast volumes of data efficiently and correctly.
ScienceSoft’s QA consultants audited the Customer’s QA processes and, according to the Test Maturity Model Integration (TMMi), rated them at Level 1. Based on the audit results, they created an action plan to improve the maturity of the Customer’s QA processes and adapt them to Scrum-driven development. They also coordinated cooperation between the BA, development, and testing teams on the project requirements and developed a detailed QA strategy and plan covering the scope of testing, the planned testing types, the test team members and their responsibilities, the test environment requirements, and the required tools.
As the Scrum development methodology required, ScienceSoft’s test team started testing the applications in parallel with development. With each iteration, the test team analyzed the requirements for the applications’ new functionality and created test documentation comprising checklists for every application feature.
To check whether the applications’ numerous user flows worked correctly, ScienceSoft’s test team performed thorough functional testing. The detected defects were reported to the Customer in Jira. Once the reported defects were fixed, the test engineers retested the functionality in question and performed regression testing to validate that the fixes neither broke existing functionality nor made it deviate from the requirements.
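As a sketch of how such a fix-and-retest cycle can be supported by automated checks, the snippet below reruns a small regression checklist after a defect fix. The feature names, the toy `estimate_claim` function, and the expected values are illustrative assumptions, not the Customer’s actual test cases or logic.

```python
def estimate_claim(damage_cost, deductible):
    # Toy stand-in for the claim-estimation logic under test (assumption).
    return max(damage_cost - deductible, 0)

# Hypothetical regression checklist: (feature, inputs, expected result
# per the requirements). All names and values are illustrative.
REGRESSION_CHECKLIST = [
    ("estimate above deductible", (5000, 500), 4500),
    ("estimate equal to deductible", (500, 500), 0),
    ("estimate below deductible", (300, 500), 0),
]

def run_regression(checklist):
    """Rerun every check and collect failures instead of stopping early."""
    failures = []
    for feature, args, expected in checklist:
        actual = estimate_claim(*args)
        if actual != expected:
            failures.append((feature, expected, actual))
    return failures

failures = run_regression(REGRESSION_CHECKLIST)
print("regression failures:", failures)  # an empty list means no regressions
```

Collecting all failures instead of stopping at the first one mirrors checklist-style regression runs, where a full picture of what broke is more useful than a single failing assertion.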
To check whether the applications’ look and feel complied with the Human Interface Guidelines and whether the applications were convenient to use, the test team cooperated with the BA team and carried out UI/UX testing.
To validate that data was exchanged correctly among the applications, the test team performed API testing.
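A minimal sketch of such an API contract check is shown below. It spins up a local HTTP stub standing in for one application’s API and verifies the status code, content type, and required response fields; the `/claims/1` endpoint and its JSON fields are illustrative assumptions, not the Customer’s real API.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class StubHandler(BaseHTTPRequestHandler):
    """Local stub standing in for one application's claims API (assumption)."""

    def do_GET(self):
        body = json.dumps(
            {"claim_id": 1, "status": "estimated", "total": 4500}
        ).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to an ephemeral port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), StubHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/claims/1"

# The contract check: status code, content type, and required fields.
with urllib.request.urlopen(url) as resp:
    assert resp.status == 200
    assert resp.headers["Content-Type"] == "application/json"
    payload = json.load(resp)

assert {"claim_id", "status", "total"} <= payload.keys()
print("API contract check passed:", payload)
server.shutdown()
```

In practice a tool such as Postman (listed in the project’s toolset) covers the same ground; the point of the sketch is the shape of the check, not the tooling.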
Database testing was performed to check whether the transactions and changes made by users in all three applications were recorded in the database correctly, as well as to validate that the data flow among the applications was secure.
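The correctness side of such a check can be sketched as follows, using an in-memory SQLite database as a stand-in for the shared claims database; the schema, application names, and values are illustrative assumptions.

```python
import sqlite3

# In-memory SQLite database standing in for the shared claims database
# (assumption); the schema and values are illustrative.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE claims (id INTEGER PRIMARY KEY, source_app TEXT, total REAL)"
)

# Simulate each of the three applications writing a transaction
# to the common database.
writes = [("owner_app", 4500.0), ("insurer_app", 4500.0), ("estimator_app", 4600.0)]
with conn:  # commits on success, rolls back on error
    conn.executemany(
        "INSERT INTO claims (source_app, total) VALUES (?, ?)", writes
    )

# Verify every write landed and the values round-trip unchanged.
rows = conn.execute(
    "SELECT source_app, total FROM claims ORDER BY id"
).fetchall()
assert rows == writes, f"database state diverged: {rows}"
print("all", len(rows), "transactions recorded correctly")
```

The read-back comparison is the essence of this kind of database test: what each application wrote must come back byte-for-byte identical when any other application reads it.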
At the end of each iteration, upon the release of a new software build, the test team performed testing in the production environment to validate that the build was stable.
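A post-release stability check of this kind can be sketched as a small smoke script that probes a few critical endpoints and reports any that are unreachable; the URLs below are placeholder assumptions, not the Customer’s actual endpoints.

```python
import urllib.request

# Hypothetical health endpoints for the three applications (assumptions).
ENDPOINTS = [
    "https://example.com/owners/health",
    "https://example.com/insurers/health",
    "https://example.com/estimators/health",
]

def smoke_check(urls, timeout=5):
    """Return a list of (url, problem) pairs; an empty list means
    every probed endpoint responded with HTTP 200."""
    problems = []
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status != 200:
                    problems.append((url, f"HTTP {resp.status}"))
        except OSError as exc:  # covers HTTPError, URLError, timeouts
            problems.append((url, str(exc)))
    return problems
```

Run as `smoke_check(ENDPOINTS)` right after a production deployment: a non-empty result is a signal to roll back or investigate before the build is announced as stable.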
As a result of the cooperation with ScienceSoft, the Customer’s QA processes reached Level 3 according to TMMi. The Customer benefited from increased efficiency and greater flexibility of the testing process and managed to release fully functional application builds every 2 to 4 weeks.
Technologies and Tools
Testing tools: Fiddler, Postman, iTools, Xcode Instruments, Atlassian Jira, Confluence.