According to the Agile methodology that we follow, every iteration in the process of app development includes requirement specification, design, coding, testing, and review.
Given the cyclic nature of iterative development, a quality assurance engineer plays a critical role in the successful delivery of builds. First, QA specialists clarify the requirements regarding supported devices and operating systems. Next, they test builds as they become ready. Finally, they suggest usability improvements throughout the entire product development lifecycle.
Let’s look at the role of Quality Assurance through each stage of the app development process.
The QA department’s engagement in a project begins during project planning – a pre-production phase of app development that can be described as “getting ready for the project.” At this stage, our QA engineers analyze a project’s logic, examine wireframes and mockups, and give their suggestions on UI/UX and other usability improvements. For example, QA might suggest moving a button to a more appropriate location or swapping the on-screen keyboard for a type better suited to the input field.
After these initial steps have been completed, the QA specialist assigned to the project writes user stories.
User stories in Agile methodology are short scenarios describing a user’s actions. In other words, they describe how an app will be used in the real world and what goals it will accomplish. User stories are referenced by QA specialists during software testing. They are usually written as short sentences that follow this template:
As a <role>, I want <feature> so that <reason>
As a <user>, I want <goal>
[Examples of user stories]
Each iteration of development starts with a planning meeting, but the very first planning meeting is the most essential. It is during this initial planning meeting that tasks for the first sprint are chosen, a team vision is established, and the goals of the iteration are enumerated.
Once developers begin to implement the first features, a QA specialist writes test cases and develops a checklist.
Test cases are sets of conditions that a product must meet in order to deliver an expected result. Unlike user stories, test cases are aimed at checking the smallest pieces of functionality, ensuring that all constituent parts work properly to deliver an expected outcome. Test cases have a particular structure consisting of a title, preconditions, steps to be taken, and the expected result.
[A test case describing actions that a tester should perform to check if a feature “change user profile image” works]
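The test-case structure above can be sketched in code. This is a minimal illustrative model, not a real test-management tool; the `TestCase` class and the sample steps are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Minimal model of a test case: title, preconditions, steps, expected result."""
    title: str
    preconditions: list
    steps: list
    expected_result: str

# Hypothetical test case for a "change user profile image" feature
change_avatar = TestCase(
    title="Change user profile image",
    preconditions=["User is logged in", "User is on the profile screen"],
    steps=[
        "Tap the current profile image",
        "Choose 'Select from gallery'",
        "Pick a new image and confirm",
    ],
    expected_result="The profile screen shows the newly selected image",
)

print(change_avatar.title)
```

Keeping every test case in this fixed shape makes it easy for any QA specialist on the team to execute or review a case they did not write.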
A checklist serves the same purpose as a test case, but is less detailed and takes less time to edit, which makes it a very valuable tool for large projects.
[An example of a checklist]
The testing process starts when the first features are implemented and the first functioning build is delivered. All tests are performed in close cooperation with the rest of the team and the client. We perform testing on all devices and OS versions that a product is expected to work on at launch.
We practice Continuous Integration (CI) in order to achieve constant build delivery and to avoid hiccups during the development process.
Continuous Integration Process
CI makes it possible to deliver builds automatically, so all team members as well as the client stay fully aware of development progress. Any change in a project’s code triggers a new build that is assembled and delivered to the QA department at once; with the latest build always at our fingertips, we can continuously monitor the quality of our products.
There are three main players in the CI process: 1) Git, 2) code reviews, and 3) Crashlytics.
We use the Git revision control system as a project’s code repository, and as the destination for code reviews.
A code review is performed to validate the quality of newly-added parts of code before merging with the rest of a project’s code. Once a review is completed successfully, the new code is automatically uploaded to the build server where a build is assembled. Then, the build server sends a build to the corresponding project in Crashlytics Beta and notifies the development team, including QA specialists, that a new build is available for download.
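The flow just described (review, build, upload, notify) can be sketched as a simple pipeline. This is a hypothetical simulation for illustration only; `ci_pipeline` and its event strings are invented and do not correspond to a real build-server API.

```python
# Illustrative sketch of the CI flow: code review -> build server ->
# Crashlytics Beta upload -> team notification. All names are hypothetical.

def ci_pipeline(commit, review_passed):
    """Return the list of events a commit triggers in this toy CI model."""
    events = []
    if not review_passed:
        # A failed review stops the pipeline: no build is assembled.
        events.append(f"review rejected for {commit}")
        return events
    events.append(f"review approved for {commit}")
    events.append(f"build assembled from {commit}")
    events.append("build uploaded to Crashlytics Beta")
    events.append("team notified: new build available")
    return events

for line in ci_pipeline("abc123", review_passed=True):
    print(line)
```

The key design point is that the review gate comes first: code that fails review never reaches the build server, so QA only ever receives builds assembled from reviewed code.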
At the end of an iteration, a client receives the build with all necessary fixes.
The Crashlytics crash reporting system helps track down bugs that slipped through the testing process by collecting and organizing statistics about crashes that have occurred.
It also gathers information about the app version, device, country, language, battery charge, screen orientation, and other relevant data.
As with all aspects of Agile methodology, testing is a repeated cycle. Iteration by iteration, we carry out the following types of testing:
New feature testing
New feature testing is a deep and intense check of functionality to make sure that all new features implemented in a current build are working correctly.
Unlike new feature testing, sanity testing re-checks features that were implemented in earlier iterations. It verifies that their functionality still meets the established requirements.
The term “smoke testing” comes from engineering slang and means powering on a device to make sure that it doesn’t start “smoking.”
In our case, smoke testing is a quick verification of all basic functionality that is done before we run more complex regression testing. For example, if a tester can’t log in, there is no point in checking how other features in a build work together until this major bug is fixed.
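A smoke check like the login example above can be expressed as a short automated test. This is a sketch under assumptions: `FakeAppClient` is a hypothetical stand-in for a real app client, not an API from the project described here.

```python
# Minimal smoke-test sketch: verify basic functionality (here, login and
# opening the home screen) before running deeper regression suites.
# FakeAppClient is a hypothetical stand-in that returns canned responses.

class FakeAppClient:
    def login(self, user, password):
        return {"status": "ok", "token": "t0ken"}

    def open_home_screen(self, token):
        return {"status": "ok"}

def run_smoke_tests():
    app = FakeAppClient()
    session = app.login("qa_user", "secret")
    # If login fails, there is no point in running the rest of the suite.
    assert session["status"] == "ok", "login broken: abort further testing"
    assert app.open_home_screen(session["token"])["status"] == "ok"
    return True

run_smoke_tests()
print("smoke tests passed")
```

Because the whole suite aborts on the first failed assertion, a broken login is reported immediately instead of producing dozens of misleading downstream failures.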
When a large chunk of functionality has been implemented and thoroughly tested, QA engineers perform regression testing to make certain that all the components of an app work together without causing new bugs to appear.
As a rare exception, there can be iterations during which only a few major features are added. For example, implementation of a chat feature in an app may take a great amount of time and require close attention to its subfeatures. In this case, we may put off regression testing until the next iteration.
It’s important to add that the closer the process of development is to completion, the more time-consuming regression testing becomes.
Testers also perform so-called non-functional tests, which don’t touch upon an app’s specific features.
All bugs found during the testing process and from Crashlytics reports are submitted to the corresponding project in Jira. Thus, developers have a full list of issues to fix before the next iteration starts.
The next iteration launches the whole process anew, starting with a planning meeting where we adjust the plan to incorporate newly approved ideas based on client and QA department feedback.
It’s almost impossible to wipe out every single bug during the app development process. Therefore, after an app is released, it needs to be supported and maintained. What’s more, it will likely be necessary to improve the app and to add new features as new OS versions are rolled out.
We provide support services and continue tracking all unexpected crashes and bugs in order to keep apps up-to-date and running well. Because app support is a continual necessity, we maintain permanent contact with our clients.
Apart from implementing small fixes, we also provide complex app updates. Before releasing a new version to the market, we carry out update testing to detect possible new bugs that could appear after the update, and also to make sure that all user data from previous versions (e.g., friend lists, data files) has migrated to the new version correctly.
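A data-migration check of the kind used in update testing can be sketched as follows. The migration function and field names here are toy examples, not the actual migration code of any app.

```python
# Hypothetical update-testing sketch: verify that user data from the
# previous app version (e.g. a friend list) survives migration intact.

def migrate_user_data(old_data):
    """Toy migration: carry every record forward, renaming legacy keys."""
    return {
        "friends": list(old_data.get("friend_list", [])),
        "files": list(old_data.get("data_files", [])),
    }

old = {"friend_list": ["alice", "bob"], "data_files": ["notes.txt"]}
new = migrate_user_data(old)

# Update test: nothing may be lost or reordered during migration.
assert new["friends"] == ["alice", "bob"]
assert new["files"] == ["notes.txt"]
print("migration check passed")
```

Running such checks against data created in the previous release catches migration bugs before users ever see an empty friend list after updating.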
By following our established quality standards we are able to keep a close eye on the quality of all of our apps.