Manual Testing Mobile Apps
When starting off with a small app, manually testing each release is a viable path. As the app grows, this effort gets more tedious. At scale - an app that releases weekly, with a large number of engineers working on it - the approach breaks down. The goal at scale is to always have a shippable master with zero - or as close to zero as possible - dependency on manual testing.

**There are a few challenges even when manual testing is done by choice:** either because it's not a large overhead (yet), or because there aren't enough automated tests in place to allow dropping this testing.
- Who does the manual testing? When I started at Uber, the engineering team owned all the manual testing for their features. We kept a simple Google Form in place, with basic instructions, recording “pass” or “fail”. We repeated the checklist every week, as part of the build train. Of course, as we grew, this approach did not scale well and we started to rely on a testing team.
- How do you keep testing instructions up to date? Regardless of who does the testing - an engineer or a dedicated person - you'll want clear and simple instructions. You'll also need test accounts, their login information, and the data to input at each test step. Where will you store all this? Who will keep it up to date? At Uber, the platform team built a manual test administration system where engineering teams recorded test instructions, and testers marked each test as passed or failed every week.
- Keep manual testing in-house, outsource it, or mix the two? At Uber, I calculated how much it was costing us to have engineers execute manual tests every week. The number was high enough to justify staffing this effort with dedicated people. During my time there, we used both third parties like Applause and a dedicated in-house quality team. In-house teams are less effort to start with and can access internal systems. On the other hand, third parties can be more reliable and can scale up or down based on how much testing you need.
- How do you integrate manual testing into your build train and release process? How do you handle issues that are found? Which types of issues should block the release, and which should not? You'll need to weave the manual testing step into your release workflow.
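A test administration system like the one described above can start as little more than a structured record per test case, plus a rule for which failures block the release. Here is a minimal sketch in Python - the names, fields, and severity levels are all hypothetical illustrations, not Uber's actual system:

```python
from dataclasses import dataclass
from enum import Enum
from typing import List, Optional

class Severity(Enum):
    BLOCKER = "blocker"  # e.g. crash or payment failure: holds the release
    MAJOR = "major"      # visible regression: candidate for a fix-forward
    MINOR = "minor"      # cosmetic issue: ship anyway, track the bug

@dataclass
class ManualTest:
    name: str
    steps: List[str]               # plain-language instructions for the tester
    test_account: str              # which shared test account and data to use
    severity_if_failed: Severity   # how bad a failure of this test is
    passed: Optional[bool] = None  # None means not yet executed this week

def release_is_blocked(checklist: List[ManualTest]) -> bool:
    """Hold the release if any test is unexecuted, or a blocker-level test failed."""
    for test in checklist:
        if test.passed is None:
            return True  # incomplete checklist: don't ship on partial data
        if not test.passed and test.severity_if_failed is Severity.BLOCKER:
            return True
    return False
```

A failed MINOR test would not hold the release under this rule, while an unexecuted test or a failed BLOCKER would - which is exactly the kind of policy decision the release process has to make explicit.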
A frequent headache with manual tests is that regressions are found at the eleventh hour - right before release. This is still better than the alternative - releasing with new regressions - but it puts engineers under pressure to quickly fix a bug to avoid delaying the release. The root cause of the bug can often be hard to locate, as a week or more might have passed since the offending code was merged.
If this "last minute bug report" issue is hitting you or your team frequently, consider reducing the time it takes to get feedback. For example, could manual testing start earlier? Should engineers execute some basic tests themselves? Can automation help more?
When you have a manual testing process in place, make sure to leave enough time not just for testing, but also for fixing any high-impact bugs. Do this either by starting manual testing early and leaving buffer time for fixes, or by being flexible about pushing back the app release schedule when you find regressions.
Manual tests remain essential for mobile apps in a few cases - even companies that invest in best-in-class automation, sparing no time or effort, agree on this.
- Interfacing with the physical world. When relying on camera input to recognize patterns - QR codes, document scanning, or AR - you can automate much of the testing, but you'll still need manual verification to stay safe. When you build NFC applications, you face the same constraint: verifying behavior against physical tags and readers requires hands-on testing.
- End-to-end testing of payments systems. I spent four years at Uber working on payments. Automating payments tests has a catch-22: payments fraud systems are sophisticated enough to detect and ban suspicious patterns - such as activity that looks automated - which means they quickly ban automated tests. You could test against test harnesses for payments providers, but then you're not testing against production. Whether to invest in working around payments fraud systems, or to invest more in monitoring, alerting, and staged rollouts of payments changes, depends on your situation. At Uber, we moved towards the former, and we'll cover monitoring, alerting, and staged rollouts in more detail in Part 3.
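Where end-to-end payment tests can't run against production, monitoring combined with staged rollouts acts as the safety net. The core idea can be sketched as comparing the payment success rate of the cohort on the new release against the rest of the fleet, and halting the rollout on a significant drop. This is an illustrative sketch - the function name, thresholds, and sample sizes are assumptions, not any specific production system:

```python
def should_halt_rollout(
    new_version_successes: int,
    new_version_attempts: int,
    baseline_successes: int,
    baseline_attempts: int,
    max_drop: float = 0.01,   # tolerate at most a 1 percentage point drop
    min_attempts: int = 500,  # wait for enough data before deciding
) -> bool:
    """Halt the staged rollout if the new version's payment success rate
    falls more than `max_drop` below the baseline, once enough payment
    attempts have been observed on the new version."""
    if new_version_attempts < min_attempts:
        return False  # not enough data yet to make a call
    new_rate = new_version_successes / new_version_attempts
    baseline_rate = baseline_successes / baseline_attempts
    return (baseline_rate - new_rate) > max_drop
```

In practice a real system would account for statistical noise (e.g. a confidence interval rather than a raw threshold) and segment by payment method and region, but the shape of the check stays the same.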
- Exploratory testing. Great QA teams and vendors excel at this. They'll attempt to "break" the app through creative steps that engineers won't think of, but that end users will stumble into regardless. Unlike users - who often won't report anything and will quietly churn - these testers provide detailed steps for reproducing every issue they find. For apps with millions of users, exploratory testing is definitely an area to invest in: the only questions are how frequently to do it and how much budget or time to spend on it.
There's far more to say about testing; to go deeper on both automated and manual testing, I recommend the book Hands-On Mobile App Testing by Daniel Knott. It covers test strategies, iOS, Android, and hybrid tooling, rapid mobile release cycles, testing throughout the app lifecycle, and more.
You are reading an early draft from the book Building Mobile Apps at Scale. For the final, expanded and improved content, grab the book now - it's free to download as a PDF until 31 May.