The replatforming of our hybrid mobile app to separate iOS and Android native apps was already well under way when I arrived at FreeAgent as a test engineer for the mobile team. Since then we have carved out processes that the whole team can contribute to, giving us confidence that for each release our apps are in good shape. Here are a few things we are doing to ensure this.
Automation with Appium
All end-to-end tests were being executed manually when I joined the team. As the replatforming work continued to pick up pace, manually regression testing different scenarios and their permutations became more and more time-consuming. Defects were creeping into core user flows as the complexity and size of the apps’ codebases grew. We needed something to help reassure us we weren’t breaking core functionality and to cut down the time spent on repetitive testing tasks. That’s where Appium comes in handy.
Our suite of automated end-to-end tests uses Appium and is written in Ruby.
Why Appium? The tests can be executed on both our native iOS and Android apps with the same test code, which means we only need to maintain a single codebase. We use the XCUITest and UIAutomator2 drivers for iOS and Android respectively.
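For illustration, the capability sets for the two drivers might look roughly like this. The `platformName` and `automationName` values reflect the drivers mentioned above; the device names and app paths are placeholders, not our real configuration:

```ruby
# Hypothetical Appium W3C capability sets for the two platforms.
# Non-standard capabilities carry the 'appium:' vendor prefix.
IOS_CAPS = {
  'platformName'          => 'iOS',
  'appium:automationName' => 'XCUITest',
  'appium:deviceName'     => 'iPhone 15',       # placeholder
  'appium:app'            => '/path/to/app.app' # placeholder
}.freeze

ANDROID_CAPS = {
  'platformName'          => 'Android',
  'appium:automationName' => 'UiAutomator2',
  'appium:deviceName'     => 'Pixel 8',         # placeholder
  'appium:app'            => '/path/to/app.apk' # placeholder
}.freeze

# With the appium_lib gem, a driver can then be started per platform, e.g.:
#   Appium::Driver.new({ caps: IOS_CAPS }, true).start_driver
```

Because only the capability set differs, the same test code can run against either platform.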
Why Ruby? FreeAgent is written with love and Rails! Choosing a familiar and common programming language made sense for us as our Rails engineers can bring their Rails-fu and experience to this test codebase.
The tests are written following the Page Object Model (POM) design pattern as this helps keep the clutter of selectors, wait methods, element properties and navigation helpers away from the actual tests. The screens in our app have consistent designs throughout, so POM-ing makes elements and functions reusable when we write tests across different areas of the app.
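A minimal sketch of the pattern might look like this. The class, accessibility IDs and method names here are illustrative, not our real code — the point is that the screen object owns the selectors and helpers, so when the UI changes we fix one class rather than every test:

```ruby
# Minimal Page Object Model sketch (all names hypothetical).
# Selectors and interaction helpers live on the screen object;
# tests only call its methods and never touch raw selectors.
class LoginScreen
  EMAIL_FIELD    = 'login_email_field'.freeze    # hypothetical accessibility IDs
  PASSWORD_FIELD = 'login_password_field'.freeze
  SUBMIT_BUTTON  = 'login_submit_button'.freeze

  def initialize(driver)
    @driver = driver # an Appium driver in the real suite
  end

  def log_in(email, password)
    @driver.find_element(:accessibility_id, EMAIL_FIELD).send_keys(email)
    @driver.find_element(:accessibility_id, PASSWORD_FIELD).send_keys(password)
    @driver.find_element(:accessibility_id, SUBMIT_BUTTON).click
  end
end
```

A test then reads as intent rather than plumbing: `LoginScreen.new(driver).log_in(email, password)`.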
As you would expect, these tests don’t cover everything as they take a fair bit of time to maintain and run – such is the life of end-to-end tests. Their main responsibility is to ensure the happiest user paths are working as expected. Given this test suite can take a while to execute, it is run on a nightly schedule.
Now that the replatforming has been completed, these tests are an integral part of our release process. Our sprints run for two weeks and we usually release at the end of each sprint. With new features and fixes being merged as the sprint progresses, these tests help ensure our main user flows haven't broken.
FreeAgent uses a shift-left testing strategy to give our test engineers the time and opportunity to devise and manage processes that prevent bugs before code changes reach their testing phase. This may sound like "let's get developers to do all the testing". Well, sort of. Developers take on manual testing with support and guidance from the test engineer. Test scripts are written by developers as they develop the feature or change, and the test engineer or other developers may suggest improvements to the script. Once the change is ready for code review, the code reviewer, another developer (never the developer who made the change) or the test engineer executes the test script. A second set of testing eyes is always welcome if the change feels high risk.
Unit tests are a bit of an uphill journey for us, but one well worth taking. Does code coverage mean everything, though? No! But it certainly helps gauge which parts of the app are at higher risk of defects being introduced. It's easy to start treating code coverage as a numbers game; we should avoid writing test code for the sole purpose of upping the coverage percentage. As we continue to increase our coverage, our general approach is to fill the gaps where the absence of unit tests carries the greatest risk, e.g. core functionality and complex code. Remaining gaps in coverage are tracked as technical debt so that we don't forget to tackle them later.
Trying to measure something that means different things to different parts of the business can be tricky. A question I often ask myself is "are these test processes actually working?". App ratings alone can't tell us how well we're really doing, so alongside them we combine a few metrics like customer issues, internally raised issues, crash-free users, open crashes and code coverage to generate an app quality score, which we compare month on month. It's not an exact science, but it gives the team a rough benchmark to reach for and maintain.
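As an illustration only, a score like this can be a weighted combination of normalised metrics. The metric names below mirror those mentioned above, but the weights and the normalisation are hypothetical, not our actual formula:

```ruby
# Toy app quality score (illustrative weights, not FreeAgent's real formula).
WEIGHTS = {
  crash_free_users: 0.4, # percentage, already on a 0-100 scale
  code_coverage:    0.3, # percentage, already on a 0-100 scale
  customer_issues:  0.2, # count; fewer is better, capped then inverted
  open_crashes:     0.1  # count; fewer is better, capped then inverted
}.freeze

def quality_score(metrics)
  WEIGHTS.sum do |name, weight|
    value = metrics.fetch(name)
    # Map "lower is better" counts onto a 0-100 scale (cap at 50 issues)
    value = 100 - [value, 50].min * 2 if %i[customer_issues open_crashes].include?(name)
    weight * value
  end.round(1)
end
```

Computing this from the same inputs each month gives the rough month-on-month indicator described above.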
As we continue to boost the native mobile apps with more great features, the way we test will evolve, and I’m very much looking forward to it.