Starting a data science internship at FreeAgent was going to be a completely new experience for me and I was super excited. It’s a lovely warm day in Edinburgh, Monday 1st of June, I’m standing in my kitchen looking out of the window and wondering what the next few months will hold for me. It’s 9.28am and my first meeting is supposed to start in 2 minutes. Am I going to be late on my first day? No – it’s 2020 and lockdown is still in full swing, I’ll be starting the internship working from home!
Getting Started
I’m taking 3 months out of my physics PhD at the University of Edinburgh to get some experience in an industrial setting. FreeAgent really stood out to me when I was looking for internships this summer – it looked like a refreshing and fun place to work and the data science team were working on some really interesting and ambitious projects.
I received all of the necessary IT equipment along with some other goodies on the Friday before I started, specially delivered (adhering to social distancing guidelines) by a member of the FreeAgent IT team. I hadn’t spoken to anyone outside of my flat in person for over 2 months, so I was thankful for the quick interaction. I’m fortunate enough to have a desk in my flat where I have set up a good area to be productive from home.
I joined the first Google Meet; there were two of us starting that Monday. The morning flew by and by 2pm I was set up with my IT equipment and had learned about FreeAgent’s values and history as well as the people that make it tick. Everyone that I encountered on the first day seemed really enthusiastic and lively – which is always reassuring!
I think the smooth remote onboarding process was aided by the fact that many of FreeAgent’s employees already worked remotely, and those who usually work on-site use laptops and cloud-based services. I found most of the information that I needed was already handily documented on the engineering wiki in Notion, which was really useful for familiarising myself with everything from setting up an account on Amazon Web Services to learning the specifics of the project I would be working on.
About the Project
This summer the work I’ll be doing is centred around explaining bank transactions, for example a transaction containing “Hilton Hotels – £279.78” might belong to the category “285 – Accommodation and Meals” in FreeAgent. Explaining transactions can take up a lot of customer time and we want to know if machine learning can help automate the task. As far as classification tasks go, this one is particularly challenging because of the sheer number of categories a transaction can fall into (over 67) and there are potentially quite serious consequences if a transaction is mislabelled – both from a reporting and tax perspective.
I’ve joined the team at a really exciting time, shortly before new automation features will be made available to FreeAgent’s full customer base. Over my first few weeks I’ve been getting to grips with the technologies used across the data science team by evaluating model performance from the perspective of a typical FreeAgent user, and helping the team develop an alternative model architecture. Later in the summer I’ll be helping to add more nominal codes which the proposed model aims to classify.
Meeting the Team
In the afternoon of the first day I met with the Data Science team. They have been the people I have spoken to the most since starting here, in particular my “buddy” within the team who is my primary contact should I have any concerns. They have all been incredibly welcoming and patient, and sought to answer all of the questions I’ve had whilst getting up to speed. I have found my buddy very approachable (or message-able?) which is especially important when starting remotely – those quick questions which you might just turn around and ask somebody if you were sitting next to them in the office actually require you to actively message somebody, but this hasn’t been a daunting prospect thanks to their attitude!
Members of the wider Data Science and Analytics Team have been thinking of creative ways to mix up the working day and I was really grateful to be invited into a well-established Pointless league.
Remotely Challenging
There are inevitably parts of starting a project which are made more difficult by working from home. When learning about and experimenting with new technologies, talking through concepts with a good old marker pen and whiteboard is invaluable and hard to replicate on a video call. It can sometimes feel like a problem could have been solved quicker if we could have a face to face interaction. On the whole I think that this has been overcome by having regular catch-ups with other members of the team, something which has really helped me gain momentum and keep it going in these first few weeks.
Another downside of starting remotely is that you lose the passive/organic interactions that naturally occur in an office – the watercooler chat phenomenon. Having said that, there has been an active effort to provide these kinds of interactions and I’m really impressed by the availability of things like “remote coffee break” and a plethora of “off-topic” slack channels. I think I should challenge myself to get more actively involved with one of these channels in the coming months!
There are also weird things that we take for granted like a commute in the morning. Whilst, in the past, it has sometimes felt like a chore that I wished I could skip for extra time in bed, I’ve come to realise that a cycle ride across Edinburgh not only transitions me physically into a place of work but mentally into a state of working – perhaps I should do a fake commute in the morning by cycling round the block?
Before starting this internship I was laughing with a friend that the word intern originates from the Latin for internal – which might not be true for me in the physical sense. However, I’ve been pleasantly surprised by how included I have been made to feel at FreeAgent and how smoothly starting this internship remotely has gone! I’m really looking forward to continuing this journey.
At FreeAgent, we run 45,000 tests on every code change to make sure that our Rails monolith continues to work as expected. These include unit, integration, and acceptance tests. Recently, we switched from capybara-webkit to Headless Chrome with Selenium for running JavaScript and acceptance tests.
Why did we switch?
Capybara-webkit has now been deprecated and uses an old version of the WebKit engine, so we had to look for alternatives. We preferred Headless Chrome over full Chrome because it provides a real browser context without the memory overhead of running the complete browser UI. JavaScript and feature tests now run in the same execution context as end users of our site, so we get more accurate feedback from our tests.
Chromium vs Chrome Browser
We use the open source Chromium browser on our Continuous Integration (CI) servers. Chromium is lightweight and has a smaller memory footprint than Chrome.
Steps we took to prepare CI
Switching OS
Previously, our Jenkins CI setup ran on CentOS. Chromium fails to install on CentOS 6, so we switched to Ubuntu. Now, we use the Jenkins EC2 plugin to spin up Ubuntu spot instances on AWS. This also took us one step closer to our goal of moving to AWS.
Webdrivers gem
ChromeDriver is an open source tool used by Selenium to control Chrome. It provides capabilities for navigating to web pages, simulating user input, executing JavaScript and more. ChromeDriver is a standalone server that implements the WebDriver wire protocol for Chromium.
To keep Chromium and ChromeDriver versions in sync, we introduced the webdrivers gem into our setup. It automatically pulls the appropriate driver version when the tests run on a machine for the first time. Webdrivers also provides a rake task in case you run tests in parallel and don’t want each process to spend time upgrading the driver.
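As a rough sketch of how that can be wired together (the rake task name and Rakefile hook come from the webdrivers gem’s documentation, so treat them as assumptions to verify against the gem version you use), the driver can be updated once before the parallel workers start:
# Rakefile – expose the rake tasks that ship with the webdrivers gem
require "webdrivers"
load "webdrivers/Rakefile"

# Then update ChromeDriver once before spawning the parallel test processes:
#   bundle exec rake webdrivers:chromedriver:update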
Migrating our Jasmine test suite
We use Karma to run our Jasmine tests in headless mode. Karma works with all of the most popular testing frameworks (Jasmine, Mocha, QUnit).
Migrating Acceptance tests
We’ve hooked up Capybara with the selenium-webdriver gem to drive our tests in Headless Chrome. Previously we used capybara-webkit, but that only drives the now-deprecated QtWebKit browser, whereas selenium-webdriver opens up the possibility of testing on a variety of browsers.
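As an illustration, the kind of driver registration involved looks roughly like this (a minimal sketch – the driver name and Chrome flags are assumptions rather than our exact configuration):
# Register a Capybara driver that runs Chromium/Chrome headlessly via Selenium
Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  options.add_argument("--headless")
  options.add_argument("--disable-gpu")
  options.add_argument("--window-size=1400,1000")
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end

Capybara.javascript_driver = :headless_chrome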
On the downside, a whole host of tests started failing when we made this switch. Here’s a list of gotchas we encountered:
Many text assertions changed because Chrome returns text closer to what users actually see. We corrected the assertions to match Chrome’s output; for example, WebKit ignores non-breaking spaces but Chrome returns them.
Capybara-webkit provides the have_http_status and request.headers methods, but Selenium does not provide any request/response inspection methods. We added a middleware that intercepts requests so we can inject headers from our tests, along the lines of the sketch below.
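A minimal sketch of that idea (the class name and the way headers are registered from a spec are hypothetical, not our exact implementation):
# Rack middleware that merges headers registered by a test into each request
class TestHeaderInjector
  class << self
    attr_accessor :headers # set from a spec, e.g. { "X-Forwarded-For" => "203.0.113.7" }
  end

  def initialize(app)
    @app = app
  end

  def call(env)
    (self.class.headers || {}).each do |name, value|
      env["HTTP_#{name.upcase.tr('-', '_')}"] = value
    end
    @app.call(env)
  end
end

# Added to the middleware stack in the test environment only, e.g.
#   config.middleware.use TestHeaderInjector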
The methods to set or delete cookies are different. Selenium is also strict: you cannot set cookies until you have visited a page on the domain you intend to scope the cookies to.
# In capybara-webkit
page.driver.clear_cookies
page.driver.set_cookie(
"fa_user_session_key=#{sign_cookie(user_session.key)}; path=/; domain=#{user.account.subdomain}.lvh.me"
)
visit(login_path)
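For comparison, the Selenium equivalent goes through the browser’s manage interface, and only works after visiting a page on the target domain (a sketch – the initial page visited here is an assumption):
# In selenium
visit(root_path) # must be on the target domain before touching cookies
page.driver.browser.manage.delete_all_cookies
page.driver.browser.manage.add_cookie(
  name: "fa_user_session_key",
  value: sign_cookie(user_session.key),
  path: "/",
  domain: "#{user.account.subdomain}.lvh.me"
)
visit(login_path)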
Selenium does not have a built-in way to handle downloads, so we added a download helper to fetch downloaded files. You can fetch the last downloaded file using the last_download! method in a feature spec (see the sketch below). We run tests in parallel worker processes, so we maintain a separate download directory for each worker to avoid races.
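A minimal sketch of what such a helper can look like (the directory layout and constant names here are illustrative assumptions):
# Helpers for feature specs that exercise file downloads
module DownloadHelpers
  # One download directory per parallel worker to avoid races
  DOWNLOAD_DIR = Rails.root.join("tmp", "downloads", ENV.fetch("TEST_ENV_NUMBER", "0"))

  # Waits for Chrome to finish writing, then returns the newest downloaded file
  def last_download!
    Timeout.timeout(10) do
      loop do
        files = Dir[DOWNLOAD_DIR.join("*").to_s].reject { |f| f.end_with?(".crdownload") }
        return files.max_by { |f| File.mtime(f) } unless files.empty?
        sleep 0.1
      end
    end
  end
end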
element.send_keys only works on focusable elements. For example, sending an escape keypress to a div (not a focusable element) closes a modal window in WebKit, but in Selenium we had to send the keys to a focusable element instead.
# In capybara-webkit
find(".fe-Modal[data-modal-name='practice_dashboard_sample']").send_keys(:escape)
# In selenium
within ".fe-Modal[data-modal-name='practice_dashboard_sample']" do
find(".fe-Modal-closeButton").send_keys(:escape)
end
WebKit handles JavaScript confirmation dialog boxes so your test doesn’t have to. In Selenium, the click action needs to be wrapped in an accept_confirm or dismiss_confirm block.
# In capybara-webkit
click_link("Delete Yodlee")
# In selenium
accept_confirm do
  click_link("Delete Yodlee")
end
Selenium cannot find empty or hidden elements, such as checkboxes and empty fields, when Capybara.ignore_hidden_elements is set to true. We fixed these tests by passing visible: :any to find methods or by setting Capybara.ignore_hidden_elements = false.
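For example (the selector here is hypothetical):
# Assert on a checkbox that Selenium would otherwise treat as hidden
expect(page).to have_css("input[name='include_sales_tax']", visible: :any)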
Selenium does not support the .trigger method, so you need to perform or simulate the event instead (for example, using hover).
# In capybara-webkit
within "[data-target$='practice-select.results']" do
  find("option[value='#{practice.id}']").trigger(:mouseover)
end
# In selenium
within "[data-target$='practice-select.results']" do
  find("option[value='#{practice.id}']").hover
end
Noteworthy Selenium driver configs
Non-headless mode
We’ve also enabled non-headless mode in Capybara’s Selenium driver config, which allows us to debug tests in a visible browser window by setting an environment variable.
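A sketch of how that can be wired up, extending the registration shown earlier (the environment variable name is an assumption):
Capybara.register_driver :headless_chrome do |app|
  options = Selenium::WebDriver::Chrome::Options.new
  # Run with a visible browser window when debugging, e.g. HEADLESS=false bundle exec rspec
  options.add_argument("--headless") unless ENV["HEADLESS"] == "false"
  Capybara::Selenium::Driver.new(app, browser: :chrome, options: options)
end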
Logging errors from the driver and browser
Selenium WebDriver accepts loggingPrefs capabilities to capture browser and driver logs.
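A hedged sketch of that configuration, and of reading the captured logs back afterwards (exact capability handling varies between selenium-webdriver and ChromeDriver versions):
options = Selenium::WebDriver::Chrome::Options.new
options.add_argument("--headless")

capabilities = Selenium::WebDriver::Remote::Capabilities.chrome(
  loggingPrefs: { browser: "ALL", driver: "ALL" }
)

Capybara.register_driver :headless_chrome do |app|
  Capybara::Selenium::Driver.new(
    app,
    browser: :chrome,
    options: options,
    desired_capabilities: capabilities
  )
end

# Later, e.g. in an RSpec after hook, dump anything the browser logged:
page.driver.browser.manage.logs.get(:browser).each { |entry| puts entry.message }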
Taking regular time out to focus on self-improvement can have concrete benefits for both you and your organisation. These benefits could include becoming more confident in your role, getting that promotion, or helping you become a more collaborative and communicative team member.
It surprises me that lots of people I speak to aren’t nearly as excited about personal development as I am. If you feel guilty about taking time out for personal development, or you’re not sure where you’d even start, read on and let me try to convince you that it can work for you.
What is personal development?
The idea of personal development is a fairly simple one. By regularly dedicating time to learn new subjects and improve upon things you know already, you can progress further towards a career goal.
In my experience, there are two crucial parts: the plan and the implementation.
The plan involves taking time to think about where you’d like to be in the future and what steps you can take to get there.
The implementation is where most people lose momentum. This is where you regularly dedicate time to focus on the areas you have identified, helping you to progress towards your goal.
You can shape your personal development to fit your learning style and the areas you want to study. For example, maybe your goal is to be a more confident public speaker. As part of your personal development plan you could spend time researching a topic and presenting your findings to your colleagues after a few weeks. Maybe you’re working on a project that makes use of an API, but you want to know more—what is OAuth and how does it work? How is it different from OpenID? Spending just an hour a week to read up and eventually contribute to an open-source project could be a way for you to improve your knowledge.
Personal development has been absolutely crucial for me in levelling up as an engineer. I’ve used it to study areas such as Test Driven Development, Design Patterns, UML, and even for learning tools like Vim, tmux, and the Git command line. All of which have had a direct positive impact on the projects I’ve worked on.
How I approach personal development
For me, the most important thing is that I have fun with it. It’s very easy to get bored and do something else, so I make it an event I look forward to. That means finding a nice cafe to work in, snoozing Slack notifications, and sipping fancy coffee.
I track my personal development through a Trello board.
Every 6 weeks I re-evaluate the board. I look at the areas to improve based on feedback from colleagues and decide what my goals should be. I keep my core values in mind here too – these are some general areas I’ve identified that help me do my best work, and they have their own column on the board. This process is working OK for me right now, but it’s important to point out that it’s very fluid, and that I change things around all the time. Maybe I’ll plan for 8 weeks instead of 6, or maybe I’ll abandon a goal. That’s all perfectly fine because I own my personal development and I construct it in the way that works for me at that time.
Common pitfalls
Ever since joining FreeAgent in 2015 I’ve been taking time for personal development. It’s something that has thankfully been encouraged by my team from the start. I’ve tried lots of different techniques with varying degrees of success, so some of the problems I’ve run into along the way are outlined below.
Not owning the plan
This is fairly common when you’re new to personal development. When you have a well-meaning mentor that spends time with you to develop the plan, it’s very easy to go along with everything they say. The result is a plan that doesn’t fit right because someone else created it for you.
Here’s a real-life example. Several years ago, my mentor and I decided application performance would be a good area for me to focus on. This sounded good to me at the time and we agreed I would read 2 relevant books and give a talk on our application’s performance in 3 months. I never got around to it. There were other things I was more interested in at the time and so the personal development didn’t take priority for me.
I didn’t own the plan and it showed over time.
I scheduled the application performance work for 3 months in the future because that felt far enough away that I could “just do it later”, which leads on to our next common pitfall.
Adding too much to the plan
It’s definitely a good idea to think about your long term career. Are you in a job that you enjoy? What kind of role would you like to have 5 years from now? Those are the things I think about every 6 weeks when I create a fresh personal development plan. What I have found is that sometimes I change my mind and that’s totally fine. If I create a personal development plan that covers a full year then all the things I can’t really be bothered to do will get pushed later and later into the plan.
With the previous example, I decided I would focus on application performance in 3 months’ time. This was because when I created the plan I was more interested in learning design patterns and systems architecture and wanted to focus on those straight away. By saying I would work on application performance in 3 months, I was really saying “this isn’t a focus for me right now and I probably won’t get round to it”. I just didn’t realise it at the time.
What works for me is keeping my plan nice and short. Every 6 weeks I have a career check-in. I ask myself those big career questions, like am I happy where I am? What role am I working towards? I then decide what to focus on over the next 6 weeks to help me move towards a goal. I find that 6 weeks is short enough that I can only really fit in the things I’m interested in. For the tasks I would usually kick down the road, I explicitly say “Yes, this may be valuable, but I’m not going to focus on it right now and that’s OK”.
Not scheduling time for it
Creating a personal development plan is one thing, but dedicating time to it each week is another challenge altogether and it’s something I’ve struggled with over the years. Maybe you have a big and important project that you think should take priority, or maybe you just feel guilty about taking time out for studying. Those feelings are natural at first, but I’d urge you to stick with it. Yes, there may be more pressing work, but you will likely always have a “big and important project” to work on. I find it useful to establish a routine and communicate it with my colleagues. If your teammates understand that you take an hour to study every Thursday afternoon, it can be factored into planning. Remember, there’s no need to feel guilty about taking this time—it’s an investment in you, and the returns will benefit everyone you work with.
How do you approach personal development? I’d love to hear about what works for you and your team.
It has been just over a year since we shipped our replatformed iOS app to great reviews and many happy customers. In this post I would like to take a look back at where we came from and how the FreeAgent mobile app is shaping up for the future.
We started developing the mobile app way back in 2014. At the time the entire engineering team was ~20 people and our engineers were mainly skilled in web development, so Cordova + React was a sensible platform to use. It allowed us to quickly get an app launched as well as keep it updated for both iOS and Android with a small team. For the data layer we used Backbone, which eventually we had to work around when we moved to a Flux-based app state.
In order to compile and ship the old mobile app we had a complex setup that stemmed from our initial MVP efforts: a Rails skeleton that would compile CoffeeScript and SASS assets and move them over to an output folder. A script would copy the assets to a path expected by Cordova and would then prepare the Cordova app for building. This process took around a minute on average, and only after it finished could we run or debug the app in Xcode. (To avoid this, we would usually run a Rails server and use the app in Chrome, but using Xcode was sometimes necessary.)
As the years went by and the app grew in functionality – we added push notifications, Haptic Touch menu, file attachments among other features – issues started to appear with our tech stack:
Cordova used UIWebView which was deprecated by Apple and did not contain the newer Nitro JavaScript engine that WKWebView benefited from. This resulted in slower app speed, especially on older devices such as iPhone 4S.
WebKit would behave differently between iOS versions, sometimes leading to visual glitches that were not straightforward to fix across all versions.
We were limited in our accessibility efforts, as Dynamic Text and UIAccessibility were not available in web views.
Various keyboard issues arose, such as not being able to use more precise keyboard types and viewport resizing issues when the keyboard covered the screen.
CodePush deployments would fail for a portion of the user base, leaving them on older code.
Flow of data between JavaScript and native Swift code was awkward and had to go through native plugins. As time went by we found ourselves writing more and more native code.
A good chunk of effort went into duplicating UI elements such as tabbed navigation or time pickers. These elements have 10 years of development at Apple behind them and duplicating them ourselves resulted in inferior solutions that did not look or behave exactly like the native UIKit controls.
Most importantly, we knew the user experience could be better. We pride ourselves on the excellent user experience of FreeAgent’s desktop app, but the same could not be said for our Cordova-based app. We also wanted to explore other areas, such as deeper Haptic Touch integration, swipe actions, richer notifications, widgets and more system integration than we could achieve using Cordova.
Starting the replatforming
At FreeAgent we use RFDs to decide on architecture and future direction of our code. In March 2018 the mobile team started a discussion about why replatforming to use 100% native code made sense. We had already made a few attempts at this during our popular Hack Days – including a sample Apple Watch companion app – so we had some idea of what replatforming would entail.
We set out a 1-year timeline in which we would keep the existing Cordova app in maintenance mode while gearing up for the native rewrite:
2-3 months for hiring and onboarding
2 months for banking (the most challenging area of the app)
1.5 months for invoices and estimates
1.5 months for bills and expenses
1 month for timeslips
1-1.5 months for beta testing and bug fixing
At the time, the mobile team consisted of just 2 people (Anup and me) and we knew we needed more in order to build and maintain the project, especially as the iOS and Android projects would be completely separate. Over time the team grew from 2 people to 10: 3 iOS engineers, 3 Android engineers, 1 designer, 1 test engineer, 1 engineering manager and 1 product manager.
Once we had the team in place it was time to start the rewrite. The first version was intended to match the functionality, look, and behaviour of the old Cordova app. This approach allowed us to target a fixed spec and have a solid foundation upon which to build new features after the initial release. It also helped us to onboard new developers more easily, as they had an existing app to recreate.
Native tech stack
Based on our experiences with the hybrid Cordova stack, we knew we wanted to write the new app in as much native code as possible. Swift was the obvious choice, along with Objective-C for any code we could copy over from the legacy app, such as the Face ID prompt.
The rest of the project uses the standard iOS development toolset.
What’s more interesting is the project organisation. We keep all our own code in a single Xcode project and a single Xcode target, with the files grouped by feature.
Technically, we split the app into 4 layers:
API / Model: contains all the logic to communicate with our public API, including API clients and data models such as invoices or bank transactions
View Model: classes that view controllers bind to in order to get the data to display
View: the view controllers and views that render whatever their view model exposes
Coordinators: navigation between screens, allowing us to keep every view controller independent of others
API layer
The API layer is simple and consists of an API client using URLSession, as well as data models that match our API data structure. These models conform to Codable and are decoded using JSONDecoder built into Swift.
We aim to keep the API layer lower-level and independent of the rest of the app’s code; eventually we may move it into its own framework.
One interesting bit in the API layer is the shape of the API client itself.
We have generic methods that map to the HTTP methods on the server. Each takes in an ApiResource (all our models conform to this) and an endpoint. The endpoints are specified in each model like this:
struct EmailTemplate: ApiResource {
    enum Endpoint: ApiHTTPEndpoint {
        case invoiceEmail(id: String)
        case creditNoteEmail(id: String)
        case estimateEmail(estimateId: String)

        var rootPath: String {
            switch self {
            case let .invoiceEmail(id):
                return "invoices/\(id)/email_template"
            case let .creditNoteEmail(id):
                return "\(CreditNote.Endpoint.creditNotesPath)/\(id)/email_template"
            case let .estimateEmail(id):
                return "estimates/\(id)/email_template"
            }
        }
    }
}
MVVM
Model-View-ViewModel (MVVM for short) is a pattern created at Microsoft by Ken Cooper and Ted Peters. It allows greater separation between business logic and view-related logic. The basic idea is that we have a view controller which grabs data from, and issues commands to, the view model. The view controller is not bound to anything else and does not know any business logic details. The view model is the one bound to data sources, business logic, etc. It also transforms raw values into user-friendly ones for display.
There are various ways to use MVVM. The most powerful is to pair it with something like SwiftUI or XAML and use data binding, or to use a reactive framework to bind values to the UI. However, as we did not require that complexity, our application of MVVM is relatively simple:
View models have properties, which the view controller reads, and methods to do work such as loading data from the API.
View controllers have a viewModel property, which is injected by coordinators. They observe this object using a closure, which is called whenever something of interest happens in the view model.
When a change happens in the view model, the view controllers update the UI to match.
Let’s take a real example from our email screen:
class EmailViewModel {
    enum Change {
        case loading(Bool)
        case updated
        case sent
        case failure(Error)
    }

    var didChange: ((Change) -> Void)?
    private let apiClient: ApiClient

    func send() {
        let emailWrapper = EmailWrapper(email: email, resource: resource)
        apiClient.post(emailWrapper, endpoint: sendEndpoint) { [weak self] result in
            switch result {
            case .success:
                self?.didChange?(.sent)
            case let .failure(error):
                self?.handleSendError(error)
            }
        }
    }

    func loadData() {
        didChange?(.loading(true))
        apiClient.get(templateEndpoint, as: EmailTemplate.self) { [weak self] result in
            guard let self = self else {
                return
            }
            guard case let .success(emailTemplate) = result else {
                self.email.subject = self.defaultSubject
                self.didChange?(.loading(false))
                self.didChange?(.updated)
                self.loadVerifiedEmails()
                return
            }
            self.email.to = emailTemplate.recipient ?? self.email.to
            self.email.subject = emailTemplate.subject
            self.email.body = emailTemplate.body.html
            self.loadVerifiedEmails()
        }
    }

    private func loadVerifiedEmails() {
        apiClient.get(VerifiedEmailAddresses.Endpoint.all, as: VerifiedEmailAddresses.self, completion: { result in
            guard case let .success(verifiedAddresses) = result else {
                return
            }
            let emails = verifiedAddresses.emails
            self.verifiedEmailAddresses = emails
            self.email.from = emails.first(where: { $0 == self.defaultSender.email }) ?? emails.first
            self.didChange?(.loading(false))
            self.didChange?(.updated)
        })
    }
}
class EmailViewController: UIViewController, AnalyticsScreenViewTracking {
    var viewModel: EmailViewModel!

    override func viewDidLoad() {
        super.viewDidLoad()
        // More UI setup here...
        viewModel.didChange = { [weak self] change in
            DispatchQueue.main.async {
                self?.viewModelChanged(change)
            }
        }
        viewModel.loadData()
    }

    func viewModelChanged(_ change: EmailViewModel.Change) {
        switch change {
        case let .loading(isLoading) where isLoading == true:
            emailEditorView.isEditable = false
            loadingActivityIndicator.startAnimating()
        case let .loading(isLoading) where isLoading == false:
            emailEditorView.isEditable = true
            loadingActivityIndicator.stopAnimating()
        case .updated:
            viewModelUpdated()
        case .sent:
            navigationItem.rightBarButtonItem?.isEnabled = true
            isFinishing = true
            delegate?.emailViewControllerDidSend(self)
        case let .failure(error):
            handleError(error)
        default:
            break
        }
    }
}
The view models have a didChange property, which is called whenever something happens that requires an update to the UI. They use an injected apiClient to call the API, and when the API responds the view model emits events.
The view controller has a single method that is called in response and switches over the various event types for its particular view model. Handling all events in one method like this is something that Apple recommend in their sample code, and we found it helpful when debugging issues because the update logic lives in just one place in the view controller.
Coordinators
Usually navigation in iOS is done using storyboards and segues. These become quite inflexible as a project grows, for a few reasons:
View controllers need to know about each other and what data the next one in the flow holds
Navigation flow is fixed and is harder to dynamically switch based on conditions such as feature switches or A/B testing
Up until iOS 13 it was not possible to intercept the segue to run custom initialisation code for a view controller
Storyboards and segues cannot be easily unit tested
After looking at a few solutions for this, we settled on the coordinator pattern. The coordinators are simple classes that push view controllers onto the navigation stack. At the root of our app we have a “master” coordinator called AppCoordinator. This handles the login/logout flow, universal link navigation, showing the Face ID prompt on resume and processing Haptic Touch actions from the home screen icon.
Further down we have coordinators for each section of our app: LoginCoordinator, OverviewCoordinator, BankingCoordinator, etc. Each of these has methods to set up and push the relevant view controllers onto the navigation stack. Navigation between views is handled via delegation: a view controller tells its coordinator when the user takes an action, and the coordinator decides which screen to present next.
Cell providers
Our app has complex views that change depending on various conditions. For example, we show more options if mileage VAT is reclaimed on fuel. Bank transactions also have various types which correspond to different options on screen.
The easiest way to implement dynamic views is to use a collection view. However, we did not wish to bloat our view models or view controllers with a lot of boilerplate code to handle various conditions and states. We ended up extracting those concerns in what we call “cell providers”.
There are two main aspects to cell providers:
The cell providers themselves, which register cell XIBs with the collection view and vend reusable cells to the view controller
Cell model providers, which provide models bound to cell types to the view model
When a view appears on screen, the view controller asks the view model how many cells it should display, what sections it has etc.:
class InsightsViewController: UIViewController {
    var viewModel: InsightsViewModel!
    private var cellProvider: InsightsCellProvider!

    override func viewDidLoad() {
        super.viewDidLoad()
        cellProvider = InsightsCellProvider(
            collectionView: collectionView,
            columnDelegate: self
        )
        // ...
    }

    // UICollectionViewDataSource
    func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        guard let panel = viewModel.cellModel(at: indexPath) else {
            fatalError("Could not find data for overview section at \(indexPath)")
        }
        return cellProvider.cellForItem(at: indexPath, for: panel, in: collectionView)
    }

    // ...
}
The view model, in turn, asks its cell model provider what models it has and returns those to the view controller.
The view controller passes the cell models onto the cell provider, which knows how to vend various cells depending on the models it receives:
class InsightsCellProvider {
    enum CellIdentifiers: String {
        case overviewHeaderCell
        case overviewLabelCell
        case overviewDebugCell
        case notificationCell
        case overviewBannerCell
        case overviewColumnsCell
    }

    private(set) var collectionView: UICollectionView

    func registerCells() {
        collectionView.register(NotificationCell.nib, forCellWithReuseIdentifier: CellIdentifiers.notificationCell.rawValue)
        collectionView.register(OverviewHeaderCollectionViewCell.nib, forCellWithReuseIdentifier: CellIdentifiers.overviewHeaderCell.rawValue)
        collectionView.register(OverviewLabelPanelCollectionViewCell.nib, forCellWithReuseIdentifier: CellIdentifiers.overviewLabelCell.rawValue)
        collectionView.register(OverviewBannerCollectionViewCell.nib, forCellWithReuseIdentifier: CellIdentifiers.overviewBannerCell.rawValue)
        collectionView.register(OverviewColumnsCell.nib, forCellWithReuseIdentifier: CellIdentifiers.overviewColumnsCell.rawValue)
    }

    func cellForItem(at indexPath: IndexPath, for cellModel: CollectionViewCellModel, in collectionView: UICollectionView) -> UICollectionViewCell {
        switch cellModel {
        case let cellModel as OverviewHeaderPanel:
            return headerCell(at: indexPath, withModel: cellModel, in: collectionView)
        case let cellModel as OverviewLabelPanel:
            return labelCell(at: indexPath, withModel: cellModel, in: collectionView)
        case let cellModel as NotificationViewModel:
            return notificationCell(at: indexPath, withModel: cellModel, in: collectionView)
        case let cellModel as OverviewBannerViewModel:
            return bannerCell(at: indexPath, withModel: cellModel, in: collectionView)
        case let cellModel as OverviewColumnsPanel:
            return columnsCell(at: indexPath, withModel: cellModel, in: collectionView)
        default:
            fatalError("\(type(of: cellModel)) does not have a counterpart cell")
        }
    }

    private func labelCell(at indexPath: IndexPath, withModel cellViewModel: OverviewLabelPanel, in collectionView: UICollectionView) -> UICollectionViewCell {
        let cell: OverviewLabelPanelCollectionViewCell = collectionView.dequeueReusableCell(
            withReuseIdentifier: CellIdentifiers.overviewLabelCell.rawValue,
            for: indexPath
        )
        cell.setData(fromPanel: cellViewModel)
        return cell
    }

    // ...
}
This approach allowed us to keep the view models and view controllers slim. It also allowed us to more easily mock and test relevant view logic depending on state.
Testing
Our testing approach involves a mix of automated and manual testing. Initially, our strategy was to build up UI tests that matched user journeys in the app, and unit tests for the most heavily used features and their edge cases. We quickly found out, however, that UI tests were really slow on CI. As a result, we shifted our strategy to cover as much of the app as we could with unit and system tests, alongside manual release candidate testing, before pushing updates to the App Store.
We use the standard XCTest framework without any extra libraries. Due to our heavy use of dependency injection, testing is quite easy, especially around API layers and view models. This gives us good confidence that changes we make to the app logic will not break existing functionality.
As we build up our automated tests, manual testing is increasingly reserved for hard-to-test areas such as the Face ID prompt, Haptic Touch quick actions and attachment uploads.
Moving forward
We are really happy with how the native version of the iOS mobile app has turned out and it seems that our customers are as well. We’ve received a lot of feedback that the app is more responsive and stable since the rewrite, and our App Store rating has grown to a stunning 4.9 stars. 98% of sessions are crash-free. Accessibility is much improved and we’ll continue to make strides in this area, as making our products accessible is an important goal for all of us at FreeAgent.
On the engineering side, we are now much more agile and have fewer tech stack issues to worry about. Deploying updates means waiting for App Store review, but the upside is that most users will have their app updated automatically by iOS. Sprints are now more structured as we aim to ship enhancements and new features at the end of our 2-week sprints.
Looking forward, there is plenty of scope for improving the app to ensure the best experience for our users. Besides new features (we’re keeping them under wraps for now!), we have investigated adding support for iOS features such as Dynamic Type and Dark Mode. Now that our team is strong and the foundation of the mobile app is solid, we’re looking forward to making the best iOS accounting app out there!