Empowering our practices: Take 2

Posted on October 30, 2020

‘How can we use our data to understand and empower our accountancy practice partners?’ This is the question I posed in my previous blog post about delivering practice insights last summer. It was the foundation of my project last year, and it’s the one I came back to continue answering this year.

To recap how the project was left at the end of last summer: I had created three dashboards that summarised the usage of FreeAgent by accountancy practices and their clients. These dashboards were used by FreeAgent to speak to our accountancy practice partners and inform them of how their clients interact with the app, including:

  • Usage of the mobile app/desktop
  • Activity by account managers versus end users
  • Which clients have not been active for over 30 days

When I finished my last internship, the dashboards were being used frequently to start conversations with the accountants about which areas their clients were excelling in and which might need some improvement. There was a lot of excitement from the accountants about getting access to this data: it was information they simply hadn’t had in the past, and it could help them start necessary conversations with clients who weren’t using FreeAgent as expected.

The buzz and excitement around these ‘Practice Insights’ was even greater than we’d imagined, and it got us thinking: ‘How can we make this even better? How can we improve this practice insights project to empower our accountancy practice partners even more?’ That was the beginning of the proof-of-concept ‘self-serve’ practice insights tool created over the last three months.

The Problem

I’m going to rewind slightly and first cover a few issues that came up with the practice insights dashboards. 

Practices have individual needs

Though the dashboards were really well received, they are certainly far from a ‘one size fits all’ solution. As someone with no background in accountancy, one thing I have learned throughout this project is that accountancy practices all have different processes, different priorities and different ways of working. They therefore have different needs around which data should be monitored to help streamline their processes and engage their clients. An iterative process of implementing feedback from accountants allowed us to create dashboards that covered the needs of a broad range of accountancy practices, but practices are still limited to the information shown on those dashboards, even though we have much more to offer.

Practice dashboards can’t be sent interactively 

The practice insights dashboards are interactive: users can drill into individual companies to get more detail. However, when an accountant is sent a dashboard, it is only in PDF format. This means they lose the interactivity of the dashboard whenever it is sent to them, and they need to arrange a screen-sharing call with us at FreeAgent to get the detail they need – and even then they are still limited to the data provided by the three dashboards, which is more of a summary than a detailed breakdown of what clients are doing.

Practices unable to get granular insights from our reports

To combat the lack of detail in the practice insights dashboards, my colleague Jack created a ‘Practice Engagement Insights’ report, which gives a more detailed breakdown of activity for the companies in a practice. It can be downloaded as a CSV file and sent to a practice so they can play around with the data themselves if they want to. However, this report is still limited to activity over the date range we selected, and it requires the accountants to know how to manipulate CSV data to get what they want.

So, to summarise these issues: if the practice insights dashboard is not granular enough and the practice engagement report is not flexible enough, how can we create a product that is both flexible and detailed enough for accountants to get the insights they need for their individual practice? A possible solution: a ‘self-serve’ practice insights product that allows accountants to select the data they need, in the format they need it.

The Solution

Using our business intelligence tool, Looker, we packaged the insights in a way that allows ‘self-service’, and this is currently being trialled with four accountancy practices. We gave those practices access to Looker and created a project that allows them to explore their own data and build their own reports, depending on what they are interested in. The data is limited to the dimensions and calculations that we have created for the practices, which are split into four categories:

  • Company API Usage – data on client API usage (which API integrations have been set up and how often they are used)
  • Company Key Dates – data on important dates for clients (e.g. first and most recent VAT returns)
  • Company and User Activity – data on minutes of activity on a client account broken down by users and account managers
  • Practice API Usage – data on practice API usage (which integrations have been used by the practice)

Explore options for accountants

These categories of data are called ‘explores’ in Looker.

Company and User Activity explore

Within an explore, the accountant can pick and choose which data points they would like to see. In this example, the Company and User Activity explore, they could look at the number of minutes of activity from the client and from the accountant for each company over however many days they are interested in.
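To give a flavour of what an explore query boils down to under the hood, here’s a minimal sketch using Looker’s Python SDK. The accountants build these queries through Looker’s point-and-click interface rather than in code, and the model, explore and field names below are hypothetical stand-ins rather than our actual project:

```python
import looker_sdk
from looker_sdk import models40

# Authenticates via a looker.ini file or LOOKERSDK_* environment variables
sdk = looker_sdk.init40()

# Pick the fields you care about from the explore and filter to a date range.
# Model, explore and field names here are illustrative placeholders.
query = models40.WriteQuery(
    model="practice_insights",
    view="company_and_user_activity",  # the explore
    fields=[
        "company.name",
        "activity.client_minutes",
        "activity.account_manager_minutes",
    ],
    filters={"activity.date": "30 days"},
)

# Run the query and print the results as JSON
print(sdk.run_inline_query(result_format="json", body=query))
```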

We are aware that access to this tool could be overwhelming at first, so we also created some example dashboards (based on the previous ‘Practice Insights’ dashboards) to help get them started.

Pre-made dashboards available to accountants

Future of the project

So far we have received a lot of useful feedback from the accountants who have trialled the tool. It is interesting to see how differently the accountancy practices use it – some find it helpful for managing their own practice but are less interested in the details of what their clients are doing, while others have found the details of client usage incredibly helpful for chasing up clients where necessary.

A part of the tool that one accountant finds extremely useful could be of little use to another – but this is exactly what we wanted: something with variety, something that can be customised by each accountant to give them what they need. One thing is for sure – they are all excited about getting access to this data.

As for the future of the project? As I mentioned earlier, this is very much an early-days proof of concept, intended to show that this is something that could be really valuable to our accountancy practice partners, so how it looks in the future depends on many different variables. That’s the beauty of this project: it has the scope to become something much bigger and much greater, in whatever shape that may be. The bottom line is this: we know that our accountants are passionate about data, and we are excited to continue down this path and find the best way to get their data to them.

Breathing New Life into Dormant Code

In the Tax Engineering team at FreeAgent we’re currently working on adding Corporation Tax and Final Accounts filing to our application as we continue our mission to help our customers relax about tax. We’ve been working on these features since June and recently released the Final Accounts report and the CT600 form, but what you might not know is that work on this project originally began back in 2017. So what’s the reason it took us so long to release anything? Three words: Making Tax Digital.

Making Tax Digital (MTD) was first announced by the government in 2015 and its aim is to “make it easier for individuals and businesses to get their tax right and keep on top of their affairs”. It was originally planned to be introduced in 2018 but was eventually delayed until April 2019.

What this meant for the Tax Engineering team was that work we had started on Final Accounts and Corporation Tax filing had to be halted as we shifted our focus to work on supporting VAT filing via MTD. Once that work was completed, several changes within the Tax Engineering team and other annual updates to established features meant that we weren’t able to pick the work up again until June 2020.

By that time the code we were working with was over two years old, and a lot can change in that time – not least the team itself. None of the software engineers who originally worked on the project were still with the team.

What is Dormant Code?

Dormant code is code that is in the codebase but is not being actively used. In our case, some functionality had been built and was in the codebase, but it was hidden behind a feature flag. In addition, we had a work-in-progress pull request (PR) from the same time that was never merged into the main app.

How Did We Start?

1. Talked to the people who wrote it originally

We were fortunate that one member of the team that originally worked on the project was still with the company (albeit now as part of another team), and talking to them as a first step was hugely beneficial. They could talk us through the decisions made at the time and highlight any areas that might need improvement.

2. Got it running

Once we felt we had a general idea of what the code was trying to do, the next step was to actually get it running. We ran any tests already in place and updated them where necessary to get them passing. Next, we fired up the app and checked that the functionality worked and did what was originally intended, making any necessary changes along the way.

3. Got it up to date

Technical Changes

Programming languages and frameworks are constantly evolving. What was the latest and greatest technique even a year ago might be seen as out of date now. For example, we had some custom code which rendered a view outside of a controller. We were on Rails 4 at the time, but have since upgraded to Rails 6, and a feature introduced in Rails 5 (ActionController::Renderer, which lets you render views outside of a controller via ApplicationController.render) enabled us to update the old code to use the new technique, making it much clearer and more concise.

Domain Changes

As well as language changes, there may have been changes in the domain you’re working in. Our Final Accounts work required us to integrate with Companies House APIs and their guidance on how to do that had changed too. We went through the dormant code with a fine-tooth comb and made sure that we were still meeting all of their requirements.

4. Built on top of it

Once we were happy that it was up to date and all the tests were passing, we could confidently build on top of it. This was the point at which we tried integrating the old PR, pulling out parts we wanted to keep and discarding other parts we felt were no longer relevant. We added additional tests for areas we felt needed them and started working through the to-do list we’d been compiling in steps 1 to 3!

Forgotten, But Not Gone

If there’s one thing we learned while working on this project, it’s that dormant code doesn’t have to be dead code. If you can gain an understanding of what was originally done and be confident that it is still fit for purpose, then why reinvent the wheel? Why not just improve the wheel instead?

Can’t someone else do it? Automating Looker Housekeeping in Two Days

Posted on October 2, 2020

Twice a year at FreeAgent we hold Hack Days, where the entire company gets a couple of days to work on whatever takes their fancy. We’ve had all sorts of projects over the years, from a text recognition “Receipt Scanner” which automatically creates an expense based on the image of a receipt, to an “Office Olympics” with a frantic bean bag race to decide the winner.

This year, I decided to build something to help us declutter our internal reporting tool – Looker – by automatically identifying and deleting unused content. This would make it quicker and easier for our users to find useful content and also speed up our Looker development (since our changes wouldn’t be validated against as much content). Lea, our Data Science Intern, was interested in the project too, and we started by putting together a quick list of things that the Looker Content Cleaner (LCC) should be able to do. This was our list of requirements:

  • Identify content that had not been accessed in 90+ days and move it to the Trash folder (“delete”)
  • Permanently delete (“purge”) anything that has been sitting in the Trash for 90+ days
  • Allow for some content to be marked as “do not delete”, regardless of usage figures (some content is only used if something breaks, or is looked at annually, etc.)
  • Retain a record of “what and when” has been deleted
  • Run automatically (so we don’t need to remember to run it)

Now that we knew what we wanted the LCC to do, we created a quick plan in Notion. We used this plan to track our progress and make sure we were both working in tandem. That was vital since Hack Days only lasts two days – we needed to be quick and efficient! We decided to build the LCC in our ETL tool, Matillion.

Now that we had the plan, it was time to build!

Our Notion Planning Page

The Build

A portion of the build was spent configuring the Matillion-to-Looker API connection. This wasn’t difficult, but it was a bit fiddly, so I’ll perhaps cover it in a subsequent blog post. Right now, I want to focus on how we fulfilled the requirements we set for the tool itself.

Requirement 1: Identify unused content and move it to the Trash folder

Identifying unused content was quite simple using Looker’s system__activity explore, so we created a saved report (a look) of content that had not been accessed for 90+ days:

An “Unused Content” look
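For the curious, here’s roughly the same query expressed through the Looker API instead of the UI – a sketch using the Python SDK, where the content_usage field names are illustrative rather than copied from our actual look:

```python
import looker_sdk
from looker_sdk import models40

sdk = looker_sdk.init40()  # authenticates via looker.ini or environment variables

# Query Looker's own usage data (the system__activity model) for content
# that hasn't been accessed in over 90 days. Field names are illustrative.
query = models40.WriteQuery(
    model="system__activity",
    view="content_usage",
    fields=[
        "content_usage.content_type",
        "content_usage.content_id",
        "content_usage.content_title",
        "content_usage.last_accessed_date",
    ],
    filters={"content_usage.days_since_last_accessed": ">90"},
)

print(sdk.run_inline_query(result_format="json", body=query))
```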

Now that we had a list of content we wanted to delete, the next step was to move it to the Trash folder. Content in the Trash is still recoverable – you can think of this as a soft delete. So what did the Matillion job that performed this look like? Well, once we had a token to connect to the API, we simply:

  • ran the look of unused content to store its results in a database table
  • compared that unused list against our “do not delete” list of exceptions to build a “to be deleted” list of content
  • iterated over the content in the “to be deleted” list, set its deleted property to true and logged the action in a table (so we have an audit trail)

Moving the content to the Trash

Because the “Unused Content” look contained both dashboards and looks, we needed to make one of two API calls, either update_dashboard or update_look:

This is the O – content_delete job in Matillion, where we move the content to the Trash and log the event in the “audit” table
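Translated out of Matillion into plain Python against the Looker SDK, the soft-delete flow looks something like the sketch below. The look ID, the exceptions list and the result field names are hypothetical placeholders, and the audit logging is reduced to a print:

```python
import json

import looker_sdk
from looker_sdk import models40

sdk = looker_sdk.init40()

# Hypothetical placeholders: the saved "Unused Content" look and the
# "do not delete" exceptions, keyed by (content_type, content_id)
UNUSED_CONTENT_LOOK_ID = "101"
EXCEPTIONS = {("dashboard", "12"), ("look", "34")}

# 1. Run the look to get the list of unused content
unused = json.loads(sdk.run_look(look_id=UNUSED_CONTENT_LOOK_ID, result_format="json"))

for item in unused:
    content_type = item["content_usage.content_type"]  # "dashboard" or "look"
    content_id = str(item["content_usage.content_id"])

    # 2. Skip anything on the "do not delete" list
    if (content_type, content_id) in EXCEPTIONS:
        continue

    # 3. Soft delete: setting deleted=True moves the content to the Trash
    if content_type == "dashboard":
        sdk.update_dashboard(content_id, body=models40.WriteDashboard(deleted=True))
    else:
        sdk.update_look(content_id, body=models40.WriteLookWithQuery(deleted=True))

    print(f"Moved {content_type} {content_id} to the Trash")  # audit logging would go here
```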

Requirement 2: Empty long-deleted content from the Trash

Now that we could move unused content to the Trash, we wanted to empty it of long-deleted content (90+ days counted as “long-deleted” for our purposes). This purge operation is permanent, so we needed to be careful here! To do this, we performed the following steps:

  • made an API call to search for deleted dashboards and looks
  • identified the content that had been deleted more than 90 days ago. This gave us a list of content to be purged
  • compared that list against our do not delete list of exceptions (not strictly needed, but a sensible bit of defensive programming)
  • iterated over the list of content, purged it and logged the action in our audit table

In practice, the API endpoints for looks and dashboards are separate, so we performed these steps separately for each type of content:

Purging content deleted more than 90 days ago
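Sketched again in plain Python with the Looker SDK rather than our actual Matillion job (the exceptions set is a hypothetical placeholder), the purge step looks something like this – the 90-day cutoff check matters because a purge is irreversible:

```python
import datetime

import looker_sdk

sdk = looker_sdk.init40()

# Hypothetical "do not delete" exceptions, keyed by (content_type, content_id)
EXCEPTIONS = {("dashboard", "12"), ("look", "34")}

cutoff = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=90)

# The dashboard and look endpoints are separate, so handle each in turn.
for dashboard in sdk.search_dashboards(deleted="true"):
    # deleted_at is assumed to be a timezone-aware datetime here
    if dashboard.deleted_at and dashboard.deleted_at < cutoff:
        if ("dashboard", str(dashboard.id)) in EXCEPTIONS:
            continue  # defensive check, mirroring the Matillion job
        sdk.delete_dashboard(str(dashboard.id))  # permanent - there's no way back!
        print(f"Purged dashboard {dashboard.id}: {dashboard.title}")

for look in sdk.search_looks(deleted="true"):
    if look.deleted_at and look.deleted_at < cutoff:
        if ("look", str(look.id)) in EXCEPTIONS:
            continue
        sdk.delete_look(str(look.id))  # also permanent
        print(f"Purged look {look.id}: {look.title}")
```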

Requirement 3: Allow content to be marked as “do not delete”

In Looker, we have content that is accessed infrequently (or maybe never), but that we still want to retain. For instance, we have a dashboard which displays details on any data load failures and we only look at this in emergencies. We don’t want to delete this dashboard if we don’t view it for 90 days, as we may need it again in the future.

To ensure content such as this wasn’t caught up in the deletes, we stored details in an exceptions table, which looks like this:

Our exceptions are managed in this table

This table was then used in Matillion to ensure that content on the “do not delete” list was not deleted.

An example of the “looker_content_to_retain” table being used so we can ensure we don’t purge any of those dashboards

Requirement 4: Retain a record of “what and when”

Creating this audit trail was pretty simple. Every time we performed a delete or purge action, we wrote the following details to a table in Redshift:

  • The type of content (look or dashboard)
  • The ID of the content
  • The title of the content
  • What action was performed against the content (delete or purge)
  • When the action took place

Our audit trail
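In our case the Matillion job writes these rows to Redshift, but as a rough Python sketch (with a hypothetical table name and connection details) the write looks like this:

```python
import datetime

import psycopg2  # Redshift speaks the PostgreSQL wire protocol

def log_action(content_type: str, content_id: str, title: str, action: str) -> None:
    """Append one row to a (hypothetical) audit table in Redshift."""
    with psycopg2.connect(host="redshift.example.com", port=5439,
                          dbname="analytics", user="lcc", password="...") as conn:
        with conn.cursor() as cur:
            cur.execute(
                """
                INSERT INTO looker_content_audit
                    (content_type, content_id, title, action, performed_at)
                VALUES (%s, %s, %s, %s, %s)
                """,
                (content_type, content_id, title, action,
                 datetime.datetime.now(datetime.timezone.utc)),
            )

# For example: log_action("look", "34", "Unused Content", "purge")
```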

Requirement 5: Run automatically

Scheduling a job in Matillion takes under a minute. We scheduled the LCC to run just after midnight. The next morning, I checked the audit log and found that the LCC had purged a look that had been deleted 90 days ago. Hurrah!

The first “unmonitored” delete!

Bonus Requirement: Can we report on what the LCC has done in Looker?

As everything went so well, we quickly put together a Looker explore showing what the LCC had done. It simply reads the output log, but it allows our users to see what content has been deleted, and for us to report on its actions in aggregate:

Listing deleted content in Looker

Counting the deletions/purges of looks and dashboards

Conclusions

This is what the final LCC looks like in the Matillion UI:

The LCC in all its glory!

The benefit of both choosing the right tools and having a clear plan was apparent – we were able to get so much done in a short space of time. It also demonstrated that we can make life a little bit easier for our team and users of our data by harnessing the power of the Looker API – this is something I hope to do more with in future.

In terms of the numbers, the LCC has been running in production for a few days now with no issues. So far we have deleted over 100 pieces of content and purged over 250. This represents a ~25% reduction in the number of dashboards and a ~33% reduction in the number of looks. All things considered, the building of the Looker Content Cleaner has been a great success!

Return of the intern

Posted on October 1, 2020

A year on from my first data science internship at FreeAgent, I’m about to start my second one, and I’m getting ready for my second first day. It’s different this time though. Aside from the fact that I am now a fully fledged graduate (scary!), this internship is going to be based entirely within my living room, as the entire company is now working remotely. I had never worked remotely before and I had some concerns. One of my favourite things about working at FreeAgent was the lively office atmosphere and spending time with my team. Although I knew I’d be back in that same team, I was worried that working from home could be quite isolating, and that I might not feel as involved as I did the previous year.

Remote deja vu

Inductions took up the first half of my first day back, along with setting up my new work laptop and familiarising myself with security protocols. It felt like deja vu: a vague familiarity, remembering sitting in the office boardroom last year (with a much higher dose of nerves in my system back then), anticipating what the next three months might bring. In terms of working from home, I was put at ease by the fact that I had worked with almost everyone in the team in person before; there was only one person I hadn’t yet met. Funnily enough, it turned out that the one person I had not met before (Jack) was the person I would be working alongside the most! Luckily for me, Jack has been incredibly helpful and great to work with, and working with new people remotely was not nearly as daunting as I thought it would be. My first day was almost completely meetings: catching up with the team, meeting Jack and hearing about the project I would be working on. Everyone was as welcoming as they were the first time around, and still somehow managed to keep the same friendly, chatty atmosphere as if we were in the office. By the end of my first day back, my main concerns had already been put to rest.

My working from home setup

The first week

Last summer, I remember it taking at least a few weeks before I found my feet and properly settled into the actual work itself, understanding the data model and the technologies that we use. I think I expected it to be the same this time around, but that was a very incorrect assumption. Everything came flooding back so quickly in my first week and, aside from learning the new technologies we use, by the end of the week it felt like I had never left! I’m sure a lot of this is down to the team being as welcoming as they were.

I think the highlight of my first week was seeing a question in our team request Slack channel from a member of the sales team, asking about the practice insights dashboard I created last year. It was a really rewarding feeling to know that the work I did last year still had value and was still being used. I knew that my project got a lot of positive responses last year, but I imagined that once I had left, the momentum behind it would have slowed and people would have forgotten about it, so I was really pleasantly surprised to find out that wasn’t the case. Even this week I spoke with a member of the sales team who told me she regularly does ‘insights calls’ with her accountancy practice partners, showing the dashboards I created, and they still find them really useful.

Spot the difference

Although it was surprisingly quick to settle back into the swing of things at FreeAgent and feel at home, day-to-day life and working from home is still very different to my experience last year. I would love to say that I enjoy working from home as much as I did working in the office, but that wouldn’t be the truth. Mostly, I miss the social interaction of seeing people in the office, but initially something I found really difficult was separating my work and home life, now that they were both essentially in the same room. My team was really helpful when I told them I was finding the adjustment slightly difficult in the beginning, giving me tips like making sure I take a full lunch hour and putting my work stuff out of sight over the weekend. Like most things, I think time was a big factor in the adjustment. Working from home is just my norm now, and there definitely are perks! As the internship went on and my project began gaining speed, I quickly became busier with work, which definitely helped. Occasionally, on days when work feels quite slow, I find I miss being in the office more. However, I think it’s important to remember (and remind myself!) that everyone has slow days in the office as well, and that’s just life.

As for missing the social interaction, luckily I still get to catch up with everyone from the analytics and data science teams in our daily 15-minute post-lunch game of ‘Who Wants To Be A Millionaire’. Sadly, the Analytics team has not been in the lead since I started, but let’s remember that correlation does not equal causation – in other words, I would like to think that my joining is not the cause of this. Joking aside, the game is a really nice way to break up the day and have a good laugh with everyone on the data science and analytics teams, especially on a day without many meetings. It’s always nice to have a catch-up and see everyone’s happy (but competitive) faces.

Woah, we’re halfway there

I can’t believe that I’m already halfway through my internship! I realise that I’ve still not given any indication of what it is that I’m actually working on so I guess that will need to be covered in a separate blog post – it’s exciting stuff though!

Overall, aside from my occasionally patchy wifi and the lack of bagel Wednesdays, working from home is all in all not too bad. I’d be lying if I said I didn’t miss seeing everyone in person every day, but given the current circumstances, we’ve just got to make the best of what we have. I’m thrilled to be back at FreeAgent regardless of where I’m working, and I will say that the team have done a great job of making the best of a difficult situation. I look forward to the day that I can see the team again in person!