Waiting for it with Capybara’s synchronize method

Posted on 26 August 2025

Feature specs are notorious for their potential to flake. It’s possible for the results of feature specs to be inconsistent because they have to deal with asynchronous state. In a typical test environment, there’s a single Ruby process at play, so test code will be executed in order as written – we can reasonably expect one line to complete before the next is executed. But when it comes to feature specs with Capybara, we have three processes to deal with:

  • the test runner, which uses a driver (something that can translate Ruby instructions into web browser automation) to step through the test instructions in order
  • the server, which runs in a separate process and exposes itself on a port so that it can be accessed
  • the browser, which is controlled by the driver to browse the site, and thus make requests to the server

This means we need to think about asynchronous state – after all, if the test runner instructs the browser to make a request to the server, how can we be sure that the page has loaded?

Pardon me, are you Aaron Burr, sir?

The secret to remedying flakes is to…wait for it!

That’s not me stalling for word count on this article, that’s the genuine advice – wait for it. When we call something like this:

visit root_path

…that’s the test runner using the driver to instruct the browser to navigate to /. But how can we tell that the browser has rendered the page?

Typically, we do that with an assertion about what’s shown on the page:

expect(page).to have_content("Overview")

The have_content matcher is one of many provided by Capybara. This is important, because the matchers provided by Capybara work a little differently from most. In a typical test, we’d perhaps call something like:

expect(author).to have_attributes(name: "Simon Fish")

…that matcher will only read author’s attributes once, because we’re working within a synchronous context and can reasonably expect author to already have those attributes.

But Capybara is built with this asynchronous context in mind, so have_content will poll the page instead. The matcher will succeed if it finds what it’s looking for, and if not, then it’ll raise an error (and thus fail) after a set maximum wait time.
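To make that concrete, here’s a minimal sketch in plain Ruby of the poll-until-timeout loop this style of matcher performs – a simplification for illustration, not Capybara’s actual implementation (poll_until and its parameters are invented names):

```ruby
# A simplified sketch of Capybara-style waiting, NOT Capybara's real code:
# re-check a condition until it passes or a deadline is reached.
def poll_until(max_wait: 2, interval: 0.05)
  deadline = Time.now + max_wait
  loop do
    return true if yield
    raise "condition not met within #{max_wait}s" if Time.now > deadline
    sleep interval
  end
end

# The "page" starts empty and fills in asynchronously, like a browser
# rendering after a request; polling bridges the gap.
page_text = ""
renderer = Thread.new { sleep 0.2; page_text = "Overview" }
poll_until { page_text.include?("Overview") }
renderer.join
```

Capybara’s matchers do essentially this (with smarter error handling), which is why have_content tolerates a page that takes a moment to render, where a plain assertion would not.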

To bring this back around, visit root_path won’t do any waiting – it’ll tell the browser to visit that path, but it’s up to you to decide what it is that indicates that the browser has loaded the page at /. In the case of FreeAgent, our root path is the Overview page, so we could look for the word Overview (have_content("Overview")). Or, if that’s too generic, we might want to do something like put a data-testid attribute on the page title and expect that to appear (have_css("[data-testid='overview-title']")).

Every action – whether that’s visiting a path, clicking a link or button, or submitting a form – needs to be followed by some form of waiting Capybara expectation to verify that the request has completed and the page has changed in response to it.

But that’s not all there is to it – there’s one scenario where just a single waiting matcher on its own isn’t good enough!

Drop some knowledge!

More recently, some of the remaining flaky specs we’ve had to deal with have involved dropdowns. These take one click to open, and another to select an option. Now, let’s say we were to write:

click_on "Actions" # opens a dropdown marked "Actions"
click_on "View"    # selects the dropdown option titled "View"

Running this could go one of three ways:

  1. Clicking on “Actions” opens the dropdown, and clicking on “View” activates its behaviour
  2. Clicking on “Actions” opens the dropdown, but due to a code change, “View” isn’t visible. Capybara tries to find a clickable element with “View” in its content, cannot, and eventually fails.
  3. Clicking on “Actions” does not open the dropdown. Capybara looks for a “View” option next, though, because the click_on action completed successfully.

For each click_on call, the only thing Capybara waits for is for the element to be visible. While an “Actions” button may be visible, it may not yet be interactable – we might be waiting for some JS to mount, for example. This is likely to happen in good time for a customer browsing the site, but the feature spec environment might take a little longer to mount the JS, especially if it’s not running on as much memory. So there are reasons why 3 could happen, and there are also reasons why 3 could stop happening with time – the browser could eventually load the JS and the dropdown could become interactable.

In ordinary use, the two things should be tied together, too – clicking on “Actions” should reveal the “View” button. So if we can’t find the “View” button, it’s probably because the “Actions” button needs clicking again. How do we tell Capybara to retry opening the dropdown if the “View” option isn’t visible?

This is where page.document.synchronize comes in. The synchronize method is responsible for Capybara’s waiting behaviour under the hood – there are helper methods that check if an element is on the page right now, which matchers like have_content and have_css call from within a page.document.synchronize block so that they are retried until they pass or time out.

So in the example above, we might use:

page.document.synchronize do
  click_on "Actions"
  click_on "View"
end

This would mean that if click_on "View" were to fail, we would also retry click_on "Actions", thus letting us ensure that the dropdown is open – and retry toggling it if it isn’t – before clicking on the “View” action.

One word of warning – using RSpec expectations within a synchronize block will cause it to fail immediately rather than retry, so lean instead on Capybara methods like page.find, which raise the kind of error synchronize catches and retries.
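To illustrate why, here’s a rough plain-Ruby sketch of synchronize’s retry loop – a simplification, not Capybara’s actual implementation (NotFoundError is an invented stand-in for the errors Capybara’s finders raise). Only the designated error class is retried; anything else, such as a failed expectation, escapes on the first attempt:

```ruby
# Simplified sketch of synchronize's retry behaviour, NOT Capybara's real
# code. Only the designated "retryable" error is rescued and retried;
# any other error propagates immediately.
class NotFoundError < StandardError; end

def synchronize(max_wait: 1, interval: 0.05)
  deadline = Time.now + max_wait
  begin
    yield
  rescue NotFoundError
    raise if Time.now > deadline # out of time: let the error surface
    sleep interval
    retry
  end
end

attempts = 0
synchronize do
  attempts += 1
  # The first two attempts raise the retryable error, so the whole block
  # is re-run; the third attempt succeeds.
  raise NotFoundError, "not there yet" if attempts < 3
end
```

Because page.find raises a “not found” error of this kind, wrapping it in synchronize gets you retries for free, whereas an expectation failure raises a different error class and bubbles straight out of the rescue.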

Capybara’s waiting behaviour is absolutely core to consistent feature specs, as I’ve illustrated, and page.document.synchronize is a handy extra trick for securing consistency where two coupled elements are concerned.


Knowing this has helped us to remedy many flaky specs and prevent others from becoming possibilities. We have code that dynamically updates form content depending on selected options, which often reveals places where we might not have accurately accounted for the async behaviour at play. Being mindful of waiting with Capybara enables us to find the right solutions and make our feature specs in these areas more resilient – and might help you to do the same!

Wiggling my way to a win

Posted on 14 August 2025

The work calendar

At FreeAgent, we work in intervals of sprints (2 weeks) and cycles (which are made up of 4 sprints – adding up to roughly 2 months total). In a cycle a team typically aims to complete one larger project, and during a sprint a team aims to complete sub-tasks of that larger project. This gives product managers and engineers timelines and structure to work within.

In between cycles, leadership takes time to reflect on progress. Product managers and engineering managers plan the next cycle, and this leaves engineers with some free time outside their regular work.

Coming up to the end of my first cycle at FreeAgent, my manager informed me of the amusingly named ‘Wiggle Week’ before the start of the next cycle. 

What is Wiggle Week?

Wiggle Week gives teams some wiggle room to adjust when transitioning to the next cycle. It’s also a chance for engineers to work on something different from their normal routine work. This could be creating a new feature, or fixing a bug that has bothered them all cycle. The extra time could also serve as a buffer, if a project didn’t go to plan in the previous cycle. 

There is a Notion board for all the ideas people across Engineering have come up with. Some examples are updating our developer website to the modern FreeAgent style, and rejuvenating the FreeAgent game that engineers use to learn about the application. Interns can also contribute to this board, and we’re in a unique position to offer insightful and helpful suggestions, as we each bring a fresh pair of eyes to the codebase.

When I came across the board, there was a plethora of ideas I could help with. But with so much choice, how could I decide what to work on?

Deciding on a project

When deciding what I should pick up, here are some of the things I took into consideration:

  1. Ability – Generally speaking, you should stick to something within your comfort zone, especially if you want to complete a project within the 5 days of Wiggle Week. For example, if you’re used to working on the main Rails application, it makes sense not to implement a change on the mobile applications! That’s not to say you shouldn’t branch out to other teams – cross-team collaboration is encouraged.
  2. Scope – Assess the scope of the project. Although one week may seem like plenty of time, it’s important to factor in planning, designing, and time for code reviews and pre-production testing. As a lot of engineers will be working on their own projects, it may take longer than usual to get a code review or a design signed off.
  3. Interest – Work on something you genuinely find interesting and will enjoy! There’s no point dragging yourself through Wiggle Week working on something you find dull.

Being an intern for the Banking Integrations team, I was used to working on the main Rails application, so I felt most comfortable taking on something in that codebase. One project that seemed to tick all three boxes was updating the default validation messages which we show to customers when an error is detected on a form submission. 

[Screenshot: old validation message for one error – “1 error prohibited this email from being saved. There were problems with the following fields: The ‘To’ field can’t be blank”]
[Screenshot: old validation message for multiple errors – “2 errors prohibited this email from being saved. There were problems with the following fields: The ‘To’ field can’t be blank; Subject can’t be blank”]

Old validation messages

The mission was to make error messages more user-friendly without completely overhauling how they’re created or displayed. While it appeared to be a straightforward modification, it was an interesting multi-layered problem. The code needed to be designed so that updating the message again in the future would be much easier. In addition, we had to release the changes gradually to ensure we didn’t break all the validation messages at once.

Assembling the dream team

Although I could have chosen to fly solo, I thought I would take this opportunity to work with my fellow intern Owen from the Workflow team. Having someone else to bounce ideas off and brainstorm with was particularly nice, especially because we were in the same boat being new to the company. 

As interns we did lack context and experience, but this was even more reason to dive in and try to learn something new. We needed to do more research and ask a lot of clarifying questions to more seasoned engineers throughout the process, so it ended up being a fantastic learning opportunity. We were able to branch out to other teams and utilise different perspectives.

Defining the problem and solution

Before jumping into the problem, we had to really understand what needed to be done. Notably, we were trying to future-proof the message, which meant designing our solution in a way that would make it easy to update the message later. As a starting point, the Notion card had a comment from another engineer on how this change could be implemented.

After some initial discussion, we fired a message to them for some clarification. This raised a few more questions, and with some additional help from our respective teams we then clearly understood what we needed to do and how to do it. We also spent time splitting up the work and co-ordinating pull requests (a request to merge proposed changes into the main codebase). I cannot stress how important these discussions and questions were. FreeAgent has such an open culture where you can always ask for a hand if you are stuck. 

It was eye-opening to learn how each team works a bit differently. For example, the change we were working on was closer to the Workflow team’s focus than any other’s, so we followed their structure for pull requests. Workflow was more strict about having small changes that are easier to review, descriptive commit messages, and linking to previous work. 

Initially I found this quite frustrating, especially considering I didn’t have to take these into account as much when working with Banking Integrations. What I learned is that each way of working has its own merits. Having small pull requests meant we were less likely to trip up during code review, and because they were self-contained they could be reviewed and released independently. I came around to seeing the value in this structure and even adopted some aspects of it in my own work – for example, I dedicated more time to writing descriptive commit messages and pull request descriptions. Collaboration and communication are so important in this role. Helpful comments and descriptions lead to less friction in communication and ultimately greater efficiency in passing code reviews and testing. As a point of comparison, we ended up with 7 pull requests, whereas if I were making this change through Banking Integrations I might’ve only had 3.

Releasing the changes

After implementing the solution, passing code review and pre-production testing, we released the changes and lived happily ever after…

Well, almost – following up on changes is incredibly important. Informing the relevant people is essential, especially for customer-facing changes. For this work, we let the Support and Product teams know about the changes we were making. In addition to this, our visual regression test suite, which compares screenshots of the app, needed to be updated to recognise our changes. 

[Screenshot: new validation message for one error – “Please check the following issue: The ‘To’ field can’t be blank”]
[Screenshot: new validation message for multiple errors – “Please check the following 2 issues: The ‘To’ field can’t be blank; Subject can’t be blank”]

New validation messages

Finally getting to the win: product designer Andrew, who prepared the new messages, mentioned Owen and me during the Weekly Wins segment of our company-wide Town Hall. Weekly Wins is where we celebrate the accomplishments of the week. It felt incredibly rewarding to be acknowledged in front of the whole company.

Wiggle Week is a brilliant opportunity to work with another team and try something new. I learned about how Workflow operates which led to me changing the way I approached work. I highly recommend you try to implement something similar in your own work!

Switching to Swift: The iOS Migration


Apple released the SwiftUI framework at their Worldwide Developers Conference (WWDC) 2019. The new framework was a significant shift in iOS app development, promising faster development and more reactive user interfaces (UI). At the time, the FreeAgent iOS codebase was in much the same state as other iOS apps – using UIKit, an entirely different framework that had been in existence since 2008.

Whilst UIKit was, and still is, a reliable framework with a large developer community, FreeAgent made the decision to begin the migration to SwiftUI in 2022. Here are some examples of how the two frameworks differ and how SwiftUI is working well for us.

The benefits of SwiftUI

1. Declarative Syntax

Imagine someone is cooking dinner for you. If the person cooking were a professional chef, you would simply describe what you want to eat. If they had never cooked before, you might want to give them a recipe.

SwiftUI is the chef in this situation. Declarative syntax means that the framework focuses on the overall goal of the program, and doesn’t require specific instructions. On the other hand, UIKit (the person new to cooking) uses imperative syntax – meaning instructions must be dictated for it to work. As such, SwiftUI is able to produce the same result with fewer instructions.

Essentially, declarative syntax allows you to do a lot more in a lot less time. It is also less overwhelming for beginners, and far easier to maintain – for the simple reason that declarative syntax is far more succinct.

2. Reactive UI

SwiftUI is a modern framework with simplified state management, allowing for reactive changes to the UI using features such as state variables and data binding.

State variables cause the UI to change when they change. Binding links two variables together, so that when one updates, so does the other. Combining these, you create multiple locations in which you can update the variable, and the UI will change reactively.

Here’s an example: you want to make a toggle switch. When it is on, you want to display the text “on”, and when off, the text “off”. To achieve this you need a state variable and a Toggle (a component built into SwiftUI), whose isOn field is bound to the state variable.

import SwiftUI

struct ToggleExampleView: View {
    @State private var isToggleOn = false

    var body: some View {
        Toggle("Toggle Switch", isOn: $isToggleOn)

        if isToggleOn {
            Text("Toggle is on")
        } else {
            Text("Toggle is off")
        }
    }
}

You have a reactive UI with just two components. When you switch the toggle, it will automatically trigger the text field to update because it is linked to the state variable. Using the dollar sign ($) ensures SwiftUI creates the binding to the state variable.

Contrast this simplicity with UIKit, where you must handle updating the UI manually. If you switch the toggle, you must then tell all other components to update.

3. Live Previews

Another nice feature is live previews. In your code editor you can view your UI in real time for minor changes like text colour or size. Not having to rebuild the entire FreeAgent mobile app in order to check for minor changes saves a huge amount of development time.

One of the most important features in SwiftUI is that it can operate alongside UIKit. Small sections of the code can be updated view by view, making future development in those areas significantly faster. This means a large upheaval to the codebase is unnecessary, and we can adopt the benefits of SwiftUI without immediately and fully committing to it.

An example from the iOS codebase

My project is titled Share to Smart Capture. Sharing general files to the mobile app from outside sources (e.g. the Photos app) is a pre-existing feature, but it is written in UIKit. This new feature means that users have the option to share files to Smart Capture – an area of the app that auto-extracts information from files such as images of receipts. 

The Share to Smart Capture feature has been a perfect opportunity to migrate the Share Extension to a SwiftUI implementation. Here are some of the differences between the old and the new screens.

1. The File Structure

The UIKit implementation has several files for different purposes: providers, which lay out the components; view models, which represent the state and the data of components; and the controller, which responds to events that may occur in the UI. There are multiple providers, to break down the code to a manageable level, and multiple view models – one for each component and one for the overall screen.
In contrast, the SwiftUI view and its logic are contained in one file (aside from the dependencies).

2. Reactive UI

We saw the toggle example above, where the state variable drove the UI responsiveness. When it was switched, the state variable automatically updated. In the new Share Extension, whether the file is private or shared is controlled by a similar toggle.

In UIKit, it isn’t quite that simple. First, the toggle is tapped. This triggers an event which calls the provider’s delegate (an object that allows it to notify another object about events). This tells a ViewModel that the toggle has been tapped. The ViewModel then updates its state, which triggers a ViewController to handle the change.

As you can tell, SwiftUI is a lot simpler with regards to implementing a reactive UI.

3. Readability

It’s also a lot more readable. Consider that the UIKit code explanation I gave was a simplified version. You still have to search multiple files, multiple functions and have a solid understanding of UIKit to understand the code. In contrast, you can understand the SwiftUI implementation just by looking at the toggle and anywhere the state variable linked to that toggle is accessed or updated. Below is an example of SwiftUI vs UIKit code for implementing a list:

import SwiftUI

struct FruitGridView: View {
    let fruits = ["Apple", 
                  "Banana",
                  "Cherry"]

    var body: some View {
        NavigationView {
            List(fruits, id: \.self) { fruit in
                Text(fruit)
                    .onTapGesture {
                        print("Item \(fruit) tapped")
                    }
            }
            .navigationTitle("Fruits")
        }
    }
}

import UIKit

class FruitViewController: UIViewController,
                           UICollectionViewDataSource,
                           UICollectionViewDelegateFlowLayout {

    @IBOutlet weak var collectionView: UICollectionView!

    let fruits = ["Apple", "Banana", "Cherry"]

    override func viewDidLoad() {
        super.viewDidLoad()
        self.title = "Fruits"

        collectionView.dataSource = self
        collectionView.delegate = self
    }

    // MARK: - UICollectionViewDataSource

    func collectionView(_ collectionView: UICollectionView,
                        numberOfItemsInSection section: Int) -> Int {
        return fruits.count
    }

    func collectionView(_ collectionView: UICollectionView,
                        cellForItemAt indexPath: IndexPath) -> UICollectionViewCell {
        let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "FruitCell",
                                                      for: indexPath)
        cell.contentView.subviews.forEach { $0.removeFromSuperview() }

        let label = UILabel(frame: cell.contentView.bounds)
        label.text = fruits[indexPath.item]
        label.textAlignment = .center
        cell.contentView.addSubview(label)

        return cell
    }

    // MARK: - UICollectionViewDelegate

    func collectionView(_ collectionView: UICollectionView,
                        didSelectItemAt indexPath: IndexPath) {
        let fruit = fruits[indexPath.item]
        print("Item \(fruit) tapped")
    }

    // Optional: layout
    func collectionView(_ collectionView: UICollectionView,
                        layout collectionViewLayout: UICollectionViewLayout,
                        sizeForItemAt indexPath: IndexPath) -> CGSize {
        return CGSize(width: collectionView.bounds.width, height: 44)
    }
}

And this further emphasises the readability aspect: The Share Extension SwiftUI implementation is 184 lines shorter than the UIKit one, despite adding saving to Smart Capture, varying based on API results and showing more information on screen.

Challenges of migration

The migration to SwiftUI from UIKit is highly beneficial in terms of production speed and efficiency. However, balancing adding new features with migrating pre-existing ones is an ongoing challenge. Whilst it is quicker to code in SwiftUI, migrating from UIKit and simultaneously adding new features can be a slow process – but adding more UIKit that in the future will have to be changed isn’t ideal or any faster in the long term. So, while new features may be slower to develop temporarily, once the codebase is migrated the full benefits of SwiftUI should be evident.

Building the bridge from education to real-world application

Posted on 13 August 2025

You’ve just hung up the phone having received an offer for an internship. The excitement is overwhelming as the countless months of applications, assessment centres, interviews and rejections finally pay off. You’ve done it! This is your first step towards gaining valuable experience of being a software engineer and bridging the gap from theoretical university study to real-world application. You’re faced with excitement and questions. What is it like to work on a large codebase? What will my team be like?

My internship served as a great opportunity to gain the skills needed to go from working on small-scale university projects to independently contributing code to a massive codebase. This is an insight into some of the challenges I faced, overcame and learned from throughout my time as a software engineering intern at FreeAgent.


Starting Construction:

One of the toughest challenges I faced when building my own bridge was having the confidence to start on this journey in the first place. The size of the bridge that each person must build will vary depending on their previous experience, although the feeling of impostor syndrome is something most people will experience. It’s difficult to start building a bridge if you don’t believe you can do it in the first place. In the period from accepting the position through to the first few days of my internship, I frequently considered myself a fraud who’d received this opportunity due to luck. I believed I didn’t have the skillset required, and that the recruiters had just overlooked a more qualified applicant, mistakenly choosing me. I felt nervous as I believed I had to prove myself and not be outed as the fraud I believed I was. I was worried that I wouldn’t be able to do anything correctly. Naively, I forgot that an internship is a learning opportunity. If there is one time to start messing things up in your professional career, it’s now! Fortunately, the team here at FreeAgent provide the blueprint and materials to build and cross this bridge!

There’s a lot of structure in place to support you. This includes having your own personal buddy, frequent 1:1s with your manager, and extremely helpful and friendly co-workers. Within the first few weeks, my feeling of impostor syndrome evaporated as I found myself becoming more familiar with the codebase and with FreeAgent’s software development process. This led to me having more confidence in my work. It turns out that I knew a lot more than I thought I did! I also quickly learned that it is not a solo venture, as you’re paired with your buddy and you share the journey with your fellow interns.


Git good:

Another major challenge at the start of my internship was learning my way around Git. I had used Git in my second and third years at university, which gave me the idea that I was knowledgeable about source control. Little did I realise, I’d barely seen the tip of the iceberg. The skills taught at university provide a basic foundation, but they aren’t the most helpful when working on a large-scale application like FreeAgent, which dozens of developers work on daily.

I found myself often getting tangled up in merge conflicts with myself when working on multiple tickets to implement or update a feature. At first, these were difficult to resolve on my own, but thankfully, the Git magicians within my team were able to help. Through more practice and a lot of support from my team members when wanting to perform certain actions, I found that a lot of the issues I was having at the beginning were no longer happening.

One piece of advice I would give myself before starting this internship would be to brush up on my Git skills and better understand how it works, as opposed to just memorising certain commands. These skills are essential when you’re working collaboratively on software, and they’re definitely some of the most valuable ones I’ve gained throughout my internship.


Stress testing:

Any bridge being constructed must be stress tested to ensure that it is structurally sound. Part of the process of building my bridge to being a software engineer was writing tests for my code. My university courses had often put emphasis on why to write tests, but provided little practice on how to write them and what to test for.

There were occasions when I could implement the functionality for a feature within a couple of hours, but then found myself spending days trying to write the tests for it. To begin with, I had difficulty writing tests using Ruby’s testing framework RSpec, but with more practice I found writing them easier.

A more persistent issue was knowing where and how much to test. I often found myself trying to walk the line between too much test coverage and too little. You don’t want to create redundant tests, as test suites can be expensive, but you also want to make sure you’re adequately testing your new feature. There was also uncertainty about whether I was testing in the right places. I still find writing tests difficult, although nowhere near as much as when I started. Practice and support from others have allowed me to develop this skill from nothing. I owe a big thank you to all the Workflow engineers who consistently lent a helping hand during their busy schedules. Without their help, this is definitely a challenge I would have struggled to overcome.


Crossing the bridge:

Looking back at this summer, and where I was when I started this journey, has allowed me to understand how much I’ve developed and appreciate the value of having real-world experience. A key motivation for me to do an internship was to gain experience and also create work with impact. Within my first few weeks, that goal had already been achieved, as my work was shipped to production as it was completed. I overcame personal doubts and found confidence in the work that I produce. I also learned from and overcame the technical hurdles of Git and testing.

None of this would have been possible without the support from my team, and from my buddy in particular. The working culture and experience here is unrivalled. So, whether you’re considering applying or have just received an offer for an internship position here at FreeAgent, I would say have confidence in yourself, treat failure as an opportunity to learn and most importantly enjoy it. My time at FreeAgent has allowed me to learn, and has most importantly bridged the gap between education and the real world. 

Building the building blocks: My summer on FreeAgent’s Design System Team


As someone hoping to find a job within the UI/UX field of computer science, I was amazed at how little I knew about how a design system team works. My classmates at university were the same. They too didn’t realise the scale of these teams and had never come across the phrase “design system”.

This internship has been a crash course in why design systems are so important. I think it’s something more students should know about.

Okay, so what is a design system?

A design system is a catalogue of the fundamental building blocks and best practices that make up the user interface of a product. This includes foundations like colour palettes and typography, reusable components like buttons and data tables, and patterns that solve common interface problems like form validation or loading states. 

An engineer building a feature like a login page doesn’t need to start from scratch; they can go to the Design System documentation to grab components like buttons and use patterns like error messages. They can do this knowing they will all fit the FreeAgent style as they’re using the foundational colours and spacing. This speeds up the process, creates consistency in style and quality, and allows for central control of these elements. It’s like using constants in coding – set the value once (and well) and it’s much easier to use or update later on.

So, what does that look like day to day? Here are some of the main roles the team takes on, and some of the things I got to work on this summer.

1. 🔍 The Detective Work (aka Research)

Nothing gets built without a plan. Before writing any code, the team does thorough research to figure out the best way to build a component. It’s all about making sure it’s useful, efficient, and accessible, as what goes into the design system will be reused throughout the product.

  • What I did: I helped research a potential new feature for our data table component. This meant seeing how other systems implemented it, assessing if those approaches would work for us and checking if it aligned with our coding practices. It was a great experience to take an idea the whole way through the research process and learn what industry experts advise when constructing this type of component.

2. 🏗️The Construction Zone (Building Components)

This is where all that research is put to use. The solution found in the research stage now gets built, tested and iterated on until it’s working to a high standard. This stage could mean creating a brand new component or adding new features to existing ones.

  • What I did: My first project was helping to add a new “total row” feature to our data table. This was my deep dive into the codebase. I spent a lot of time using the Visual Studio Code search tool to figure out how everything was connected, untangling that “Ruby magic”. It was a great project for me to get comfortable with the code. Later, I built a “compact view” for the same table – this made use of the knowledge I gained from the previous project as well as challenging me to learn more. These are now up and running on the FreeAgent website and it’s amazing to know that something I built as an intern is making a positive impact for customers.

3. ✉️The Mail Room (Communication Across Teams)

A design system isn’t a good one unless it listens to what other teams need. A large part of the job is encouraging other teams to come to you with problems, or ideas they think would improve the system, and making them happen. Collaboration is the only way to build a well-thought-out system.

  • What I did: I saw this in action when another team asked for a new, smaller size for our Avatar component, which I then built and shipped for them. This was a great example of how we collaborate with other teams to add current, needed pieces to the design system.

4. 📒The Handbook (Documentation)

Even the best component in the world could be rendered unused and useless if it goes undocumented. Good documentation, with examples, dos and don’ts, and code snippets, is everything. It’s like someone asking you how to make a cake and you just hand them the finished product: great in the moment if they’re hungry, useless in the future when they want another cake.

  • What I did: For every single thing I built, I also wrote the documentation. This outlined the changes made, explained my work, and gave example code to be easily used across the site. Not only is this important for others, but I also really enjoyed the process of looking back on the work I’d done and condensing it.

The Takeaway

My university courses gave me a great foundation for front-end development, but this summer taught me what truly holds a large product together. Getting a real feel for how development works at a larger scale taught me so much; I feel that I am walking away as a better and more thoughtful programmer.

So if you’re a student exploring the front-end development world like me, I’d encourage you to have a look at the design systems hiding underneath the websites you know and love. It’s where you’ll find the work that makes these great products cohesive and allows them to add new features so quickly. For me, getting to be a part of that process on the design system team has been an amazing way to spend my summer at FreeAgent.

Uncharted Waters: a guide to exploring unfamiliar codebases

Posted by on 12 August 2025

Some companies’ codebases are massive. FreeAgent’s, with its monolith Rails application, is certainly no exception. The first time you clone one of these codebases onto your machine and see the sheer number of folders, files and complexity, your eyes might widen and your jaw may drop – perhaps with a small yelp. As you click through the first few random files, realising that you don’t understand a thing and that it takes a while just to scroll through the folders in your file explorer, you’ll quickly see why it’s called engineering.

I have more bad news: the chances are that with each new project card, you’ll be tinkering with a separate, and again unfamiliar, section of the codebase. But fear not, for there are ways you can make this challenge easier and build your confidence in figuring out what the heck is going on.

A sea of syntax

When you sail out on your first task, it can be easy to get lost and overwhelmed by all of the syntax and code in the files you’ve opened. Attributes on a div that you’ve never seen before, long-named methods that don’t seem relevant, and intimidating Stimulus controllers may stick out like lethal icebergs. Thankfully, icebergs can be navigated around. A great thing about working with existing codebases is that you often don’t need to worry about the code at a low level; the majority of it you can safely ignore – it just works.

So don’t worry about reading a file through line by line and failing to understand past the fifth. Though there are icebergs, you come to realise the sea isn’t always rocky, and steering around them is not so difficult. What is more important is that you slowly but surely build a high level understanding of what sections of the codebase do.


Useful tools/techniques (for VS Code, though other IDEs may have similar features):

  • GitHub Copilot Extension
    Mac: Ctrl + Command + I (^⌘I)
    Windows: Ctrl + Alt + I

Tip: Adding a file to Copilot’s context and asking it to explain the file is a really effective way to get a quick, high-level summary of what it does without having to dig through the code.

Swim around

The best and most crucial advice I can give is this: set aside some extra time, whether it be 15 minutes or an hour, to just explore the code around your task without trying to complete it. You could click through definitions of methods, change some text in a view to see how it reflects on the webpage, add in some print debugging statements to see when, and in what order, things are triggered… get creative with it!

Even subconsciously, you’ll start to pick up on patterns of code structure, how methods are named, what files and sections are coupled, the general path formats for different components, and more. This is invaluable to get an insight into “how things work around here”, where “around here” can be as large or small a chunk of the codebase as the context requires. This makes the actual attempt at your task much more digestible – you’ll likely have already started to discern ways you could implement it as you swam around.
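As a tiny sketch of the print-debugging idea mentioned above (the class, method and messages here are invented for illustration, not taken from the FreeAgent codebase), dropping a few markers into plain Ruby shows you when, and in what order, things fire:

```ruby
# Hypothetical service object with print-debugging markers sprinkled in.
class InvoiceMailer
  def deliver
    puts ">>> entering InvoiceMailer#deliver"
    build_body
    puts ">>> body built, sending"
  end

  def build_body
    puts ">>> building body"
  end
end

# Running this prints the markers in execution order, revealing that
# build_body runs in the middle of deliver.
InvoiceMailer.new.deliver
```

The same trick works with `Rails.logger.debug` inside a real app, where the markers show up in the development log instead of the terminal.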


Useful tools/techniques:

  • Search tab
    Mac: Shift + Command + F (⇧⌘F)
    Windows: Shift + Ctrl + F

Tip: If you know where in the app you need to work, but are unsure of which file, copy some text from that page and search for it in the codebase. This should help you find the corresponding file and is a good starting point to swim around from.

  • Go to definition
    Mac: Command + Click (⌘ Click)
    Windows: Ctrl + Click

Tip: This is very useful to find coupled files that are relevant to your task as it takes you to the method definition, wherever it may be in the codebase. You can also use it to get a deeper understanding of what is being executed, and how.

Dive deep

Now that you’ve mapped out the surroundings, it’s time to get into the specifics and take on the task. Hopefully you have an idea of where to begin, and to pinpoint it you can make use of some of the techniques mentioned above.

Then get writing code! Studying the codebase beforehand should give you the confidence during implementation that you aren’t breaking anything and are writing code that fits in with the standards. It also helps to prevent extra, unneeded work. Before familiarising myself, I’ve spent hours defining new methods just for an engineer on my team to tell me during a review that they already exist somewhere, and then I’ve had to spend more time refactoring my solution to use them.

To boil it down, you’re trying to find out what there is, what you need, and how to use it. Before you know it you’ll be picking up project cards with conviction – rather than worrying they might be a bit beyond your pay grade – because you can accurately scope out in advance where and how changes will need to be made, and gauge their size and complexity.


Useful tools/techniques:

  • Reference similar features

Tip: If you notice that something similar to what you are implementing already exists somewhere else in your company’s software, find it in the codebase and have it open as a reference to see how it’s done.

  • Quick Open (file list)
    Mac: Command + P (⌘P)
    Windows: Ctrl + P

Tip: This allows you to search for files in the codebase by name, so it can be useful to find code related to a feature by searching for its name. It also saves a lot of time as opposed to digging through folders in the file explorer.

Back aboard

Like any skill, getting to know a codebase takes dedicated time and practice, so adding these tools and tips to your arsenal early is sure to help you get to grips with your company’s codebase. Good luck on your voyages!

Creating re-usable descriptions in dbt with Jinja docs

Posted by on 11 August 2025

If you’re working with dbt and find yourself copying the same column descriptions across multiple models, this post is for you. We’ll show you how to eliminate that repetition using a simple but powerful technique!

The need to create common column descriptions

At FreeAgent, we process a lot of event data flowing into our data platform. While each event is unique, they all share common elements – like where the event originated, or the time it was emitted. With well over 100 of these event models we need to maintain hundreds of duplicate descriptions. When working on our data pipelines in Dagster and dbt, we wanted a simple way to define these common attributes once, rather than manually rewriting or copying them across every event-based model. Repeating ourselves isn’t just tedious; it also creates a high risk of inconsistencies, and makes updating descriptions a laborious task.

Using Jinja docs to create re-usable descriptions

We found an elegant solution by combining Jinja’s doc function with a dedicated docs.md file. This approach allows us to centralise our common column descriptions and reuse them.

Let’s look at how we implemented this for a column called data_source. This column is present on all our event models and indicates whether an event came from our desktop app, mobile app, or another source.

1. Create a docs.md File

First, we created a docs.md file within our dbt project’s models folder. This file serves as a single repository for all our reusable documentation snippets.

For instance, to describe our data_source column, we added the following:

{% docs event_data_source %}
The source of the event that was emitted, e.g. DESKTOP, MOBILE_WEB, etc
{% enddocs %}

In this snippet:

  • {% docs event_data_source %} and {% enddocs %} define a Jinja documentation block.
  • event_data_source is a unique identifier for this particular documentation snippet.
  • The text within the block is the actual description we want to reuse.

2. Using the doc Function in dbt Models

Now, whenever we define a column that needs this description, we simply reference it using the doc function in our dbt model’s YAML configuration:

- name: data_source
  description: '{{ doc("event_data_source") }}'
  data_type: varchar(20)

And that’s it! When dbt compiles our models, it will pull the description associated with event_data_source from docs.md and insert it directly into the model’s metadata.
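To sketch what that reuse looks like in practice (the model names and file layout below are illustrative, not our actual schema files), two separate models can point their YAML at the same snippet:

```yaml
# models/events/desktop_events.yml (illustrative)
version: 2
models:
  - name: desktop_events
    columns:
      - name: data_source
        description: '{{ doc("event_data_source") }}'
        data_type: varchar(20)
```

```yaml
# models/events/mobile_events.yml (illustrative)
version: 2
models:
  - name: mobile_events
    columns:
      - name: data_source
        description: '{{ doc("event_data_source") }}'
        data_type: varchar(20)
```

If the wording in docs.md changes, both models pick up the new description the next time dbt compiles – no per-model edits required.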

Benefits of this approach

Implementing common descriptions for columns used across multiple models has brought several advantages:

  • DRY Principle Adherence: We’ve eliminated redundant documentation efforts. Where appropriate, descriptions are defined once and reused across countless models, reducing repetition.
  • Enhanced Consistency: With a single source of truth for common descriptions, we ensure that all mentions of a specific data element are described identically.
  • Time Savings: Data engineers no longer need to manually type or copy-paste descriptions. This saves time during model development and iteration, and reduces the likelihood of errors.
  • Easier Maintenance: If a common description needs to be updated, we only need to modify it in one place (docs.md). This change then propagates automatically to all models using that doc reference.
  • Improved Discoverability: Centralising common terms in docs.md can also serve as a useful reference point for understanding frequently used concepts within our data landscape.

Summary

By leveraging Jinja’s doc function and a simple docs.md file, we’ve streamlined our dbt documentation process. This approach makes our documentation more efficient, consistent, and maintainable. It’s a simple change, but it has a significant impact: not only does it allow our team to focus more on data transformation and less on repetitive documentation tasks, but it also builds confidence for our data consumers. When they explore our data through Dagster’s asset catalog, they’ll find consistent, unified descriptions for common elements across all our models, making it much easier to understand what the data truly represents.

What’s your experience with this approach? Have you found other creative ways to centralise dbt documentation? We’d love to hear your approaches!