A view of technical leadership from across the industry

Posted by on July 27, 2023

It has been over two years since FreeAgent introduced staff engineers into the IC track. The intention was to align ourselves with the wider industry by renaming the previous Senior II level to Staff. At the time, no changes were made to the expectations for the role. Since then, we have overhauled the expectation framework (for our learnings on how to do this, see Dave’s post here), and the folks at LeadDev have created a conference called Staff+, defined on their website as “A technical leadership conference for senior individual contributors.”

The LeadDev conferences originally focused on engineering management, and have been popular at FreeAgent. On Monday (25th June 2023), Colin and I travelled down the East Coast Mainline to attend Staff+ London. We heard from 27 speakers across 2 days and many, many coffees (or in Colin’s case, teas). We were looking for what it means to be a Staff+ Engineer, and asked ourselves whether FreeAgent’s definition fits the industry mould.

Immediate Reaction

Firstly, a key takeaway was that the Staff+ role differs a lot across the industry, and sometimes even within the same company. However, a common expectation of the role was that engineers at this level should have broad impact. A slightly surprising element of this was that most speakers acknowledged that their time writing code had decreased (since being promoted to a Staff+ role). So, if they didn’t write as much code, how did they achieve this impact? It tended to fall into the following areas:

  • Diversity & Inclusivity
  • Mentoring & Coaching
  • Documentation, documentation, documentation (a lot of talks were about documentation)
  • Opportunities for others
  • Architecture & Tech Debt
  • Documentation (seriously, there were a lot)

Diversity & Inclusivity

Let’s start with the easy stuff… There were a number of talks directly aimed at improving diversity and inclusivity; in fact, LeadDev as a whole champions both. They put a lot of effort into being diverse and inclusive, and as a result the conference felt friendly and relaxed, and everyone just seemed really nice.

This actually underpinned the sentiment of the talks specifically about diversity and inclusivity. Firstly, diversity is the starting point, not the end goal. Inclusivity embraces differences, allowing people to be themselves and bring their different experiences, viewpoints, and opinions to the table. The speakers (Liem Pham and J. Bobby Dorlus) pointed out that if you have an environment that isn’t inclusive it can lead to heightened imposter syndrome and other mental health issues.

Mentoring & Coaching

There were no talks specifically about mentoring or coaching. Why am I mentioning it then? I would estimate that at least 60% of the talks included mentoring and coaching as something Staff+ engineers were expected to do. This was amplified by the conference having organised speed coaching sessions for attendees, which were exciting to watch. A group of attendees crowded around the coaches and presented them with problems they were facing in their roles. The coach would ask questions and try to get to the crux of the problem. I was left with the same anxiety and panic that rises when a live performance requires crowd participation and I’ve ended up in the front row.

The benefits of mentoring/coaching included knowledge sharing and personal development. However, it has a large impact on only a limited number of people (at least initially). A common way to increase the scope of that impact was documentation.

Documentation

Tech specs, ADRs (Architectural Decision Records), wikis, best practices, incident processes, etc. You name it, you should document it. The idea is that Staff+ engineers should look to scale their impact by creating something that is always available to others – at least if it is well maintained, searchable, and curated. One of the most important aspects of this is to capture the why of the decisions.

Opportunities for others

Broadly speaking, if you’re a principal engineer, you’re running out of ladder to climb. Nice for some! The idea here is that we should be creating space for others to lead. Have you already written 10 RFDs? Let someone else write the next one and support them while they do it. Delegate “promotable” tasks to others and sponsor them. Pick up the “unpromotable” tasks – the small things that are hard to get recognised for come expectation scoring time (🙄 probably Gem updates).

Architecture & Tech Debt

A recurring theme was for principal engineers to champion tackling technical debt and re-architecting solutions to provide long-term agility and maintainability. They needed to be able to communicate effectively to directors why this was important for the company, and to accept that it might not always be the best time to embark on these projects.

A fantastic example was provided by Alice Bartlett of the Financial Times, who embarked on a project to reduce duplication across their system. The duplication and lack of single-responsibility elements in the service resulted in numerous small, time-consuming bugs. Instead of continuing to fix the bugs in many places, refactoring the technical debt allowed them to make the system more maintainable for the long term.

A key takeaway (for me) was how Alice communicated the tech debt work with the directors. Initially, she asked for a team of four engineers for six months to complete the work. However, the directors said this was not possible**. Instead of embarking on the project understaffed, she decided it wasn’t worth doing at all. This was echoed in other talks, where the current economic climate demands that people do more with less. But this is often detrimental: doing fewer, more focused, higher-quality projects has a larger impact than doing more half-finished, lower-quality projects.

Another common approach to tackle large architecture problems was to create a group of Staff+ Engineers who would help make architectural decisions and review documents and plans created by others. The key here was to avoid an ivory tower scenario and allow others to make their own decisions. The community might be advisory, rotate members (which I really liked the idea of), and involve engineers from across the whole organisation.

**Due to Alice’s take-it-or-leave-it approach, the directors ended up agreeing that the project was worth doing properly and changed their minds.

Closing Remarks

So what does it mean to be a Staff+ Engineer, and how does FreeAgent’s definition compare? Like everything, it depends. I think there’s a strong argument that FreeAgent’s engineering expectations are well aligned with the industry, especially at the principal level. Inclusivity, broad (department/company) impact, documentation, communication, and architecture are all called out specifically.

The conference was really enjoyable, and I would certainly recommend it to anyone interested in technical leadership that isn’t focused on Engineering Management. It is running again in London next year (or perhaps we can convince finance that the New York one would be more beneficial 😉).

The web application brand refresh journey

Posted by on July 26, 2023

At the beginning of this year the Design System team took on the work of updating the FreeAgent brand, focusing on the web application’s typography, colours and logos (and a few extra bits). You can read more about the new look in Roan’s post.

I want to take you on a journey of what updating a 10+ year old codebase was like and the challenges it brought with it.


The making of a plan

The idea of what the brand refresh would be was relatively straightforward: we knew we had a new logo, new colours, and a new font family to add to the mix.

It wasn’t intended as any huge overhaul to the product, but rather, as Roan put it, “a fresh lick of paint”.

Sounds pretty straightforward and set in stone, right? If only it were!

This being such a far-reaching change across the entire web application, we knew we had to plan for some user testing. We agreed to run a Beta a month before the official launch to gather feedback, and with that we had to consider technical solutions that would enable us to run this test successfully.

The web application is made up of a few key areas for different types of users – admins, practices, and companies. Keeping in mind that we wanted to focus our test, we decided that we could keep the beta work to company-specific areas only. We still planned to do the work of updating the brand for the entire web app at the same point, but not to turn it on for everyone until the final launch date, to keep things more manageable technically.

It’s also worth pointing out that we have a longer-term project planned which focuses on updating the overall look and feel of our components to be more in line with the new brand refresh aesthetic, as well as updating usability so that customers can easily do the things they need to do without having to think twice about what the UI is telling them. This overlapped in some ways with the brand refresh, as will be mentioned later.

We set off on the project with a plan:

  • Update the logos
  • Update the colours
  • Update the font family

And so we started with the most straightforward – logos.

Updating the Logos

A before and after of the FreeAgent logo as used in the web application

We had hoped a lot of this task would be a straightforward search for, and then replacement of, images that had the FreeAgent logo in them.

First we ran into some naming and reusability problems – not all the files were named freeagent or logo, and having iterated through a number of naming conventions over the years meant it was a bit more tricky to find them all. But we eventually did, and to improve this we created a dedicated logo folder and named our images accordingly, making them quicker to find the next time we need to make any changes; it might not be for many years, but it’s there. We also created a helper for rendering the logo, meaning that the majority of use cases won’t need to be updated manually, as we can make the change in the reusable helper instead.

Then there was also the technical issue that SVGs aren’t supported everywhere – emails and an old PDF generator could not handle them, so we had to create PNG versions of the logos too. It was at this point that we decided against doing the email templates as part of the beta work: it was an area the team had little experience in, and our time was best spent first getting the web application aligned with the brand refresh before updating something that had a lot of complexity and not much value for the beta.

And lastly we had not accounted for the favicons and had to get those created and updated too.


Ok, that wasn’t working, let’s make a new plan

Plan B: plan smaller, learn, over-communicate, do what we can, improve what we can.

The logos were supposed to be the least of our problems in the long term, so this had us worried about what unknown unknowns were awaiting us with fonts and colours, and how we would deal with the challenges they presented. Let’s take a look at how that went!

Updating the Colours

A list of colours used in the FreeAgent web application

We made quite a big decision early on to use CSS custom properties for colours. This was a huge improvement, enabling us to use more modern, native tech to achieve results and making our code more extendable and readable. We had been using SASS variables previously, and they did the job, but we could do better.

As a comparison, below is an example of SASS variables being used and the resulting output. Because variables are replaced with their raw values when compiling, you lose all context of where they are set in the final compiled code.

// SASS
// variable set
$fa-green: #79cc6e;


//pre-compiled code
.fe-Button {
  background-color: $fa-green;
}


//compiled code
.fe-Button {
  background-color: #79cc6e;
}

Compare this to what you see when using CSS custom properties below: you get a lot more contextual information that can be easily traced back to where the variable is declared, allowing for easier debugging and a clearer understanding of which values are being used, by name.

// CSS custom properties
// custom property set
:root {
  --fa-green: #79cc6e;
}


//pre-compiled code and compiled code look the same
.fe-Button {
  background-color: var(--fa-green);
}

We saw the benefits of this when we had to set a different drop shadow colour depending on which part of the application it was in; with custom properties it was really straightforward to replace the value compared to doing it with SASS variables. (Read CSS-Tricks’ complete guide to custom properties for more details – pros, cons and how to use them.)
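
As a rough sketch of that technique (the class names, custom property name and colour values here are hypothetical, not our actual code), an area-specific override only needs to redeclare the custom property:

// default drop shadow colour
:root {
  --fa-shadow-colour: rgba(0, 0, 0, 0.15);
}

// hypothetical: a stronger shadow within one specific area of the app
.banking-area {
  --fa-shadow-colour: rgba(0, 0, 0, 0.3);
}

// the component doesn't need to know which area it's rendered in
.fe-Card {
  box-shadow: 0 2px 4px var(--fa-shadow-colour);
}

With SASS variables, achieving the same effect would typically have meant duplicating the rule with a different variable for each area, which is why the custom property approach was so much more straightforward.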

One complication that we came up against was the charting library, which we had used in a variety of different ways over the years to achieve different types of charts, and in doing so we had ended up with a variety of places and ways in which colours were set within those charts. We documented where each chart colour was set and noted it down as an area of improvement for the future.

Updating the Fonts

The fonts were where we learnt the most about the unknown unknowns.


Ligatures were the first thing. They weren’t something we had needed to think much about with our old font face, but it became clear once we replaced the font that they created some unwanted outputs. This was easily fixed by setting font-variant-ligatures: none;

Examples of ligatures and how they can be displayed in the Circular font face

How we displayed numbers was another big problem. The spacing was quite a bit different with the Circular font face: by default it wasn’t consistent, which made numbers hard to read, particularly when they were displayed in tables. We had to use font-variant-numeric: tabular-nums; to achieve a consistent layout and help readability.
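
Putting those two fixes together, a minimal sketch of the font adjustments (the selectors here are placeholders rather than our real ones) looks something like this:

// turn off ligatures that the new font face renders in unwanted ways
body {
  font-variant-ligatures: none;
}

// hypothetical: keep digits evenly spaced wherever columns of figures appear
.fe-Table {
  font-variant-numeric: tabular-nums;
}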

And although it wasn’t a display issue, we had to maintain 3 different font scales while we were working to ensure consistency in scale for everyone whether in beta or not. We had to spend more time thinking about the implications of that than we had anticipated.

Updating the Buttons and Cards

After some back and forth working through the pros and cons, and evaluating the complexity of updating buttons and cards not only to the brand refresh but also to the new brand look and feel planned as part of the envisioning work, we decided the risk and effort was worth it.

A before and after of the Card component look

There were some technical challenges with updating these, as they can be seen on every single page of the site and each has a few variations. Because of the age of the codebase, we also accepted that while we could update as many instances as possible, some might be missed, and we would fix those forward where possible.

In terms of the technical work, as a very short-term solution we used :where to render styles specific to the beta and non-beta versions, using a class we set on the page depending on whether the feature was on or off. As we were changing a large number of aspects of how the button looked and responded to interactions, we opted to keep those styles in very separate code blocks to allow us to delete the old styles swiftly once we were live to all users.
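
As a rough sketch of that approach (the class names and values are made up for illustration), a page-level class switches between the two sets of styles, and :where() keeps the specificity low so the old block stays easy to override and, later, easy to delete:

// existing brand styles, to be deleted once the refresh is live for everyone
:where(.page-pre-refresh) .fe-Button {
  background-color: var(--fa-green);
  border-radius: 3px;
}

// beta brand refresh styles, kept in a separate block so they're easy to find
:where(.page-brand-refresh-beta) .fe-Button {
  background-color: var(--fa-green);
  border-radius: 8px;
}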

We relied heavily on our visual regression tests to flag up any major issues but, like any test suite, it’s not foolproof, so we accepted that some things would be fixed forward once we found them. Having the suite of tests was invaluable to this project.


Releasing our changes at last

As the Beta went out, we made a few changes based on feedback, but nothing major. Our thorough testing and technical approach meant we didn’t have a hard time fixing the things that did come up, and even the most significant change – a tweak to the logo colour for contrast – didn’t end up taking very long to implement.

The live release was a massive undertaking across the whole company, as all teams aligned on timings and clear responsibilities so that we were consistent in showing our new branding across many different channels of communication. It was a huge sigh of relief knowing that the part we had to play in it went well.


So, what did we learn?

Over-communication is key

We had open and honest communication about limitations and strove for a balance between design and engineering. We focused on what we could do, rather than what we couldn’t do or didn’t have time to do given the tight deadlines ahead of us, and communicated that early and often so we could make well-informed decisions.

Compromises are necessary

One key example of this was email templates: there were technical limitations, and including them in the beta would have added complications. After some discussion it was agreed that we would instead roll them out with the live launch.

Visual regression tests are worth it

Visual regression tests were invaluable to us when making so many global changes. We plan to improve the coverage of these tests to automate some of the verification of changes going forward.

Design systems make changes easier

Componentisation makes global changes more straightforward: the more different versions we had of a component, the longer it took to verify that it was working as expected. It motivates us to aim to use design system components throughout as much of the web application as possible.

Planning tech debt saves time

Planning for tech debt meant we could write the code in a way that is easier to tear out once it’s not needed. Just the act of keeping a list as we went of what needed to be reverted or cleaned up meant we didn’t have to spend a huge amount of time thinking through what needed to be done; instead, we could simply get on with doing the work.

We planned for the cleaning up of old code and assets to happen soon after the release of the brand refresh, so that we wouldn’t be acquiring more tech debt while new features continued to be developed on top of that code. We also wanted to lower the mental workload of anyone working around the feature-flagging aspect of the code as soon as possible.

Five principles for writing an engineering progression framework

Posted by on July 20, 2023

Earlier this year we started using new versions of our progression frameworks for individual contributors and managers. The changes came from a desire to simplify the framework and make it easier to use for both individual contributors and their managers across the engineering department.

In this article I’ll share five principles that became apparent during the process. These may be helpful if you’re thinking about introducing a progression framework or making improvements to an existing framework. It’s not an exhaustive list!

Principle 1: Agree what your framework is for!

This might sound obvious but it’s important to make sure everyone is clear on the purpose of the progression framework before starting to write. Our framework has three main purposes:

  1. Evaluate staff performance against expectations for their role and level. We do this twice a year in line with the rest of the company.
  2. Help staff progress in their role and advance to the next level by identifying skills and behaviours to develop.
  3. Help hiring managers identify the right level when making job offers.

This is what we had to keep in mind when designing our framework.

Principle 2: Leave the implementation details in the role profile

The previous version of our framework included detailed examples that were relevant to some roles but not others. This meant managers had to do some translation of the language for their teams, and it was a big deal to make changes to the framework document that everyone used.

We realised that taking those details out of the progression framework and putting them into role profiles would make the framework more inclusive across engineering and make it easier to delegate describing the nitty-gritty to individual teams. Here’s a made-up example:

Role Profiles for specialisms:

  • Software Engineer – Technologies: Ruby on Rails, MySQL…
  • Data Scientist – Technologies: Python, Amazon SageMaker…

Progression framework for everyone:

  • Technical skills – Junior: learns the tech for their role. Senior: demonstrates expertise in the tech for their role.

Principle 3: Include the smallest number of independent skills and behaviours

Five or six years ago I was involved in writing an earlier iteration of the progression framework with a few other managers. We got started by trying to think of an exhaustive list of all the different things that engineers of different levels might be expected to do. This might seem like a sensible approach but stopping there can create problems:

  • “Box ticking” on the part of both staff being evaluated and their managers. Want to be a senior data scientist? You’d better run an A/B test next year!
  • “Overlap” between skills or behaviours. If several areas of the framework can be evidenced by completing one kind of task, are they really adding anything?

This sort of incentivisation is helpful for neither the individual nor the organisation, and it generates unnecessary admin.

In the latest version of the framework we distilled things down to the smallest possible list of distinct skills and behaviours without over-specifying the details. This helped reduce box ticking and the associated admin overhead, and encouraged more active and creative conversations between individuals and their managers.

Principle 4: Use simple, accessible, inclusive language

Let’s make up another example of two skills described at two levels:

Less inclusive progression framework:

  • Programming – Junior: is a beginner at writing code. Senior: is an expert at writing code.
  • Documentation – Junior: is a beginner at documentation. Senior: is an exemplary writer.

When can one become an expert at programming? Who gets to decide? Are people with English as a second language more or less likely to feel like they can be an exemplary writer?

Now let’s make some tweaks to emphasise demonstration of a skill rather than a declaration:

More inclusive progression framework:

  • Programming – Junior: learns to write code. Senior: demonstrates expertise in writing code.
  • Documentation – Junior: records their work. Senior: records complex issues clearly and concisely.

By using “expertise” rather than “expert” we’ve shifted the onus toward demonstration of a skill rather than an arbitrary bar to be crossed. We have simply explained that we value clear and concise writing instead of asking for “exemplary” writing without any definition.

Principle 5: Ensure opportunity to demonstrate skills

Past versions of our framework included examples that related to responding to production incidents and involvement in the hiring process. While these are important skills, not everyone will have the opportunity to demonstrate them at a given time. We actively avoid production incidents and sometimes we don’t have any open roles.

In these cases it’s worth thinking about the real skill that we’re looking for. Responding to production incidents might be an example of demonstrating leadership and hiring might be an example of building high performing teams. There are more ways to develop and demonstrate leadership than just responding to incidents, and helping with team building goes beyond just hiring.

If you need to explicitly call out that software engineers are expected to respond to incidents or help with interviews then this can be done in the role profile, recalling principle 2.

Summary

Creating and maintaining a good progression framework takes significant time and effort. By following a few simple principles it’s possible to create a simple, fair framework that’s easy to use and easy to update over time:

  1. Agree what your framework is for!
  2. Leave the implementation details in the role profile
  3. Include the smallest number of independent skills and behaviours
  4. Use simple, accessible, inclusive language
  5. Ensure opportunity to demonstrate skills

Generative AI: Programmable Reasoning Machines of the Future

Posted by on July 13, 2023

These days Generative AI is being employed for everything from interpretation and summarisation of text to problem solving with a conversational natural language interface. You can now get output from a computer by using the same kind of language you use to speak to other people. Recent developments such as the release of tools like ChatGPT powered by Large Language Models have put Generative AI into the hands of anyone with an internet connection.

What sort of conceptual model should we have in mind when thinking about LLM systems? This question was on my mind a few weeks ago while attending TuringFest 2023. In this post I’ll share some highlights from the conference and attempt to pull together a conceptual model for generative AI systems based on what I learned at the conference.

Conceptual Models

In his talk “Building Products in the Age of AI”, Fergal Reid highlighted the “accidental bundling” of features in Generative AI systems. These components are a reasoning engine, which in my mind is the machine learning model trained on some input data to learn how to reason, and the database, which may or may not be used in addition to the model to generate output:

  • Database – the data used to generate output
  • Reasoning Engine – the large language model

It’s a bit like a traditional computer with processing applied to some data. Except now the processing is to generate reasoned answers rather than execute predetermined instructions.

And then there’s the input. How do we interact with the model? Bastian Grimm shared some tips to create a well structured ChatGPT prompt in his talk “The Rise of AI: Strategies and Tips to Drive Growth”. The suggested structure included the following information:

  • Role – who is ChatGPT creating as?
  • Context – what is the situation it is creating for?
  • Instructions – what specifically do you want it to do?
  • Format – how do you want it to return its response?
  • Examples – samples of the output that you expect.
  • Constraints – what should ChatGPT not do?
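
To make that structure concrete, here is a hypothetical prompt I’ve made up along those lines (it isn’t an example from the talk):

Role: You are an experienced bookkeeper writing for small business owners.
Context: A customer who is new to FreeAgent wants to understand what a profit and loss report shows.
Instructions: Explain the purpose of the report and how to read it.
Format: Three short paragraphs in plain English, with no jargon.
Examples: “Your profit and loss report shows the money that came into and went out of your business over a period…”
Constraints: Don’t give tax or legal advice; point the reader to their accountant for specifics.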

This looks like we’re writing clauses in a SQL query. I’m suspicious of natural language as a good, precise interface for anything. I think we should consider writing prompts for LLMs more like structured programming, and this seems to be an active area of research.

Now we have a large language model trained on some input data to be able to carry out reasoning tasks defined by a declarative input. So let’s bring it all together. The name that comes to my mind is “Programmable Reasoning Machine”, although I may regret writing that later!

Programmable Reasoning Machine

  • Database – the data used to generate output
  • Reasoning Engine – the large language model
  • Interface – the structured prompt (Role, Context, Instructions, Format, Examples, Constraints)

Fergal explained how LLMs tend to be better at interpolating between known data points than at extrapolating away from the known. Unbundling the system into a separate reasoning engine and database allows us to exploit this by constraining the knowledge the system uses, for example by restricting it to a well curated set of documents.

Summary

We can unbundle a large language model system into three core components. These are a reasoning engine, a source of data to reason about and an interface to instruct the reasoning engine. I’m currently referring to this ensemble as a Programmable Reasoning Machine, but there may well be better labels out there.

Thinking about the system this way makes the importance of appropriate data and a clear interface apparent and might even encourage us to be more imaginative than thinking of every solution as “just another chat bot”.

Is this a useful way of thinking about AI systems building on LLMs? Let me know what you think!

Challenge Accepted: Our Weekly Looker Challenge

Posted by on July 11, 2023

We wanted to improve our stakeholders’ Looker skills

Here at FreeAgent we use Looker as our business intelligence tool. It’s used by over 150 stakeholders across the organisation, of which over 100 are active on a monthly basis.

To unlock Looker’s full potential, we’d like those stakeholders to be better equipped to explore the data using Looker’s range of features, rather than simply viewing a chart that somebody else has built. In short, we want to upskill our users.

We – the analytics team – manage the tool, as well as having a strategic lever specifically focused on supporting the personal development of other teams. In 2022, we had two distinct approaches to upskilling our Looker community, which we ran in parallel:

  1. Running formal in-house training sessions. These were a proven success, but are time consuming to deliver!
  2. Pointing stakeholders to the training materials provided by Google itself. There are lots of these out there, although they’re not tailored to our complex FreeAgent datasets.

However, we felt there was room for a third strand: bitesize “little and often” snippets of upskilling. Enter: the Weekly Looker Challenge. This blog post will discuss how we implemented the challenge, where we struggled, and what made it a success.

We launched the Weekly Looker Challenge

The idea was simple. Each week, using one of our FreeAgent datasets, we release a chart put together in Looker. Participants then try to recreate that chart.

As a team, we quickly agreed that this was something we should try. Our team lead weighed in with approval too, and the Weekly Looker Challenge was born.

We launched the challenge soon after.

We had 3 submissions in our first week. Participants were happy to share their entries even when they knew they weren’t perfect, which was fantastic to see.

Initially, we provided a series of hints for each challenge. Those hints ranged from “which explore (dataset) to use” to “where to find the settings for the dual Y-axes.” 

The real highlight was when participants began to discuss their approaches with each other.

Despite our best efforts, however, we weren’t getting the participation we’d hoped for.

Eager to learn lessons from Duolingo and the like, we decided to try gamifying the challenge in order to increase engagement.

To accompany our objective to increase other teams’ confidence using their data, we set a key result to reach 20 different participants. The challenge was real!

In a bid to signpost participants a bit better, we introduced our own chilli rating system for the challenges. We found that this really helped participants: having absolutely no idea how difficult something should be is quite unnerving.

We hit our target by the skin of our teeth. However, creating and monitoring the challenge each week proved to be too time consuming for one person. Not helped by the seasonal December chaos, the challenge was abandoned – with the exception of a festive one-off.

How we made it a success

With new year’s resolutions on our side, we relaunched the challenge in January. We shared the setting of the challenge amongst our team, reducing the burden and increasing the variety of datasets (and engaging phrases) in use!

We began to track the skills at play in each week’s challenge to ensure we were testing a variety of concepts.

We took the gamification a step further and floated the possibility of a prize for the person at the top of our leaderboard at the end of the (8 week) cycle.

However, we still weren’t quite getting the engagement we wanted. We held a session at PM Tactical – a weekly congregation of our product managers and business analysts – to get some feedback on barriers to entry amongst non-participants. The primary factor was the perceived time commitment required, and so we decided to increase the number of 1- or 2-chilli challenges (quicker and easier) and rein in the 3+ chilli challenges (which might take 15+ minutes to complete).

By the end of February, we had our first leaderboard winner (and accompanying prize!) – Louis from our Comms team.

We made a tactical decision to focus on a single explore per cycle, with the aim that participants become more familiar with a particular dataset over a series of 8 challenges.

In March, we had our first week with 10 submissions. By the start of May, we’d had our first cycle with 10+ submissions each week.

Later in May, we had our second leaderboard winner: Kirsty from our Support team. We were genuinely chuffed to have participation from stakeholders in every area of the organisation.

We were similarly chuffed by a shout-out from our VP Product.

By June, the leaderboard was positively buzzing.

And we’ve just announced our third winner: Kirsten from our Practice Experience team!

What we’re planning next

Our engagement rate is still low: we’re getting 10 submissions each week but we have 100 active monthly Looker users. After speaking to PMs to understand their barriers to entry, we were able to alleviate those; we’d like to conduct a similar exercise for other pockets of users around the organisation.

We’re also conscious that, while we have a dedicated group of 15 participants who are making submissions on a regular basis, eventually the format of the challenge might get stale. Finding ways to combat that will be important in future cycles. Our “skills tracker” mentioned above will help with that, ensuring that we test for a mix of skills.

The upcoming cycle is our Summer Cycle, in which we work 4-day weeks. Conscious of the time commitment involved in setting and marking the challenges each week, we decided that this was the perfect time to trial some guest slots. We’ve reached out to eight of our committed participants, each of whom will set and mark their own challenge. We’re excited to see them! 

Should you try something similar?

Despite its challenges, the Weekly Looker Challenge has been a huge success for us. Each week our participants are genuinely engaged. Every couple of weeks a new participant appears and we wonder how they got there. The challenge was even called out in some internal research recently undertaken by our user research team to understand how our stakeholders view Looker. 

Through wider conversations with our stakeholders and through challenge submissions themselves, we know that we are upskilling our stakeholders. While we continue to offer more formal in-house Looker training and point our users towards Google’s training materials, it’s clear that there’s a place for this third strand. 

We’d strongly recommend trying something similar, and we’d be very keen to discuss our experience further if it would help.