We wanted to improve our stakeholders’ Looker skills
Here at FreeAgent we use Looker as our business intelligence tool. It’s used by over 150 stakeholders across the organisation, more than 100 of whom are active on a monthly basis.
To unlock Looker’s full potential, we’d like those stakeholders to be better equipped to explore the data using Looker’s range of features, rather than simply viewing a chart that somebody else has built. In short, we want to upskill our users.
We – the analytics team – manage the tool, and we also have a strategic lever specifically focused on supporting the personal development of other teams. In 2022, we had two distinct approaches to upskilling our Looker community – each of which we ran in parallel:
- Running formal in-house training sessions. These were a proven success, but are time consuming to deliver!
- Pointing stakeholders to the training materials provided by Google itself. There are lots of these out there, although they’re not tailored to our complex FreeAgent datasets.
However, we felt there was room for a third strand: bitesize “little and often” snippets of upskilling. Enter: the Weekly Looker Challenge. This blog post will discuss how we implemented the challenge, where we struggled, and what made it a success.
We launched the Weekly Looker Challenge
The idea was simple. Each week, using one of our FreeAgent datasets, we release a chart put together in Looker. Participants then try to recreate that chart.
As a team, we quickly agreed that this was something we should try. Our team lead weighed in with approval too, and the Weekly Looker Challenge was born.
We launched the challenge soon after.
We had 3 submissions in our first week. Participants were happy to share their entries even when they knew they weren’t perfect, which was fantastic to see.
Initially, we provided a series of hints for each challenge. Those hints ranged from “which explore (dataset) to use” to “where to find the settings for the dual Y-axes.”
The real highlight was when participants began to discuss their approaches with each other.
Despite our best efforts, however, we weren’t getting the participation we’d hoped for.
Eager to learn lessons from Duolingo and the like, we decided to try gamifying the challenge in order to increase engagement.
To accompany our objective to increase other teams’ confidence in using their data, we set a key result to reach 20 different participants. The challenge was real!
In a bid to signpost participants a bit better, we introduced our own chilli rating system for the challenges. We found that this really helped participants: having absolutely no idea how difficult something should be is quite unnerving.
We hit our target by the skin of our teeth. However, creating and monitoring the challenge each week proved too time consuming for one person, and amid the seasonal December chaos the challenge fell by the wayside – with the exception of a festive one-off.
How we made it a success
With new year’s resolutions on our side, we relaunched the challenge in January. We shared the setting of the challenge amongst our team, reducing the burden on any one person and increasing the variety of datasets (and challenge styles) in use!
We began to track the skills at play in each week’s challenge to ensure we were testing a variety of concepts.
We took the gamifying a step further and floated the possibility of a prize for the person at the top of our leaderboard at the end of the (8 week) cycle.
However, we still weren’t quite getting the engagement we wanted. We held a session at PM Tactical – a weekly congregation of our product managers and business analysts – to get some feedback on barriers to entry amongst non-participants. The primary factor was the perceived time commitment required, and so we decided to increase the number of 1- or 2-chilli challenges (quicker and easier) and rein in the 3+ chilli challenges (which might take 15+ minutes to complete).
By the end of February, we had our first leaderboard winner (and accompanying prize!) – Louis from our Comms team.
We made a tactical decision to focus on a single explore per cycle, with the aim that participants become more familiar with a particular dataset over a series of 8 challenges.
In March, we had our first week with 10 submissions. By the start of May, we’d had our first cycle with 10+ submissions each week.
Later in May, we had our second leaderboard winner: Kirsty from our Support team. We were genuinely chuffed to have participation from stakeholders in every area of the organisation.
We were similarly chuffed by a shout-out from our VP Product.
By June, the leaderboard was positively buzzing.
And we’ve just announced our third winner: Kirsten from our Practice Experience team!
What we’re planning next
Our engagement rate is still low: we’re getting 10 submissions each week but we have 100 active monthly Looker users. After speaking to PMs to understand their barriers to entry, we were able to alleviate those barriers; we’d like to conduct a similar exercise for other pockets of users around the organisation.
We’re also conscious that, while we have a dedicated group of 15 participants who are making submissions on a regular basis, eventually the format of the challenge might get stale. Finding ways to combat that will be important in future cycles. Our “skills tracker” mentioned above will help with that, ensuring that we test for a mix of skills.
The upcoming cycle is our Summer Cycle, in which we work 4-day weeks. Conscious of the time commitment involved in setting and marking the challenges each week, we decided that this was the perfect time to trial some guest slots. We’ve reached out to eight of our committed participants, each of whom will set and mark their own challenge. We’re excited to see what they come up with!
Should you try something similar?
Despite its challenges, the Weekly Looker Challenge has been a huge success for us. Each week our participants are genuinely engaged, and every couple of weeks a new participant appears seemingly out of nowhere. The challenge was even called out in some internal research recently undertaken by our user research team to understand how our stakeholders view Looker.
Through wider conversations with our stakeholders and through challenge submissions themselves, we know that we are upskilling our stakeholders. While we continue to offer more formal in-house Looker training and point our users towards Google’s training materials, it’s clear that there’s a place for this third strand.
We’d strongly recommend trying something similar, and we’d be very keen to discuss our experience further if it would help.