How to count what counts

Posted on May 30, 2019

At FreeAgent we’re building a new platform to allow our teams to explore their data and glean new insights from it. The platform is built using Looker on top of Amazon Redshift, and so far it’s been enthusiastically received by the teams that use it. However, the process of building the platform and driving adoption hasn’t been entirely straightforward. One issue has recurred throughout: a report in our legacy system showing different values from the equivalent new Looker report.

In this situation, the user’s initial reaction is to want everything to match, and they often assume the old numbers are “right” and the new numbers are “wrong”. I believe this is similar to a well-documented phenomenon: research shows that information presented with an accompanying chart or scientific formula is more likely to be trusted. In this instance, the old report is trusted simply because it was already in place.

This chart illustrates the trustworthiness of data. Or does it?

The reasons for differences

In most cases, the differences I encountered in the data were due to a bug, either in the legacy system or one newly introduced while building the LookML model. Sometimes, though, the differences weren’t due to a bug at all. So what was going on?

To solve the problem and explain the differences, I first needed to understand what I was counting. In many cases, the assumptions and decisions behind the values in the old reports weren’t fully understood or documented. Take a report that counts the number of customers using a product feature: it’s easy to assume it simply counts customer IDs. But consider how the answers to the following questions affect the values presented in the report (a short sketch after the list shows how these choices surface in a LookML model):

  • Do we include customers that have cancelled but still have access to their data?
  • How about prospective customers on free trials?
  • If a customer uses the feature twice, do we want to count that as one customer usage, or two? (i.e. the difference between a count and a count distinct).
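
To make these decisions concrete, here is a minimal, illustrative LookML sketch; the view, table, and field names (feature_usage, customer_status, and so on) are assumptions, not our actual model. It defines two measures over the same data: a plain count of usage events and a distinct count of active customers, with cancelled accounts and trialists filtered out.

```lookml
# Hypothetical view – table and field names are illustrative only.
view: feature_usage {
  sql_table_name: analytics.feature_usage ;;

  dimension: customer_id {
    type: number
    sql: ${TABLE}.customer_id ;;
  }

  dimension: customer_status {
    # e.g. 'active', 'trial', 'cancelled'
    type: string
    sql: ${TABLE}.customer_status ;;
  }

  # Counts every usage event: a customer using the feature twice counts twice.
  measure: usage_count {
    type: count
  }

  # Counts each active customer once, however often they use the feature;
  # trialists and cancelled customers are excluded by the filter.
  measure: active_customers_using_feature {
    type: count_distinct
    sql: ${customer_id} ;;
    filters: {
      field: customer_status
      value: "active"
    }
  }
}
```

Surfacing both measures side by side makes the “one customer or two?” question explicit, so users can pick the number that answers their actual question rather than guessing what a single ambiguous figure means.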

You can see there’s work required to define exactly what we should be counting. It’s important to do that work and bring transparency to the data, because it enables you to understand and discuss the variations between reports with the people who actually use them. It was during these discussions that two recurring themes in our data discrepancies became apparent:

  • Users didn’t understand what we were counting, which meant they had been consuming data without truly appreciating how the values had been calculated or what they represented.
  • Users didn’t have access to the numbers that actually mattered to them because the values most useful to them had evolved over time, but the reporting hadn’t evolved with their requirements.

By determining which numbers are the most valuable to the users – and helping them understand what each value means – I was able to define and implement a reporting layer in Looker that applies measures consistently across the entire platform, so that all users can work off the same foundation of consistent data.
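
One way to achieve that consistency in Looker is to define the agreed measures once in a base view and have other views extend it, so every Explore reports the same definitions. The sketch below is illustrative only: the customer_base and invoicing_usage views and their field names are assumptions, not our actual model.

```lookml
# Hypothetical base view holding the agreed-upon customer measures.
view: customer_base {
  extension: required   # can only be used via extends, never queried directly

  dimension: customer_id {
    type: number
    sql: ${TABLE}.customer_id ;;
  }

  dimension: customer_status {
    type: string
    sql: ${TABLE}.customer_status ;;
  }

  # The single, shared definition of "active customers".
  measure: active_customers {
    type: count_distinct
    sql: ${customer_id} ;;
    filters: {
      field: customer_status
      value: "active"
    }
  }
}

# A feature-specific view inherits the same measure definitions.
view: invoicing_usage {
  extends: [customer_base]
  sql_table_name: analytics.invoicing_usage ;;
}
```

Because every view inherits the same measure, “active customers” means the same thing wherever it appears, and a discrepancy between two reports points to a genuine difference in the data rather than a difference in definition.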

Conclusion

Our teams will embrace the ability to explore data and build new reports in Looker only if they can trust the numbers. To gain that trust, it was important to help them truly understand the values presented to them. Of course, it’s not always that easy and it takes time to do well. Working out why two reports disagree can be hard, but it’s crucial that you do it. Building that understanding early on helps establish confidence in the data platform, encourages usage, and prevents confusion among your users.
