The Analytics That Matter

Chris Coyier

I’ve long been skeptical of quoting global browser usage percentages to justify the use of browser features. Global usage of a browser doesn’t matter, other than as nerdy cocktail party fodder. The usage that matters is what the users of your site are using, and that can be wildly different from site to site.

That idea of tracking real usage of your actual site has been bouncing around my head the last few days. And it’s not just “I can’t use CSS grid because IE 11 still has 1.42% of global usage” stuff; it’s about measuring the metrics that matter to your site, whatever they are.

Performance metrics are a big one. When you’re doing performance testing, much of it is what you’d call synthetic testing: an automated browser loads your site and tracks what it finds as it loads, like the timing of things, the size of assets, and the number of assets. Synthetic information like this fills my mind when I’m spending tons of time on performance. “I bet I can get rid of this one extra request,” I think. “I bet I can optimize this asset a little further.” And the performance tools we use report this kind of data to us readily. How big is our JavaScript bundle? What is our “Largest Contentful Paint”? What is our Lighthouse performance score? All things that are related to performance, but none of them measures an actual user’s experience.

Let that sit for a second.

There are other kinds of analytics we can gather on a site, like usage analytics. For example, we might slap Google Analytics on a site, doing nothing but installing the generic snippet. This is going to tell us things like which pages are the most popular, how long people spend on the site, and which countries deliver the most traffic. Those are real user analytics, but they’re very generic.

If you’re hoping for more useful analytics data on your site, you have to think about it a little harder up front. What do you want to know? Maybe you want to know how often people use Feature X. Or you want to know how many files they have uploaded this week. Or how many messages they have sent. Or how many times they have clicked the star button. This is stuff that tells you how your site is doing. Generic analytics tracking won’t do that; you’ll have to write a little JavaScript to capture and report on those things. It takes a little effort to get the analytics you really care about.
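
For example, if the standard gtag.js snippet is already on the page, counting clicks on that star button might look something like this (the selector, event name, and parameters are made up for illustration):

// Hypothetical example: report every click of a "star" button
// through the standard gtag.js snippet
const starButton = document.querySelector(".star-button");

starButton.addEventListener("click", () => {
  gtag("event", "star_click", {
    event_category: "engagement",
    event_label: "article-star",
  });
});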

Now apply that to performance tooling.

Rather than generic synthetic tests, why not measure things that are actually important to your specific site? One aspect of this is RUM, that is, “Real User Monitoring.” Rather than a single synthetic test being the source of all performance data on your site, you’re tracking real users actually using the site on their actual devices. That makes a lot of sense to me, but aside from the logic of it, it unlocks some important data.

For example, one of Google’s Core Web Vitals, which will soon affect the SEO of our pages, is a metric called First Input Delay (FID), and you can only collect it via JavaScript¹ running on your page.
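
You don’t necessarily need a vendor for that, by the way. The browser exposes the underlying data through a PerformanceObserver, so a minimal sketch of collecting FID yourself (the reporting part is left up to you) might be:

// FID is the gap between the user's first interaction and the moment
// the browser was actually able to start handling it
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    const fid = entry.processingStart - entry.startTime;
    console.log(`FID: ${fid}ms`); // or beacon this to your RUM endpoint
  }
}).observe({ type: "first-input", buffered: true });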

Another Core Web Vital is Largest Contentful Paint (LCP), which is a fascinating attempt at a more meaningful performance metric. Consider a metric like “start render,” the first paint of the page. Is that interesting? Sorta. At least it signals to the user that something is happening (probably). Yet that first render might not be actually useful content, like the headline and body copy of a news article. So this metric makes a guess at what that useful content probably is and measures when it renders. Very clever.
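
LCP is observable the same way, if you’re curious what the browser picks. Each time it finds a bigger “largest” element it hands you a new candidate entry, and the last one before user input is the final value. A sketch:

// Each entry is a new "largest element so far" candidate
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log("LCP candidate:", entry.startTime, entry.element);
  }
}).observe({ type: "largest-contentful-paint", buffered: true });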

But why guess? I get why Google has to guess: they have to measure LCP on a bazillion sites and provide generically useful measurements. But on your own site (again, where the focused analytics actually matter), we can tell performance tools which elements matter to us and record when they render. Personally, I’d care about when the article itself renders on this site. With SpeedCurve’s hero rendering time, I could do something like:

<main elementtiming="article"></main>

<!-- or focus on the top of the page, like the "hero" timing suggests -->
<header elementtiming="hero"></header>

Now I’m measuring what matters to my site and not just generic numbers.
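
And if you aren’t using a RUM service at all, the Element Timing API hands those render times straight to a PerformanceObserver. A minimal sketch:

// Fires for elements annotated with an elementtiming="" attribute
new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.identifier is the attribute value, e.g. "article" or "hero"
    console.log(entry.identifier, entry.renderTime || entry.loadTime);
  }
}).observe({ type: "element", buffered: true });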

Similarly, FID is cool and all, but why not fire off a JavaScript event telling your performance tooling when the things that are important to your site happen? For example, on CodePen, we’d do that when the editor is ready to use. That’s called User Timing, and it’s a dang W3C spec!

Editors.init();
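// Record a User Timing mark the moment the editors are usable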
performance.mark("Editors are initialized.");
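
Marks like that show up in DevTools and performance tooling automatically, but you can also read them back and report them wherever your analytics live. A sketch, where the /analytics endpoint is hypothetical:

// startTime is milliseconds since navigation start
const [mark] = performance.getEntriesByName("Editors are initialized.");

// Hypothetical endpoint; swap in your own RUM or analytics call
navigator.sendBeacon("/analytics", JSON.stringify({
  metric: "editors-ready",
  value: mark.startTime,
}));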

These kinds of some-effort-required analytics are definitely better than the standard fare. Sure, a performance budget that warns you when you go over 200KB of JavaScript is great, but a performance budget that warns you when a core feature of your app isn’t ready until 1.4 seconds against a budget of 1.1 seconds is way more important.

  1. I say this because I was trying to make a chart in SpeedCurve of the three Core Web Vitals, and you can’t add FID unless you have LUX running, which is their RUM product. Phew, that was a lot of acronyms, sorry.