HTTP/2 has been one of my areas of interest. In fact, I’ve written a few articles about it just in the last year. In one of those articles I made this unchecked assertion:
If the user is on HTTP/2: You’ll serve more and smaller assets. You’ll avoid stuff like image sprites, inlined CSS and scripts, and concatenated style sheets and scripts.
I wasn’t the only one to say this, though in fairness to Rachel, she qualifies her assertion with caveats in her article. And it’s not bad advice in theory: HTTP/2’s multiplexing ability gives us leeway to avoid bundling without suffering the ill effects of head-of-line blocking (something we’re painfully familiar with in HTTP/1 environments). Unraveling some of these HTTP/1-specific optimizations can make development easier, too. In a time when web development seems more complicated than ever, who wouldn’t appreciate a little more simplicity?
As with anything that seems simple in theory, putting something into practice can be a messy affair. As time has progressed, I’ve received great feedback from thoughtful readers on this subject that has made me re-think my unchecked assertions on what practices make the most sense for HTTP/2 environments.
The case against bundling
The debate over unbundling assets for HTTP/2 centers primarily around caching. The premise is if you serve more (and smaller) assets instead of a giant bundle, caching efficiency for return users with primed caches will be better. Makes sense. If one small asset changes and the cache entry for it is invalidated, it will be downloaded again on the next visit. However, if only one tiny part of a bundle changes, the entire giant bundle has to be downloaded again. Not exactly optimal.
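To make that cache-invalidation premise concrete, here’s a minimal sketch of the content-hashing scheme build tools commonly use so that changing one asset invalidates only that asset’s cache entry (the function name and hash length here are my own choices for illustration, not anything prescribed by this article):

```python
import hashlib

def fingerprint(filename: str, contents: bytes) -> str:
    """Return a cache-busting filename derived from the file's contents.

    If even one byte of the asset changes, the hash (and thus the URL)
    changes, so only that asset's cache entry is invalidated.
    """
    digest = hashlib.sha256(contents).hexdigest()[:8]
    name, _, ext = filename.rpartition(".")
    return f"{name}.{digest}.{ext}"

# Two different builds of the same file get two different URLs:
v1 = fingerprint("app.js", b"console.log('v1');")
v2 = fingerprint("app.js", b"console.log('v2');")
```

With one giant bundle, any edit changes the hash of the whole bundle and return visitors re-download everything; with many small fingerprinted assets, only the changed files get new URLs.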
Why unbundling could be suboptimal
There are times when unraveling bundles makes sense. For instance, code splitting promotes smaller and more numerous assets that are loaded only for specific parts of a site/app. This makes perfect sense. Rather than loading your site’s entire JS bundle up front, you chunk it out into smaller pieces that you load on demand. This keeps the payloads of individual pages low. It also minimizes parsing time. This is good, because excessive parsing can make for a janky and unpleasant experience as a page paints and becomes interactive, but has not yet fully loaded.
| Filename | Uncompressed Size | Gzip (Ratio %) | Brotli (Ratio %) |
|---|---|---|---|
| jquery-ui-1.12.1.min.js | 247.72 KB | 66.47 KB (26.83%) | 55.8 KB (22.53%) |
| angular-1.6.4.min.js | 163.21 KB | 57.13 KB (35%) | 49.99 KB (30.63%) |
| react-0.14.3.min.js | 118.44 KB | 30.62 KB (25.85%) | 25.1 KB (21.19%) |
| jquery-3.2.1.min.js | 84.63 KB | 29.49 KB (34.85%) | 26.63 KB (31.45%) |
| vue-2.3.3.min.js | 77.16 KB | 28.18 KB (36.52%) | — |
| zepto-1.2.0.min.js | 25.77 KB | 9.57 KB (37.14%) | — |
| preact-8.1.0.min.js | 7.92 KB | 3.31 KB (41.79%) | 3.01 KB (38.01%) |
| rlite-2.0.1.min.js | 1.07 KB | 0.59 KB (55.14%) | 0.5 KB (46.73%) |
Sure, this comparison table is overkill, but it illustrates a key point: Large files, as a rule of thumb, tend to yield higher compression ratios than smaller ones. When you split a large bundle into teeny tiny chunks, you won’t get as much benefit from compression.
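You can observe this effect directly with a quick experiment (a sketch; the sample data below is arbitrary, chosen to be repetitive the way minified JS is): compress one large blob, then compress the same bytes split into chunks and add up the results.

```python
import gzip

# Arbitrary repetitive sample data, loosely mimicking minified JS
# full of recurring identifiers and keywords.
data = b"function handleClick(event) { dispatch(event); } " * 400

# Compress the whole thing as one "bundle".
bundled_size = len(gzip.compress(data))

# Split into 20 equal chunks and compress each one independently.
chunk_len = len(data) // 20
chunks = [data[i:i + chunk_len] for i in range(0, len(data), chunk_len)]
unbundled_size = sum(len(gzip.compress(c)) for c in chunks)

print(f"one bundle: {bundled_size} bytes compressed")
print(f"20 chunks:  {unbundled_size} bytes compressed")
# The chunks cost more in total: each chunk pays for its own gzip
# header and rebuilds its compression dictionary from scratch.
```

The exact numbers depend on the data, but the direction of the result doesn’t: the sum of the independently compressed chunks comes out larger than the single compressed bundle.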
Side note: One astute commenter has pointed out that Firefox dev tools show that in the unsprited test, approximately 38 KB of data was transferred. That could affect how you optimize. Just something to keep in mind.
Browsers that don’t support HTTP/2
Yep, this is a thing. Opera Mini in particular seems to be a holdout in this regard, and depending on your users, this may not be an audience segment to ignore. While around 80% of people globally surf with browsers that can support HTTP/2, that number declines in some corners of the world. Shy of 50% of all users in India, for example, use a browser that can communicate with HTTP/2 servers (according to caniuse, anyway). This is at least the picture for now, and support is trending upward, but we’re a long way from ubiquitous support for the protocol in browsers.
What happens when a user talks to an HTTP/2 server with a browser that doesn’t support it? The server falls back to HTTP/1. This means you’re back to the old paradigms of performance optimization. So again, do your homework. Check your analytics and see where your users are coming from. Better yet, leverage caniuse.com‘s ability to analyze your analytics and see what your audience supports.
The reality check
Would any sane developer architect their front end code to load 223 separate SVG images? I hope not, but nothing really surprises me anymore. In all but the most complex and feature-rich applications, you’d be hard-pressed to find so much iconography. But it could make more sense for you to coalesce those icons into a sprite, load it up front, and reap the benefits of faster rendering on subsequent page navigations.
Which leads me to the inevitable conclusion: In the nooks and crannies of the web performance discipline there are no simple answers, except “do your research”. Rely on analytics to decide if bundling is a good idea for your HTTP/2-driven site. Do you have a lot of users that only go to one or two pages and leave? Maybe don’t waste your time bundling stuff. Do your users navigate deeply throughout your site and spend significant time there? Maybe bundle.
This much is clear to me: If you move your HTTP/1-optimized site to an HTTP/2 host and change nothing in your client-side architecture, it’s not going to be a big deal. So don’t trust blanket statements from some web developer writing blog posts (i.e., me). Figure out how your users behave, which optimizations make the most sense for your situation, and adjust your code accordingly. Good luck!
I’ve checked in Firefox Nightly Developer Tools. The size of the sprite is “9.55 KB transferred,” while the size of the 223 individual SVGs is 36.53 KB. That’s still an increase by a factor of 3.8. However, that’s assuming that all icons are actually used on the page.
223 / 3.8 = ~60. Hence, if the page uses fewer than 60 icons, then it will use fewer KBs if it loads them individually. 60 should be enough for any given page, I think.
Very interesting. Chrome shows far more. Chrome appears to be sending more headers, but that doesn’t account for the additional weight.
You make a great counterpoint. I suppose I was trying to tie in something applicable to my point about compression ratios. If you have 60 icons spread out over several pages, but never use more than handful on any one page, it would certainly be advantageous to do as you say.
But it could also be argued that loading could be sped up for subsequent pages if you load that imagery up front, but there are practical arguments for and against that.
Thanks for weighing in. :)
Still, using HTTP/2 push has more disadvantages, as it’s so buggy across different browsers and OSes.
Victor – Could you elaborate on the disadvantages / bugs you’ve encountered with HTTP/2 push? I haven’t personally used it much, and am interested to hear more about this as it’s not something I’d heard about in the past.
There is a great article by Jake Archibald that shows a few problems with HTTP/2 push.
HTTP/2 server push is sort of tangential to this topic, but not entirely unrelated. After all, inlining could be construed to be a form of bundling. Server push can address the caching-related shortcomings of inlining assets while providing a similar performance benefit.
But the problem is, as you say, that push is a bit buggy yet. Jake Archibald wrote an excellent article that outlines server push weirdness across browsers.
That said, I’ve had a reasonable amount of success using a cookie-based mechanism on the back end to try to eliminate redundant pushes. This seems to work pretty well for me, but it’s far from optimal in all instances.
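For the curious, the cookie-based approach boils down to something like this (a simplified sketch with invented names; the cookie name, asset list, and version string are all hypothetical, and a real implementation would live in your server config or framework):

```python
PUSH_ASSETS = ["/css/site.css", "/js/app.js"]
PUSH_COOKIE = "h2_pushed"   # hypothetical cookie name
BUILD_VERSION = "v42"       # bumped on every deploy

def assets_to_push(request_cookies: dict) -> list:
    """Decide which assets to server-push for this request.

    If the client already carries our cookie for the current build,
    assume its cache is primed and push nothing. Otherwise, push
    everything; the response would then set the cookie so repeat
    visits skip the redundant pushes.
    """
    if request_cookies.get(PUSH_COOKIE) == BUILD_VERSION:
        return []
    return PUSH_ASSETS

# First visit (no cookie): push everything.
# Repeat visit with a current cookie: push nothing.
# Visit with a stale cookie from an old deploy: push everything again.
```

It’s a heuristic, not a guarantee: the cookie only approximates the state of the client’s cache, which is exactly why this is far from optimal in all instances.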
Thanks for reading. :)
Yes, you are right that it all comes down to the audience when deciding how to build an upcoming site. That’s why it’s worth it even for front-end developers to dig into the Audience section of Analytics, to answer whether HTTP/2 will be good in general for the audience of the upcoming website or not.
I don’t think anyone’s really saying, “don’t bundle your files at all.” The anti-pattern as it relates to bundling in HTTP/2 is more about loading code that isn’t used on a particular page. If your template/page is using 5 modules, you can still bundle them together; just don’t bundle 10 modules on a page that uses 5.
For that matter, you can customize sprites for a page as well; there’s no reason to have a giant sprite file if you’re only using a few icons. You don’t have to break them up into a bunch of files, but you also don’t have to have a single giant sprite to work with (middle ground, if you will). With nicer header compression and multiplexing these are the types of build modifications that will see greater benefit with HTTP/2.
My understanding was that under HTTP/2, the user’s wait time for a group of images is dictated by the largest of the individual images.
If that is the case, and the largest of my svg images is smaller than the sprite that used to contain the group of images, then there is a net benefit to the user if the images are served separately, rather than as a sprite.
Then again I may have missed something. There may be a maximum number of files that will download in parallel under multiplexing.