I recently blogged about how images are hard and it ended up being a big ol’ checklist of things that you could/should think about and implement when placing images on websites.
I think it’s encouraging to see frameworks — these beloved tools that we leverage to help us build websites — offering additional tools within them to help tackle this checklist and take on the hard (but perfectly suited for computers) tasks of displaying images.
I’m not sure I’d give any of them flying colors as far as ease of use. There is stuff to install, configure, and it’s likely you’ll only reach for it if you already know you should be doing it, and your pre-existing knowledge of image performance can help you through the process. It’s not the failing of these frameworks; this stuff is complicated and the audience is developers who are, fair is fair, a little into the idea of control.
I do gotta hand it to my BFF WordPress on this one. You literally do nothing and just get responsive images out of the box. If you need to tap into the filters to control things, you can do that like you can anything else in WordPress: through hooks. If you go for Jetpack (and I highly encourage you to), you flip on the (incredibly, free) Site Accelerator feature, which takes all those images, optimizes them, CDN-hosts them, lazy loads them, and serves them in formats, like WebP, when possible (I would assume more next-gen formats will happen eventually). Jetpack is a sponsor, so full disclosure there, but I use it very much on purpose because the experience makes image handling something I literally don’t have to think about.
Another interesting aspect of frameworks-helping-with-images is that some of it was born out of Google getting involved. Google calls it “Aurora”:
For almost two years, we have worked with some of the most popular frameworks such as Next.js, Nuxt and Angular, working to improve web performance.
The project does all sorts of stuff, including handing out money to help fund open-source tools and directly helping specific initiatives. Like images:
An Image component in Next.js that encapsulates best practices for image loading, followed by a collaboration with Nuxt on the same. Use of this component has resulted in significant improvements to paint times and layout shift (example: 57% reduction in Largest Contentful Paint and 100% reduction in Cumulative Layout Shift on nextjs.org/give).
Cool, right? I think so? What weirds me out about this just a smidge is that it feels meaningful when Google’s squad rolls up to contribute to a framework. They didn’t pick underdog frameworks here, surely on purpose, because they want their work to impact the most people. So, frameworks that are already successful benefit from A-squad contributions. A rich-get-richer situation. I’m not sure it’s a huge problem, but it’s just something I think about.
I don’t know about the Gatsby and Eleventy image components, but I can give my two cents on Next.js.
First, I think Next.js did a great job of making images approachable for everyone. They really pushed the envelope: pretty much any junior dev with no experience whatsoever can pick it up quickly.
With that said, there are plenty of downsides with that component too.
The idiot-proof design might be great for a starter, but it doesn’t evolve past that.
The article did mention devs like to be in control, but there are reasons for that…
Huge DOM bloat
Each image has two wrappers, meaning that if you have 20 images on your page, you’re already using 60 DOM nodes. It’s really not efficient; you could easily achieve the same result with only one wrapper.
!important or more DOM bloat
The Next.js image component doesn’t give you access to its outer wrappers; you only ever control the inner <img> element. To add some spicy moments to your life, it also uses inline styles…
Unless you want to use the !important nuclear option combined with a brittle CSS selector that could easily break when you change something on your page, the only way to style your image correctly is to add an extra wrapper on top.
Following the previous example, those 20 images go from 60 to 80 DOM nodes now…
That’s 3 wrappers per image while you could achieve the exact same result with only 1 wrapper.
Oh, and those inline styles? You can achieve the exact same result on 95%+ of browsers while disabling 77% of them…
Inline styles = CSP headaches
If you want to include a secure Content Security Policy (CSP), you can forget it. The inline styles are in direct opposition to security.
Either you lower your security requirements by allowing 'unsafe-inline', or you should look elsewhere.
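For illustration, here’s roughly what that trade-off looks like in a next.config.js (a minimal sketch; the route pattern and policy values are placeholders, not a recommended policy):

```javascript
// next.config.js — minimal sketch; policy values are placeholders.
module.exports = {
  async headers() {
    return [
      {
        source: "/(.*)",
        headers: [
          {
            key: "Content-Security-Policy",
            // 'unsafe-inline' in style-src is what the component's inline
            // styles force on you; it weakens the policy for every style
            // on the page, not just the image wrappers.
            value: "default-src 'self'; style-src 'self' 'unsafe-inline'",
          },
        ],
      },
    ];
  },
};
```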
Image optimization on large screens
By default, image optimization is based on viewport.
This works amazingly well on mobile: you can simply take a 4K image as a source and it will be optimized for the user’s viewport. That represents huge bandwidth savings.
But applied to desktops, the solution falls flat. That same 4K image is still optimized for your viewport, not its actual size on the screen…

So if your viewport is 2560px but you wanted a 600px image, the optimizer delivers the browser a 2560px image (downscaled from 4K) and the browser scales it down to 600px to fit the width you specified.

This is a huge oversight: far more data is transmitted than there should be. It’s a lot worse on 4K screens.
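To make the failure mode concrete, here’s a rough sketch (my own illustrative helper, not Next.js internals) of how a browser picks a srcset candidate; the deviceWidths list mirrors Next.js’s default deviceSizes:

```javascript
// Sketch of srcset candidate selection (an approximation, not Next.js code).
// The browser fetches the smallest candidate that covers the rendered CSS
// width times the device pixel ratio.
const deviceWidths = [640, 750, 828, 1080, 1200, 1920, 2048, 3840];

function pickCandidate(renderedCssPx, dpr = 1) {
  const needed = renderedCssPx * dpr;
  return deviceWidths.find((w) => w >= needed) ?? deviceWidths[deviceWidths.length - 1];
}

// Without a `sizes` hint, the component behaves as if the image spans the
// full viewport, so a 600px-wide image on a 2560px screen fetches a near-4K file:
pickCandidate(2560); // 3840
// Telling the browser the real rendered size fetches a far smaller file:
pickCandidate(600); // 640
```

The sizes prop on next/image exists precisely to supply that hint, but you have to know to reach for it; the default assumes every image is viewport-wide.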
Caching mechanism on the client
To be fair, Next.js 11.0 added an import method for images that improved the state of things a lot.

The import method automatically adds an immutable Cache-Control directive to your images and a content hash to their file names. With that change, users don’t ping your server every time they reload the page to check whether the image is still the latest.
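As a sketch of that method (the page and file name here are placeholders):

```javascript
// pages/index.js — sketch of the static-import method (Next.js ≥ 11);
// "hero.png" is a placeholder file name.
import Image from "next/image";
import hero from "../public/hero.png"; // width, height, and hash inferred at build time

export default function Home() {
  // The built asset gets a content hash in its file name, so it can be
  // served with a long-lived, immutable Cache-Control header.
  return <Image src={hero} alt="Hero image" />;
}
```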
But even with the new method, there are flaws:
What happens if your images are provided by users?
What happens if your content is in markdown?
Well, tough luck! You need to use the old method, which is passing a string with the image’s path to the component.

The old method doesn’t let you choose your caching mechanism. You are stuck with 60-second caching and that’s it. After that, every reload is a ping to the server. Granted, it’s just an ETag check, but it’s still a network request that adds at least 40ms of latency.

With the old method, you’re also subject to the server re-optimizing your image on the fly, which can add up to 3 seconds of delay.
Caching mechanism on the server
This one is also fun: optimized images are kept on the server for only 60 seconds.

After 60 seconds, Next.js will re-optimize your image on the next request. This can add between 200ms and 3,000ms. Yes, 3 seconds; that’s not a typo.

It doesn’t matter whether your image has changed or not; Next.js will re-optimize it.
The image optimization uses an AWS serverless function that executes from your own Next.js account.

While it’s very secure to separate images like this (each site is isolated), it’s still a serverless function executing every single minute if you have decent traffic.

Multiply this by hundreds of thousands of websites and you get a huge waste of resources; they’re certainly not winning any green award anytime soon…
And if you want to change that 60-second caching strategy?

Well, you couldn’t until very recently. That option, which I haven’t had time to test yet, was added after countless complaints over more than eight months.
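If I understand the change correctly, the option is images.minimumCacheTTL in next.config.js; like the author, I haven’t battle-tested it, so treat this as a sketch from the docs:

```javascript
// next.config.js — sketch; minimumCacheTTL is in seconds.
module.exports = {
  images: {
    // Cache optimized images for a day instead of the 60-second default.
    minimumCacheTTL: 60 * 60 * 24,
  },
};
```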
It’s important to remind people that these frameworks do a lot of good things for the uninitiated.

But it’s also fair to say they don’t do enough. The problems I listed are only the ones I encountered; there are others…

It’s a very complicated subject, and frameworks like Next.js help greatly to get the easy wins. Even after all my rambling, their solution still makes the overall situation better.

My point is more that, as developers, we need to have a degree of control over those solutions. The role of these frameworks is to give us the best defaults (or what they think those are) with plenty of handles to adapt to our situation.