With CSS filter effects and blend modes, we can now leverage various techniques for styling images directly in the browser. However, creating aesthetic theming isn’t all that filter effects are good for. You can use filters to indicate hover state, hide passwords, and now—for web performance.
While playing with profiling performance wins of using blend modes for duotone image effects (I’ll write up an article on this soon), I discovered something even more exciting. A major image optimization win! The idea is to reduce image contrast in the source image, reducing its file size, then boosting the contrast back up with CSS filters!

How It Works
Let’s put a fine point on exactly how this works:
- Reduce the image’s contrast using a linear transform function (Photoshop can do this)
- Apply a `contrast` filter in CSS to the image to make up for the contrast removal
Step one involves opening your image in a program that lets you reduce contrast linearly. Photoshop’s Legacy mode does a good job at this (Image > Adjustments > Brightness/Contrast):

Not all programs use the same functions to apply image transforms (for example, this would not work with the default macOS image editor, since it uses a different technique to reduce contrast). A lot of the work done to build image effects into the browser was initially done by Adobe, so it makes sense that Photoshop’s Legacy Mode aligns with browser image effects.
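(For the curious, the reason linearity matters: per the Filter Effects spec, CSS `contrast(k)` is itself a linear per-channel transfer, `output = (input − 0.5) × k + 0.5`, with channel values in the 0–1 range. A contrast reduction that is linear with slope `s` can therefore be undone almost exactly by `contrast(1/s)`, whereas a non-linear, curve-based reduction can’t.)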
Then, we apply some CSS filters to our image. The filters we’ll be using are `contrast` and (a little bit of) `brightness`. With the 50% Legacy Photoshop reduction, I applied `filter: contrast(1.75) brightness(1.2);` to each image.
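So the entire CSS side of the technique boils down to one rule (the class name here is just for illustration):

```css
/* Boost the contrast back up after the Photoshop Legacy reduction */
.boosted-img {
  filter: contrast(1.75) brightness(1.2);
}
```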
Major Savings
This technique is very effective for reducing image size and therefore the overall weight of your page. In the following study, I used 4 vibrant photos taken on an iPhone, applied a 50% reduction in contrast using Photoshop Legacy Mode, saved each photo at Maximum quality (10), and then applied `filter: contrast(1.75) brightness(1.2);` to each image. These are the results:

You can play with the live demo here to check it out for yourself!
In each of the above cases, we saved between 23% and 28% in image size by reducing and then reapplying the contrast using CSS filters, and that’s with each image saved at maximum quality.
If you look closely, you can see some legitimate losses in image quality, especially in majority-dark images. So this technique is not perfect, but it definitely demonstrates image savings in an interesting way.
Browser Support Considerations
Be aware that browser support for CSS filters is “pretty good”.
This browser support data is from Caniuse, which has more detail. A number indicates that the browser supports the feature at that version and up.
Desktop

| Chrome | Firefox | IE | Edge | Safari |
|---|---|---|---|---|
| 18* | 35 | No | 79 | 6* |

Mobile / Tablet

| Android Chrome | Android Firefox | Android | iOS Safari |
|---|---|---|---|
| 119 | 119 | 4.4* | 6.0-6.1* |
As you can see, Internet Explorer and Opera Mini lack support. Edge 16 (the latest version at the time of writing) supports CSS filters, and this technique works like a charm there. You’ll have to decide whether a reduced-contrast image is an acceptable fallback.
What About Repainting?
You may be thinking: “But while we’re saving on image size, aren’t we putting more work on the browser? Wouldn’t this affect performance?” That’s a great question! CSS filters do trigger a repaint because they set off `window.getComputedStyle()`. Let’s profile our example.
What I did was open an incognito window in Chrome, disable JavaScript (just to be safe, given the extensions I have), set the network to “Slow 3G”, and set the CPU to a 6x slowdown:

While the images took a while to load in, the actual repaint was pretty quick. With a 6x CPU slowdown, the longest individual Rasterize Paint took 0.27 ms, AKA 0.00027 seconds.
CSS filters originated from SVG filters and are relatively browser-optimized versions of the most popular SVG filter effect transformations. So I think it’s pretty safe to use this as a progressive enhancement at this point (while being aware of IE and Opera Mini users!).
Conclusion and the Future
There are still major savings to be had when reducing image quality (again, in this small study, the images were saved at high quality for a more balanced result). Running images through optimizers like ImageOptim, and sending smaller image files based on screen size (like responsive images in HTML or CSS), will give you even bigger savings.
In the web performance optimization world, I find image performance the most effective thing we can do to reduce web cruft and data for our users, since images are the largest chunk of what we send on the web (by far). If we can start leveraging modern CSS to help lift some of the weight of our images, we can look into a whole new world of optimization solutions.
For example, this could potentially be taken even further, playing with other CSS filters such as `saturate` and `brightness`. We could leverage automation tools like Gulp and Webpack to apply the image effects for us, just as we use automation tools to run our images through optimizers. Blending this technique with other best practices for image optimization can lead to major savings in the pixel-based assets we’re sending our users.
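As a rough sketch of what that automation could look like, here’s a hypothetical build step using the `sharp` image library, whose `linear(a, b)` operator applies `out = a × in + b` per channel. The values are illustrative: an exact 0.5 slope like this would pair with `filter: contrast(2)` in the CSS, while Photoshop’s Legacy −50 setting needed the empirically tuned `contrast(1.75) brightness(1.2)` used above.

```js
// Hypothetical build step: write a reduced-contrast copy of each image,
// to be served alongside a contrast-boosting CSS filter.
const sharp = require('sharp');

sharp('original.jpg')
  // Halve each channel's distance from mid-grey:
  // out = 0.5 * in + 64 (i.e. 128 stays 128)
  .linear(0.5, 64)
  .jpeg({ quality: 95 })
  .toFile('low-contrast.jpg')
  .then(() => console.log('wrote low-contrast.jpg'));
```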
This is a wicked cool trick. But…
If a user saves an image using this trick, does the user end up downloading the image without the contrast?
I would think yes. Although maybe if you get incredibly fancy you could draw the image to a canvas and apply the filter there and export that?!
Yep, you can’t save the image with the filter. You’d have to take a screenshot or include a download button.
In addition to that, won’t the images appear without the contrast in:
- RSS readers,
- automated post-to-mail subscribers, and
- anywhere else the content is used without the styles (think a mobile app pulling content from a WordPress site using the WP REST API)?
re: RSS… kinda depends. Maybe you apply the contrast in an automated fashion anyway, and use an inline `<img style="filter..." />`. Not all syndication is gonna leave inline styles alone, but some do.

Pretty much. Filters are non-destructive, so nothing is being done to the original file.
And we would convert this canvas to data or blob URL and replace the image URL with it.
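A minimal sketch of that combined idea, assuming a same-origin image and a browser that supports the 2D canvas context’s `filter` property:

```js
// Bake the CSS filter into actual pixels, then swap the <img> src to a
// blob URL so "Save Image As…" keeps the contrast. Same-origin images
// only; otherwise the canvas is tainted and toBlob() will throw.
function bakeFilter(img) {
  const canvas = document.createElement('canvas');
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext('2d');
  ctx.filter = 'contrast(1.75) brightness(1.2)'; // same values as the CSS
  ctx.drawImage(img, 0, 0);
  canvas.toBlob((blob) => {
    img.style.filter = 'none'; // avoid filtering twice
    img.src = URL.createObjectURL(blob);
  }, 'image/jpeg', 0.95);
}
```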
what about saving the images?
This should have been built into the image format. Why don’t they take advantage of something like this?
I believe JPEG does in fact do this (I know it encodes and quantizes luminosity separately, anyway) — tweaking its compression might be able to replicate these savings.
All this “redesaturation trick” does is reduce the color palette, allowing better image (notably for lossy formats like jpeg) and file (notably for lossless formats like png) compression.
You should be able to reduce color palette diversity without compromising color range/space, and without all the back-and-forth CSS nonsense.
I guess if you need IE11 support you can go the SVG route: define the filter as an SVG filter, insert the filter into the DOM, load the image using the SVG image tag, and apply the filter from CSS.
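Something like this (untested) sketch: the `feComponentTransfer` slopes and intercepts mirror `contrast(1.75)` and `brightness(1.2)`, and the file name and dimensions are placeholders.

```html
<svg width="800" height="600">
  <filter id="boost">
    <!-- contrast(1.75): out = 1.75 * in - (1.75 / 2) + 0.5 -->
    <feComponentTransfer>
      <feFuncR type="linear" slope="1.75" intercept="-0.375"/>
      <feFuncG type="linear" slope="1.75" intercept="-0.375"/>
      <feFuncB type="linear" slope="1.75" intercept="-0.375"/>
    </feComponentTransfer>
    <!-- brightness(1.2): out = 1.2 * in -->
    <feComponentTransfer>
      <feFuncR type="linear" slope="1.2"/>
      <feFuncG type="linear" slope="1.2"/>
      <feFuncB type="linear" slope="1.2"/>
    </feComponentTransfer>
  </filter>
  <image xlink:href="low-contrast.jpg" width="800" height="600"
         filter="url(#boost)"/>
</svg>
```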
This is almost identical to lowering JPEG compression quality. Lowering of contrast lowers pixel amplitudes, which lowers amplitudes of DCT coefficients, which given limited resolution of integer DCT coefficients, increases quantization. JPEG internally does the same thing using quantization tables.
So for straight-up compression you may be better off using MozJPEG, which tunes quantization itself directly.
Preprocessing like that does make a lot of sense in some situations. For example, if you’re going to apply a color gradient to an image, you might as well make the original grayscale (and avoid sending color channels). Or if only part of the image will be clearly visible after filtering, you can reduce file size by blurring the less important parts.
Finally, someone who actually knows what JPEGs are.
Seriously – don’t use this technique. Just save your images at a lower quality setting. You’ll have the same benefits, but with higher perceived image quality, and none of the downsides when CSS isn’t available.
Here’s another technique I learned when working in an image-processing job: add a blur filter to just the blue colour channel. Up to a point, you can’t really notice it, because human vision is incredibly bad at seeing detail in blue photoreceptors.
Definitely, this is of course not a standalone solution, it’s an additional image optimization! :) People should always run images through compressors and optimizers.
Community note: That’s an awful rude way to start a comment. Let’s keep things a bit more respectful.
Yes, if a user saves an image, it is saved without the contrast. Of course, you could still try to put the filtered image into a canvas element, so saving would work.
However, what you actually do is reduce the image signal and thus the bandwidth, then boost the signal afterwards, thereby losing the information that you removed before.
You can save about the same number of bytes by reducing the number of colors in the image, using tools like pngquant for PNGs, mozjpeg for JPEG, or svgo for SVG. Know your image optimizations, always use the best possible image format, and learn how to mix them to the greatest effect.
Nice and right on time. I’m using velocity.js to simulate a full screen animated gif and trying to cut image load latency without sacrificing too much quality… this could be useful. Thanks.
A combination of this and some additional filters could be useful in stopping people from stealing images from your site. At least the ones who are not that motivated.
Now this seems like a real practical use case for this! The article should be updated to note this awesome use case.
Did you try simply re-saving the photo without changing the contrast, to make sure the re-save alone didn’t reduce the size of the picture anyway?
Yes! Thank you for asking.
I also added the contrast back within Photoshop to see what effect that would have on the image. Between the starting image and the ending image where I removed and re-added contrast, the savings were only 5-20kb. The images above show savings of 500-900kb!
Thank you so much for this. Major time saver, as I edit a lot of images in Photoshop and then run them through Compressor.io, and the combination makes optimizing images very tedious.
Nice trick.
The examples you provided were all fairly hefty images to begin with, and even the resulting image sizes may not be ideal for the web.
Did you try out any smaller images to see if you still received the same percentage of bytes saving?
How about background images? I suppose this trick cannot be used for them?
I think you could use it if your background image gets applied to a pseudo-element.
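A sketch of that pseudo-element approach (the selector and image URL are hypothetical):

```css
.hero {
  position: relative;
  overflow: hidden;
}
.hero::before {
  content: '';
  position: absolute;
  top: 0; right: 0; bottom: 0; left: 0;
  z-index: -1; /* keep it behind the element's own content */
  background: url('low-contrast.jpg') center / cover no-repeat;
  filter: contrast(1.75) brightness(1.2);
}
```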
hmm! Wonder if the same concept could be applied to video?
I’ve mentioned this in the past (in a talk in fact), but this is something that can be done at the video level and with success as per Netflix.
The concept of authoring low contrast is the key. The post filtering is tertiary in this matter.
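The CSS side, at least, carries over directly, since `filter` applies to any element, video included:

```css
/* Same boost, applied to a video encoded with reduced contrast */
video.boosted {
  filter: contrast(1.75) brightness(1.2);
}
```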
So it works? That’s cool! Got a codepen?
I think it’s a nice tip but not that good: even though you save the user from downloading more data, you affect performance by rendering the images with filters.
There is a section in the article on this. Look up.
Hello and thank you for taking the time to experiment on this. It is an interesting approach but according to my experiments it is not a good idea.
When you lower the contrast, a compression algorithm takes advantage of this and applies more lossy compression. It is allowed to do that, since an area with less contrast has less detail and a viewer would not perceive it. Once you pull the contrast up again, you have tricked the compressor into applying heavier compression than is suitable.
In my tests an image using the contrast trick has worse quality than an image with comparable file size where the compression level is increased to match the file size of the contrast trick image.
I don’t have any major doubts about what you are saying, but Una presented tests, and you’ve just said you’ve done tests. Post ’em!
This technique is certainly clever, but I would not use or recommend this technique.
It is basically a reduction of palette by any other name.
You will introduce banding.
Simply reducing quality or bit depth in export should be expected to have similar – or even better – results.
Filter effects are not free. Image size is probably worth that trade-off, but 27ms is not .00027s, as the article claims. It is .027ms.
The big hint here of what is really going on is that both images are saved at maximum quality. But in one, information has been deleted. (Hence why the file size is smaller: you essentially reduced the quality before saving.) No filter will restore the deleted information, although a glance usually won’t show the difference. But the same could be said of reducing the export quality.
It’s a cool trick, but I am concerned about how the images will then appear on Google Images.
Definitely a fair point :) You’d lose the contrast on there, which as some people pointed out, might be ideal for photographers and others wanting to avoid people using their images without permission
Does it work on IE 10-11?
Ugh. Typo. I meant .027s.
Good on you for experimenting and thinking outside-of-the-box, but those with an understanding of how JPEG works will recognize that this is a very inferior technique to just re-compressing the image with JPEG on a lower quality setting.
A JPEG codec will examine your image and determine which series of techniques to use to get you the highest quality image for a given filesize. What you are suggesting here is basically only a single “tool” in the “toolbox” of algorithms JPEG might use to compress your image.
So it’s a cool idea – But don’t do this in real life. Just re-encode the images and let the codecs figure out how to get you the best bang-for-your-buck.
Hi! Thanks for your comment. However, I would urge you to read the entire post and consider that the tone of this comment feels a bit condescending. For example:
“those with an understanding of how JPEG works will recognize that this is a very inferior technique to just re-compressing the image with JPEG on a lower quality setting.” <– this is a bit presumptuous.
“What you are suggesting here is basically only a single “tool” in the “toolbox” of algorithms JPEG might use to compress your image.” <– yes, this is exactly what I’m saying.
Contrary to your statement, I have in fact done extensive research on how JPGs work and have passed that along via an hour-long pixel-based media compression talk at An Event Apart this year, discussing tooling for image compression while explaining why certain techniques work based on how certain formats (i.e. JPEG) compress images.
In this article I conclude with “Blending this technique with other best practices for image optimization, can lead to major savings in the pixel-based assets we’re sending our users” and include a link to a free online book on the topic which I tech edited.
I wrote this post to describe a new technique for optimizing images even further (apart from the standard practices), not about general image optimizing. Was that unclear in the title or introduction for this article?
I’m not trying to be condescending, but at the same time I happen to be a bit of an expert in this area, since I work on image and video codecs professionally for a living, and I’m not really stating my opinion. It’s a fact about the way that JPEG works. The technique proposed above will always produce images of equal or lesser quality at a given file size than a JPEG codec would. Full stop.
Take your first example above: you lowered the contrast and changed the file size from 3.2MB to 2.3MB. Awesome. But if instead you just lower the quality setting for the JPEG codec, you can still get a file that’s 2.3MB, but it would be higher visual fidelity than the picture your technique produces.
This is because you are forcing JPEG into using a single technique (contrast reduction) to compress the image, when normally it would pick the best of a variety of different techniques based on a “human perception/file size” tradeoff that was studied by the JPEG engineers decades ago.
I’m not saying this to be a jerk. Smart people accidentally reinvent things all the time as they arrive at the same logical conclusion given similar evidence. But what you are proposing was discovered a long time ago and is already integrated into the JPEG standard.
Why not apply the contrast again in the editor before saving, rather than in the browser? File size shouldn’t increase after the palette reduction has happened.
Hi, thanks for your question. I answered this above briefly (just now):
I did try this, to see if it would save image size from reducing some of the contrasting areas. Between the starting image and the ending image where I removed and re-added contrast, the difference was only 5-20kb. The images above show savings of 500-900kb.
So that makes me think that the real savings came from a reduced color palette.
Why isn’t this technique being utilised in image codecs?
Exactly! JPEG already does this.
Rather than lowering the contrast to “trick” JPEG into shrinking a file from 3.2MB to 2.3MB, just lower the output quality on the JPEG codec. You still get a 2.3MB file, but it will be higher quality than the contrast-reduced one would be.
Is it possible to add image comparisons (similar to what you did in the “Major Savings” section) of a JPEG that uses a higher compression coefficient, resulting in the same file size as your “savings”-oriented image? E.g. the original is 3.2 MB and your version is 2.3 MB: what does a 2.3 MB JPEG-compressed image look like? All image editing programs I’m aware of support restricting the size.
Visually beating the actual JPEG algorithm would be quite the achievement indeed!
Late to the party here, but it strikes me that this might be a very good technique for online businesses that have concerns about image piracy (I am thinking of stock photography sites in particular). Anyone trying to riff an image for free gets the desaturated version and you can leave off the lame watermarks.
That’s actually a pretty good use case.
And then the user screenshots the image instead.
But lame watermarks don’t work either as image restoration improves. There are algorithms that do a fantastic job of removing text over an image with results that are “good enough”. See things like Deep Prior, LapSRN, SRResNet, or other super-resolution image repairing techniques/software.
The thief doesn’t necessarily want the original. They want something “close enough” that they can use. Whether that means cropping out a watermark placed on the corner, removing a watermark placed over a solid color, screenshotting the image, etc.
The best practice to avoid art theft is to not put art on the web. The second is to accept that it will happen and attempt to legally defend your copyright (which is expensive, time consuming, and difficult over the internet). The common practice is to stop caring and accept that it will happen.
I love clever ideas like this. Nice work Una!
I’m already using `filter: blur()` to smooth out images blurred in Photoshop, but I’ll have to add in this trick to save a few more bytes.