These are both things that you do to assets on your website (things like .css files and .js files). They are both things that reduce the size of the file, making it more efficient in crossing the network between servers and browsers. As in, good for performance. The network is the speed bottleneck of the web and reducing file size helps.
But these two things are distinctly different. If you didn’t already know that, it’s worth understanding.
Minification does things like removing whitespace, removing comments, removing non-required semicolons, reducing hex code lengths…
… and stuff like that. The file remains perfectly valid code. You wouldn’t want to try to read it or work with it, but it’s not breaking any rules. The browser can read it and use it just like it could the original file.
Minification creates a new file that you ultimately use. For instance, you’d create a `style.css` that you work with. Then you could minify it into `style.min.css`.
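Purely as an illustration (a made-up snippet, not from any real project), a stylesheet like this:

/* primary button */
.button {
  color: #ffffff;
  margin: 0px 0px 10px 0px;
}

might come out of a minifier as:

.button{color:#fff;margin:0 0 10px}

Same rules, same meaning to the browser, just fewer bytes.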
Gzipping finds all repetitive strings and replaces them with pointers to the first instance of that string.
Julia Evans created a wonderful way to understand this (see her post and video). Take this first stanza of a poem:
Once upon a midnight dreary, while I {pon}dered weak an{d wea}{ry,}
Over many{ a }quaint{ and }curious volume of forgotten lore,
W{hile I }nodded, n{ear}ly napping, su{dde}n{ly }th{ere} ca{me }a t{apping,}
As{ of }so{me o}ne gent{ly }r{apping, }{rapping} at my chamb{er }door.
`’Tis{ some }visitor,’{ I }mu{tte}r{ed, }`t{apping at my chamber door} –
O{nly th}is,{ and }no{thi}{ng }m{ore}.
The text within the curly brackets has been identified by gzip as repetitive, so it can be replaced with a pointer that takes up less space than the text itself does.
This can be incredibly effective at reducing file size, especially with code, since code can be incredibly repetitive. Imagine how many instances of `<div` there are in an HTML file, or `{` in a CSS file.
You can create gzipped version of files (i.e. style.css.zip) but you rarely ever do that and the browser won’t know what to do with it.
On the web, gzipping is done directly by your server. It’s a matter of configuring the server to do it. Once that’s done, gzipping happens automatically; there isn’t any ongoing work you have to do. The server compresses the file and sends it across the network like that. The browser receives the file and unzips it before using it. I’ve never heard anyone mention anything about the overhead of zipping and unzipping, so I just assume it’s negligible and the benefits far outweigh the overhead.
Here’s how to enable it on Apache servers, where it uses the `mod_deflate` module. And H5BP offers server configurations for all the popular servers that include gzipping.
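If you just want the flavor of it, the heart of an Apache setup is a single directive per MIME type (a minimal sketch; the H5BP configs cover many more types and some old-browser quirks):

<IfModule mod_deflate.c>
  # Compress text-based responses before they go over the network
  AddOutputFilterByType DEFLATE text/html text/css text/javascript application/javascript application/json image/svg+xml
</IfModule>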
An Example
We’ll use the CSS file from Bootstrap since it’s such a common asset.

You’ll save about 17% minifying the CSS, 85% gzipping, or 86% if you do both.
Here’s the ideal situation when checking in DevTools that everything is working:

Gzipping is far more effective. Doing both is ideal.
Gzipping reduces the file size about five times as much as minifying does. But, you get a little boost from minifying as well, and since it likely requires little additional effort in a build step, you might as well.
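If you’d rather verify this from the command line than in DevTools, one way (the URL and path here are placeholders for your own site) is to request the file with an `Accept-Encoding: gzip` header and look for `Content-Encoding: gzip` in the response headers:

curl -s -H "Accept-Encoding: gzip" -o /dev/null -D - https://example.com/css/bootstrap.min.css

If that Content-Encoding line is missing, the server isn’t gzipping that file.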
There is also some evidence that browsers can read and parse a minified file faster:
As expected, minification helps parse-n-load in addition to network transmission time. This is probably due to the absence of comments and extra whitespace.
Microsoft is also starting to optimize their parsers for it:
So in Windows 10 and Microsoft Edge, we’ve added new fast paths, improved inlining and optimized some heuristics in Chakra’s JIT compiler to ensure that minified code runs as fast, if not faster than the non-minified versions. With these changes, the performance of individual code patterns minified using UglifyJS that we tested, improved between 20-50%.
Caching assets is also related to this conversation, as nothing is faster than a browser that doesn’t have to request an asset at all! There is plenty of information on that around the web (or in books), but we may just do a post on it soon with some tricks.
There is a third option which deserves some discussion: Compressing javascript files using javascript.
The source script will be compressed using a well-known method like gzip, lha etc.
The resulting string is then used as a parameter for a decompression javascript function. This works astonishingly well, since many decompression algos are quite simple to implement.
This might appear senseless in comparison to server-native gzip compression. It becomes interesting with respect to SEO bots: if those bots measure the unzipped payload, you might be a few points ahead. On the other hand, this method produces runtime overhead on the browser side, potentially producing a worse total page load time.
Anyway, it’s technically interesting and, as I said, should be discussed.
At work we have an ongoing discussion with our SEO guys, since only the gods know exactly how Google really interprets your site.
And you will lose cache on next pageviews…
Hey thanks for writing this up! Real knowledge that comes in handy.
For my own website, I haven’t been bothering with minification – it’s more of a tech experiment, so it makes sense there. I’m open to letting people see how I wrote my website, and if I were to show the site to someone and encounter an issue, I’d like to be able to debug anywhere; and while source maps alleviate this greatly, they’re still not perfect. It also cuts out a build step. (That said, for larger sites it certainly makes sense.)
If you want that, you can provide a source map file, which can be automatically generated using Gulp/Grunt etc. Also, in Chrome Dev tools, when you view a minified CSS file (or uglified JS file), you can click the {} button in the bottom left of the window and it will prettify the file for you…
Most people would debug your page in the dev tools anyway, so it doesn’t make sense to skip minifying; dev tools will deminify your code for you.
You can also run a minify task only on a production deployment and configure your server to gzip only in production environment.
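As a rough sketch of the minify-only-in-production idea (a hypothetical Gulp task; the gulp-if and gulp-clean-css plugins and the NODE_ENV check are assumptions, adapt to your own build):

var gulp = require('gulp');
var gulpif = require('gulp-if');
var cleanCSS = require('gulp-clean-css');

var isProduction = process.env.NODE_ENV === 'production';

gulp.task('css', function () {
  return gulp.src('src/css/*.css')
    .pipe(gulpif(isProduction, cleanCSS())) // only minify on production builds
    .pipe(gulp.dest('dist/css'));
});

Gzipping stays a server concern either way, so the dev environment can simply leave it off.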
Dev tools doesn’t “deminify” anything. It gives you access to the dom and styles, sure, but dom is often built using javascript, so you don’t get to see the original unminified markup. Also, dev tools doesn’t deminify your js in any sense of the word, which is prooobably what someone wants to see when they’re wondering how your site is built. Most other things are trivial.
Just so you know, minifying can (and will) decrease page load time. Less data to download = faster downloads. It doesn’t cost you anything and it takes just moments to set up properly. And, as has already been said, it’s 2015; if someone wants to debug your code, he’ll do it whether it’s minified or not.
One thing to keep in mind is that gzipping is a big no-no on HTTPS connections whenever secrets are involved. There’s no way to mitigate the attacks, so compression of all sorts should be turned off completely for secure connections.
That said, this is only a big deal when there are secrets in the compressed stream. So if you’re just serving up CSS and generic JavaScript off a separate domain that doesn’t have session cookies (or other secret bits in the header), that’s just fine. Just make sure you don’t gzip stuff on the domain that serves your HTML — especially if cookies get served along with it, or if any form of secret is enclosed in the document.
Not all environments are susceptible to CRIME attacks. See, for example, the Mitigation section in the same Wikipedia article.
How about the Obfuscation?
Oftentimes, the minification of JavaScript is alternatively called “obfuscation” because it makes the code very difficult to read.
There is JavaScript obfuscation, which is different from minification: where minification tries to reduce code, obfuscation reduces it and then adds code to make it harder to read.
Example: even if minification renames all the variables, those can be statistically renamed back, manually or using a tool like jsnice.org. On the other hand, obfuscation wraps your code in a huge mess of eval constructs, browser environment checks and `setInterval('debugger')` to annoy you when using developer tools. Also, as this was just an example, minification and obfuscation tools can work in other ways, so your mileage may vary.
No doubt minification is a form of obfuscation though, even if that’s not its sole purpose.
Just wanted to pitch in on this:
This is almost always true, except when you have a really big file. Let’s say you’re writing a game entirely in JavaScript and your files are massive. Or worse, maybe you wrote the game in another language, and you compiled that into JavaScript (see Emscripten, etc.).
In cases like these, you may find it beneficial to gzip the files ahead of time and configure your web server to serve the gzipped files directly (along with the appropriate content type headers, of course). This will reduce the amount of work that your web server has to do each time someone loads the page (of course, this could be mitigated in other ways, too, like using browser caching).
You can do that using something like this in Apache (as per this answer on SO):
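A rough sketch of rules along those lines (not the exact SO answer; it assumes mod_rewrite and mod_headers are enabled and that this lives in an .htaccess file):

RewriteEngine On

# If the browser accepts gzip and a pre-compressed sibling exists, serve that instead
RewriteCond %{HTTP:Accept-Encoding} gzip
RewriteCond %{REQUEST_FILENAME}.gz -s
RewriteRule ^(.+)\.(css|js)$ $1.$2.gz [QSA]

# Keep the right Content-Type, mark the response as gzip-encoded,
# and stop mod_deflate from compressing it a second time
<FilesMatch "\.css\.gz$">
  ForceType text/css
  Header set Content-Encoding gzip
  SetEnv no-gzip 1
</FilesMatch>
<FilesMatch "\.js\.gz$">
  ForceType application/javascript
  Header set Content-Encoding gzip
  SetEnv no-gzip 1
</FilesMatch>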
Those rules basically say “serve `.css.gz` files with a content type of `text/css`, and make sure not to gzip them (because they’re already gzipped).” Same goes for the `.js.gz` files.

Other than that, couldn’t agree more with the article. Gzip your (textual) assets!
But by including the word “rarely” he means that the gzipping is in fact occasionally done. So, the statement “You can create gzipped version of files (i.e. style.css.zip) but you rarely ever do that and the browser won’t know what to do with it.” is totally correct.
Nice comment Agop, I was wondering under what circumstances you would serve the gzipped files directly. :)
Add something like the following to an Nginx conf to enable gzip:
gzip on;
gzip_disable "msie6";
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
Note, gzip_comp_level runs from 1-9. Higher values give greater compression but require more resources to compute. A level of 5 or 6 is usually the sweet spot.
Some hosts don’t allow for on-the-fly gzipping (like Amazon Web Services or NearlyFreeSpeech.NET), which means one can end up with files like `styles.min.css.gz` depending on the environment. While this is a bit of a bummer, it does end up faster for the end consumer:

The server doesn’t spend any time compressing the file.

Since you’re already doing it ahead of time, you can gzip the file at maximum compression, which shaves off a few additional percentage points and decreases decompression time (gzip is unusual among compression technologies in that it inflates faster the harder its compression).

Heck, if it’s a really well-trafficked file, you can go ahead and zopfli it, which takes forever but ends up with the smallest damn file possible.
Because of my host, I have to “pre-zip” everything, but storage space is cheap. It just takes a little bit of `.htaccess` configuration to get Apache to use the `.gz` version instead when `Accept-Encoding` is sent. This gets even better with SVG files, since any conforming SVG viewer is required to also handle SVGZ files. My testing indicates that all browsers (even IE9!) do indeed conform, so I needn’t include the original `.svg` at all.

One reason why you might want to pre-gzip your assets is that you can increase the compression level. Many compression schemes allow you to supply a compression level (like 1 to 10) that specifies how aggressively the algorithm attempts to compress the data. Many times, a web server’s default configuration uses a level in the middle (like 5), to try to balance the CPU cost of doing the compression against the byte savings going over the wire. Some services like Amazon S3 allow you to upload pre-gzipped assets, so your servers (and theirs) don’t need to spend any CPU time compressing and your users get the smallest files possible. A Java-based tool called JetS3t makes uploading compressed content a breeze. Building something automated to fit this into your workflow is a bit complicated, but doable.
Unfortunately, gzipping would not get around the IE9 nonsense that limits the amount of CSS the browser will read. Right?
Only minifying solves that problem.
That’s caused by the number of CSS selectors, not the file size. A minifier may optimise the style sheet and reduce the number of selectors, which may appear to help the problem, but really you want something like blessed, which splits up your CSS file and will allow your app to have as many rules as you like and never break :)
Gotta be honest … even the largest, most enterprise-y apps shouldn’t have that many selectors. If you’re worried about hitting a 4k+ selector limit, you need to quit worrying about GZIP and start paring down your CSS.
With IE there are 2 limits to contend with regarding CSS. There is a limit on the number of selectors AND a limit on the number of style sheet includes.
Luckily newer versions of IE have resolved these issues.
If the file is less than 1500 bytes, then it shouldn’t be compressed. Compressing a file that already fits into the MTU window only wastes CPU cycles on both ends of the connection. Depending on how much compression you’re doing, the minimum file size threshold may need to be as high as 5KB.
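nginx has a directive for exactly this; something like the following (the 1500 mirrors the MTU reasoning above, tune it for your own assets) could be added to the gzip block shown earlier:

gzip_min_length 1500;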
Good to read about this. Thanks for writing, and for the useful comments.
I found it a bit weird that you write about minification yet you don’t minify your own HTML :)
I’m not that into minifying HTML. Whitespace in HTML is still meaningful (e.g. this).
Well, you can do something called “safe minification” where every sequence of whitespace characters is replaced by one space, with the same result. Now, take a look at how much (in bytes) you could save just by replacing all the tabs and newline characters with spaces.
This is the exact same thing you do with CSS or JS. And no, whitespace in HTML is not meaningful, unless it’s in `<pre>..</pre>`.

You should write a tutorial about all this that I could read!
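For what it’s worth, the “safe minification” described above can be a one-line replace (a naive sketch only, and, as the next reply points out, it will also mangle whitespace-sensitive content like `<pre>`):

// Naive "safe" HTML minification: collapse every run of whitespace to a single space.
// This breaks <pre>, <textarea>, inline scripts, etc., so treat it as an illustration only.
function collapseWhitespace(html) {
  return html.replace(/\s+/g, ' ');
}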
@Artur: the problem with automated HTML minification is that you can’t know which elements have `white-space: pre` or other similar properties that change the way white space is dealt with. This is why people don’t do it.

Emerging technologies for producing pages can result in “minified” HTML output. For example, it happens as a side effect when creating isomorphic React sites: there are barely any line breaks on pages rendered by React.
These sites shouldn’t be mistaken for using minified HTML, because they’re not. It is just a result of how the Virtual DOM and page generation work: there are no templates, so there is no need for human-formatted markup.
@chris haha, nope.
@veso well, the spec says (in other words): if you want whitespace, use `<pre>`. The thing is that most developers don’t know that, and you can see ‘hacks’ like the one mentioned in the article. As for React, never used it, so I can’t comment on that.

There are lots of minification techniques that require you to change your code style (because minifiers can only be so smart) and that get you a lot more.
E.g. never referencing a method name twice, and using things like `function … { var LEN = 'length'; if (a[LEN] > 2 …`. Using `a[LEN]` instead of `a.length` everywhere allows the minifier to replace it with `a[A]`, so you end up with dozens of `a[A]` in your code (depending on the method, the local `A` will be something else). Then when gzip comes along, it will rip through those repeated patterns like nothing!
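To make that concrete, here’s a tiny hypothetical example of the pattern being described:

// Store the property name once; a minifier can shorten LEN to a one-letter name,
// and gzip then sees the same short pattern repeated over and over.
function longestWord(words) {
  var LEN = 'length';
  var best = '';
  for (var i = 0; i < words[LEN]; i++) {
    if (words[i][LEN] > best[LEN]) {
      best = words[i];
    }
  }
  return best;
}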
And that’s why you should always test your code after minification, but more than that — you should avoid writing code that can be misunderstood or is difficult to read (both for humans and computers).
Small correction re `.zip`: that’s not gzip (unless you renamed the file). Gzipping that file would by default give you `style.css.gz`.
You should also run your CSS through something like UNCSS to get rid of the 90% of Bootstrap you are not using.
Manually gzipping when using a service like AWS S3, then uploading to the bucket after renaming from `style.css.gz` to `style.css`: if you don’t set the `Content-Encoding: gzip` header, the gzipped CSS is sent to your users as-is, which gives a clear understanding of how important that header is. Also, minified files can work directly with the HTML, but gzipped files are meaningless to browsers if there aren’t headers telling them to unzip them first.
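A sketch of that workflow (bucket name and paths are placeholders, and it assumes the AWS CLI), where the important part is the `--content-encoding gzip` flag on upload:

# Pre-compress at maximum level, keeping the original file around
gzip -9 -c style.min.css > style.min.css.gz

# Upload under the original name, telling S3 (and thus browsers) it's gzip-encoded
aws s3 cp style.min.css.gz s3://your-bucket/css/style.min.css --content-encoding gzip --content-type text/css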
If anyone’s not clear on the benefits of using source maps for CSS (especially useful if you’re using Sass), I wrote a blog post on it not too long ago:
http://sheelahb.com/blog/up-your-efficiency-with-zurb-foundation-css-source-maps/
Well, if you’d like to have a peek inside a gzipped file, you can dump it using defdb or produce compression efficiency heat maps using gzthermal.
gzthermal:
http://encode.ru/threads/1889-gzthermal-pseudo-thermal-view-of-Gzip-Deflate-compression-efficiency
defdb:
http://encode.ru/threads/1428-defdb-a-tool-to-dump-the-deflate-stream-from-gz-and-png-files
If you are gzipping your files manually please consider using zopfli or kzip+kzip2gz rather than the gzip command since both produce smaller files (even smaller than “gzip -9n”).
For instance, on the book1 file:
312,275 bytes (gzip -9n)
299,455 bytes (kzip, after zip to gz conversion)
299,504 bytes (zopfli)
298,695 bytes (zopfli --i1000)
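For reference, the zopfli invocation is a drop-in for gzip on single files (assuming the stock zopfli binary; it writes a `.gz` next to the original, and `--i1000` is the iteration count used above, which is very slow):

zopfli --i1000 style.min.css

That produces style.min.css.gz, ready to be served the same way as any other pre-gzipped file.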
Zopfli:
https://github.com/google/zopfli
The faster and better KrzYmod customized version (includes binaries):
http://encode.ru/threads/2176-Zopfli-amp-ZopfliPNG-KrzYmod
KZIP:
http://advsys.net/ken/utils.htm
http://www.jonof.id.au/kenutils
kzip2gz:
http://encode.ru/threads/1630-kzip2gz-a-zip-%28single-file-amp-deflate-compression%29-to-gz-converter
Running deflopt and/or defluff afterwards can save a few more bytes.
Defluff:
http://encode.ru/threads/1214-defluff-a-deflate-huffman-optimizer
If you have a lot of CPU resources and time, you can recompress a file produced by kzip+kzip2gz using rezop (this will keep the block splitting done by kzip and recompress each deflate block using the Zopfli algorithm), and finally use huffmix to cherry-pick the smallest blocks at the end.
Huffmix was written to speed up the KZIP/PNGOUT random setting; if you really need to save a few extra bytes, you can loop through KZIP random runs as explained on the Huffmix page.
Rezop:
http://encode.ru/threads/2204-rezop-recompress-using-zopfli-while-preserving-blocksplit
Huffmix:
http://encode.ru/threads/1313-Huffmix-a-PNGOUT-r-catalyst
This is awesome. Would you mind putting this at a URL I could link to, for other people? (WordPress comment fragments can act strangely.)
Oh and since it is written large and bold I have to correct this sentence:
“Gzipping finds all repetitive strings and replaces them with pointers to the first instance of that string.”
LZ matches usually refer to the last instance, since distances are expressed relative to the decoding point (and not the beginning of the file), and since the longer the distance, the more bits are needed to record it.
If you’re not afraid of hearing French* I made a video to explain the whole LZ77/LZSS thing and the differences between greedy/optimal/lazy parsers.
Video (in French) skip to 3:25 for a step by step gzip stream decoding:
*otherwise just mute sound and look how the arrows point to previous text references.
Worth mentioning that minified files are slightly more likely to stay in the browser’s cache longer, since they are smaller & won’t trigger the flushing as often. Might be only 0.01% improvement, but when you have thousands of visits, would help prevent re-requesting of assets.
Even more important with the hard storage limits of mobile & AppCache.
Oh, excellent point. Also applies to localStorage caching and the like.
I’ve been using my chart on https://mathiasbynens.be/demo/jquery-size to explain this. IMHO it nicely visualizes the difference.
Wow! That just proved that zopfli is overrated. Good old gzip is fine.
@Peter no, it’s not. As you can see from Mathias’ chart, he ran only 15 iterations. It’s completely worth zopfling files that are most important for page load, like CSS, JS, etc. Think about all those saved milliseconds :D
Very excellent article! We develop a lot in WordPress for client, so often times we are using pre-configured plugins for this like W3 Total Cache, BWP Minify, Super Cache, etc. Any thoughts on these types of Minify and Gzipping tools?
Thanks!
Larry Wolf
I had a similar revelation some time ago about trying to optimize a JSON blob: http://www.peterbe.com/plog/gzip-rules-the-world-of-optimization