HTTP/2 – A Real-World Performance Test and Analysis

By David Attard

Perhaps you’ve heard of HTTP/2? It’s not just an idea, it’s a real technology, and slowly but surely, hosting companies and CDN services have been rolling it out on their servers. Much has been said about the benefits of using HTTP/2 instead of HTTP1.x, but the proof of the pudding is in the eating.

Today we’re going to run a few real-world tests, take some timings, and see what results we can extract from all this.

Why HTTP/2?

If you haven’t read about HTTP/2, may I suggest you have a look at a few articles. There’s the HTTP/2 FAQ, which gives you all the nitty-gritty technical details, whilst I’ve also written a few articles about HTTP/2 myself where I try to tone down the tech and focus mostly on the why and the how of HTTP/2.

In a nutshell, HTTP/2 has been released to address the inherent problems of HTTP1.x:

  1. HTTP/2 is binary instead of textual like HTTP1.x – this makes the transfer and parsing of data over HTTP/2 inherently more machine-friendly, thus faster, more efficient and less error-prone.
  2. HTTP/2 is fully multiplexed, allowing multiple files and requests to be transferred at the same time, as opposed to HTTP1.x, which accepted only a single request per connection at a time.
  3. HTTP/2 uses the same connection for transferring different files and requests, avoiding the heavy operation of opening a new connection for every file which needs to be transferred between a client and a server.
  4. HTTP/2 has header compression built-in which is another way of removing several of the overheads associated with HTTP1.x having to retrieve several different resources from the same or multiple web servers.
  5. HTTP/2 allows servers to push required resources proactively, rather than waiting for the client browser to request each file once it discovers it needs it.
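
Points 2 and 3 are where most of the gains come from. As a rough illustration (a toy model, not a benchmark; the six-connection limit and round-trip figures are assumptions about typical browser behavior), here is how round trips alone stack up for the two protocols:

```python
import math

# Toy model (not a benchmark): how many round trips it takes to fetch
# N resources, comparing HTTP/1.x-style requests spread over a handful
# of parallel connections with HTTP/2-style multiplexing.

def http1_round_trip_ms(resources, rtt_ms, parallel_connections=6):
    # HTTP1.x: each request occupies a connection for roughly one round
    # trip; browsers typically open ~6 parallel connections per host.
    rounds = math.ceil(resources / parallel_connections)
    return rounds * rtt_ms

def http2_round_trip_ms(resources, rtt_ms):
    # HTTP/2: all requests are multiplexed over a single connection, so
    # (ignoring bandwidth) they can all be in flight within one round trip.
    return rtt_ms

# 82 requests (as in the tests below) at a 100 ms round trip:
print(http1_round_trip_ms(82, 100))  # 1400 ms spent on round trips alone
print(http2_round_trip_ms(82, 100))  # 100 ms
```

The model ignores bandwidth and server processing entirely, which is exactly why real-world measurements like the ones below are still needed.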

This is a fair (if simplistic) depiction of how HTTP/2 is better than HTTP1.x. Rather than the browser having to go back to the server to fetch every single resource, it picks up all the resources and transfers them at once.

A semi-scientific test of HTTP/2 performance

Theory is great, but it’s more convincing if we can see some real data and real performance improvements of HTTP/2 over HTTP1.x. We’re going to run a few tests to determine whether we see a marked improvement in performance.

Why are we calling this a semi-scientific test?

If this were a lab, or even a development environment where we wanted to demonstrate exact results, we’d eliminate all variables and test just the performance of the same HTML content, once using HTTP1.x and once using HTTP/2.

Yet most of us don’t live in a development environment. Our web applications and sites operate in the real world, in environments where fluctuations occur for all sorts of valid reasons. So while lab testing is great and definitely required, for this test we’re going out into the real world, running some tests on a (simulated) real website and comparing its performance.

We’re going to be using a default one-page Bootstrap template (Zebre) for several reasons:

  1. It’s a very real-world example of what a modern website looks like today
  2. It’s got quite a varied set of resources, typical of sites today, which would normally go through a number of performance optimizations under HTTP1.x circumstances:
    • 25 images
    • 6 JS scripts
    • 7 CSS files
  3. It’s based on WordPress so we’ll be able to perform a number of HTTP1.x based optimizations to push its performance as far as it can go
  4. It was given out for free in January by ThemeForest. This was great timing, what better real-world test than using a premium theme by an elite author on ThemeForest?

We’ll be running these tests on a brand new account powered by Kinsta managed WordPress hosting, a service we’ve discovered lately and whose performance we find great. We’re doing this because we want to avoid the stressed environments of shared hosting accounts. To reduce the external influence of other sites operating on the same account at the same time, this environment will be used solely for the purpose of this test.

We ran the tests on the lowest plan because we just need to test a single WordPress site. In reality, unlike most hosting services, there is no difference in speed/performance between the plans; the larger plans just have the capacity for more sites. We then set up one of the domains we hoard and installed WordPress on it.

We’ve also chosen to run these tests on WordPress.

The reason for doing that is convenience more than anything else. Doing all of these tests on manual HTML would require quite a lot of time to complete. We’d rather use that time to do more extensive and constructive tests.

Using WordPress, we can enable such plugins as:

  • A caching plugin (to remove generation-time discrepancies as much as possible)
  • A combination and minification plugin to perform optimizations based on HTTP1.x
  • A CDN plugin to easily integrate with a CDN for the CDN-based HTTP/2 tests

We set up the Zebre theme and installed several plugins. Once again, this makes the test very realistic: you’re hardly going to find a WordPress site without a bunch of plugins installed.

We also imported the Zebre theme demo data to have a nicely populated theme with plenty of images, making this site an ideal candidate for HTTP/2 testing.

The final thing we did was make sure there was page caching in place; we just want to make sure we’re not suffering from drastic fluctuations due to page generation times. The great thing is that with Kinsta there’s no need for any kind of caching plugin, as page caching is fully built into the service at the server level.

The final page looked a little like this:

That’s a Zebra!

And this is below the fold:

We’re ready for the first tests.

Test 1 – HTTP1 – caching but no other optimizations

Let’s start running some tests to make sure we have a good test bed and get some baseline results.

We’re running these tests with only WordPress caching – no other optimizations.

| Testing Site | Location | Page Load Time | Total Page Size | Requests |
|---|---|---|---|---|
| GTMetrix | Vancouver | 3.3s | 7.3MB | 82 |
| Pingdom tools | New York | 1.25s | 7.3MB | 82 |

There’s clearly something fishy going on; the load times are much too different. Oh yes: the Google Cloud Platform’s US Central servers are located in Iowa, making Pingdom tools’ New York test location much closer than GTMetrix’s Vancouver, skewing the results in favor of New York.

You probably know that if you want to improve the performance of your site, there is one very simple solution: host your site or application as physically close as possible to the location of your visitors. That’s the same concept CDNs use to boost performance. The closer the visitors are to the server location of the site, the better the loading time.
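
To put rough numbers on that, physics alone sets a floor on latency. A back-of-envelope sketch, assuming signals in fiber travel at about two-thirds the speed of light (the distances below are rough, assumed figures):

```python
# Back-of-envelope: the minimum round-trip time imposed by distance
# alone. Light in fiber travels at roughly 2/3 of its vacuum speed,
# i.e. about 200 km per millisecond.

def min_rtt_ms(distance_km, km_per_ms=200):
    return 2 * distance_km / km_per_ms  # there and back

print(round(min_rtt_ms(1000)))  # 10 ms  (roughly Iowa to Dallas)
print(round(min_rtt_ms(7000)))  # 70 ms  (roughly Iowa to Stockholm)
```

Real round-trip times are several times higher than this floor, since routes are indirect and routers add queuing delay, but the proportions explain the gap we’re seeing between nearby and transatlantic test locations.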

For that reason, we’re going to run two types of tests. One will have the test location very close to the hosting server. For the other, we’re going to amplify the problem of distance: we’ll perform a transatlantic trip with our testing, from the US to Europe, and see whether the HTTP/2 optimizations result in better performance or not.

Let’s try to find a similar testing location on both test services. Dallas, Texas is a common testing ground, so we’ll use that for the physically close location. For the second location, we’re going to use London and Stockholm, since there isn’t a shared European location.

| Testing Site | Location | Page Load Time | Total Page Size | Requests |
|---|---|---|---|---|
| Pingdom tools | Dallas | 2.15s | 7.3MB | 82 |

That’s better. Let’s run another couple of tests.

| Testing Site | Location | Page Load Time | Total Page Size | Requests |
|---|---|---|---|---|
| GTMetrix | Dallas | 1.6s | 7.3MB | 83 |
| Pingdom tools | Dallas | 1.74s | 7.3MB | 82 |
| GTMetrix | London | 2.6s | 7.3MB | 82 |
| Pingdom tools | Stockholm | 2.4s | 7.3MB | 82 |

You might notice there are a few fluctuations in the requests. We believe these are coming from external scripts being called, which sometimes differ in the number of requests they generate. In fact, although the loading times seem to vary by about a second, by taking a look at the waterfall graph, we can see that the assets on the site are delivered pretty consistently. It’s the external assets (specifically: fonts) which fluctuate widely.

We can also clearly see how distance affects the loading time significantly, by about a second.

Before we continue, you’ll also notice that our speed optimization score is miserable. That’s why for our second round of tests we’re going to perform a number of speed optimizations.

Test 2 – HTTP1 with performance optimizations and caching

Now, given that we know that HTTP1.x is very inefficient in the handling of requests, we’re going to do a round of performance optimizations.

We’re going to install Hummingbird from WPMU DEV on the WordPress installation. This is a plugin which handles page-load optimizations without caching. Exactly what we need.

We’ll be enabling most of the optimizations which focus on reducing requests and combining files as much as possible.

  • Minification of CSS and JS files
  • Combining of CSS and JS files
  • Enabling of GZIP compression
  • Enabling of browser caching
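
GZIP in particular is low-hanging fruit for text assets. A quick sketch with Python’s standard library, using a made-up repetitive CSS snippet, shows why:

```python
import gzip

# Why GZIP helps: text assets like CSS and JS are highly repetitive,
# so they compress extremely well. The snippet below is invented for
# illustration; real-world CSS typically shrinks by 70-90%.
css = ".btn { color: #fff; padding: 4px; border-radius: 2px; }\n" * 100
raw = css.encode("utf-8")
compressed = gzip.compress(raw)

print(len(raw), len(compressed))
assert len(compressed) < len(raw) // 10  # over 90% smaller for this input
```

Real stylesheets won’t compress quite this dramatically, but the direction of the effect is the same, and the server-side cost of compressing is tiny compared to the transfer time saved.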

We’re not going to optimize the images because this would totally skew the results.

As you can see below, following our optimization, we have a near perfect score for everything except images. We’re going to leave the images unoptimized on purpose so that we retain their large size and have a good “load” to carry.

Let’s flush the caches and perform a second run of tests. Immediately we can see a drastic improvement.

Never mind the C on YSlow; it’s because we’re not using a CDN and some of the external resources (the fonts) cannot be browser-cached.

| Testing Site | Location | Page Load Time | Total Page Size | Requests |
|---|---|---|---|---|
| GTMetrix | Dallas | 1.9s | 7.25MB | 56 |
| Pingdom tools | Dallas | 1.6s | 7.2MB | 56 |
| GTMetrix | London | 2.7s | 7.25MB | 56 |
| Pingdom tools | Stockholm | 2.28s | 7.3MB | 56 |

We can see quite a nice improvement on the site. Next up, we’re going to enable HTTPS on the site. This is a prerequisite for setting up HTTP/2.

Test 3 – HTTP/2 without optimizations and caching

We’ll be using the Let’s Encrypt functionality to create a free SSL certificate. This is built into Kinsta, which means setting up HTTPS should be pretty straightforward.

Once we’ve generated an HTTPS certificate, we’ll be using the Really Simple SSL WordPress plugin to force HTTPS across the site.

This plugin checks whether a secure certificate for the domain exists on your server and, if it does, forces HTTPS across your WordPress site. Really and truly, this plugin makes implementing HTTPS on your site a breeze. If you’re performing a migration from HTTP to HTTPS, do not forget to perform a full 301 redirection from HTTP to HTTPS, so that you don’t lose any traffic or search engine rankings whilst forcing HTTPS on your site.

Once we’ve fully enabled and tested HTTPS on the website, we might need to do a little magic to start serving resources over HTTP/2, although most servers today will switch you directly to HTTP/2 if you’re running an SSL site.

Kinsta runs on Nginx, and enables HTTP/2 by default on SSL sites, so enabling SSL is enough to switch the whole site to HTTP/2.
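
On self-managed Nginx setups, that “magic” usually amounts to a single keyword on the listen directive. A minimal sketch (the hostname and certificate paths are placeholders, not the actual test site’s configuration):

```nginx
server {
    # "http2" on the listen directive is all Nginx (1.9.5+) needs to
    # serve HTTP/2 alongside TLS; paths below are placeholders.
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```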

Once we’ve performed the configuration, our site should now be served over HTTP/2. To confirm that the site is running on HTTP/2, we’ve installed this nifty Chrome extension which checks which protocols are supported by our site.
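
Under the hood, extensions like that check which protocol the server negotiates during the TLS handshake via ALPN; “h2” means HTTP/2. A small sketch using Python’s standard library (it needs network access, and the host name is just an example):

```python
import socket
import ssl

# During the TLS handshake, client and server agree on an application
# protocol via ALPN (Application-Layer Protocol Negotiation).
# "h2" means the server will speak HTTP/2 on this connection.

def negotiated_protocol(host, port=443, timeout=5):
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()

# Example (the actual result depends on the server):
# negotiated_protocol("example.com") -> "h2" or "http/1.1"
```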

Once we’ve confirmed that HTTP/2 is up and running nicely on the site, we can run another batch of tests.

| Testing Site | Location | Page Load Time | Total Page Size | Requests |
|---|---|---|---|---|
| GTMetrix | Dallas | 2.7s | 7.24MB | 82 |
| Pingdom tools* | Dallas | 2.04s | 7.3MB | 82 |
| GTMetrix | London | 2.4s | 7.24MB | 82 |
| Pingdom tools* | Stockholm | 2.69s | 7.3MB | 82 |

*Unfortunately, Pingdom tools uses Chrome 39 to perform its tests. This version of Chrome does not support HTTP/2, so we won’t be able to realistically measure the speed improvements. We’ll run the tests regardless, so that we have a benchmark to compare with.

Test 4 – HTTP/2 with performance optimizations and caching

Now that we’ve seen HTTP/2 without any performance optimizations, it’s also a good idea to check whether HTTP1-based performance optimizations can and will make any difference once HTTP/2 is enabled.

There are two ways of thinking about this:

  • Against: To perform optimizations aimed at reducing connections and size, we are adding performance overhead to the site (whilst the server performs minification and combination of files), therefore there is a negative effect on the performance.
  • In favor: Performing such minification and combination of files and other optimizations will have a performance improvement regardless of protocol, particularly minification which is essentially reducing the size of resources which need to be delivered. Any performance overhead can be mitigated using caching.

| Testing Site | Location | Page Load Time | Total Page Size | Requests |
|---|---|---|---|---|
| GTMetrix | Dallas | 1.0s | 6.94MB | 42 |
| Pingdom tools** | Dallas | 1.45s | 7.3MB | 56 |
| GTMetrix | London | 2.5s | 7.21MB | 56 |
| Pingdom tools** | Stockholm | 2.46s | 7.3MB | 56 |

**HTTP/2 not supported

Test 5 – CDN with performance optimizations and caching (no HTTP/2)

You’ve probably seen over and over again how one of the main ways to improve the performance of a site is to implement a CDN (Content Delivery Network).

But why should a CDN still be required if we are now using HTTP/2?

There is still going to be a need for a CDN, even with HTTP/2 in place. The reason is that besides a CDN improving performance from an infrastructure point of view (more powerful servers to handle the load of traffic), a CDN actually reduces the distance that the heaviest resources of your website need to travel.

By using a CDN, resources such as images, CSS and JS files are going to be served from a location which is (typically) physically closer to your end user than your website’s hosting server.

This has an implicit performance advantage: the shorter the distance content needs to travel, the faster your website will load. This is something we’ve already encountered in our initial tests above: physically closer test locations achieve much better loading times.

For our tests, we’re going to run our website on an Incapsula CDN server, one of the CDN services which we’ve been using for our sites lately. Of course, any CDN will have the same or similar benefits.

There are a couple of ways that your typical CDN will work:

  • URL rewrite: you install a plugin or write code so that the addresses of resources are rewritten such that they are served from the CDN rather than from your site’s URL
  • Reverse proxy: you make DNS changes such that the CDN handles the bulk of your traffic. The CDN service then sends the requests for dynamic content to your web server.
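
The URL-rewrite approach is simple enough to sketch. A hypothetical example (the hostnames and the exact extension list are invented for illustration, not what any particular CDN plugin does):

```python
import re

# Rewrite static-asset URLs in generated HTML so they are served from a
# CDN hostname instead of the site's own. Hostnames here are made up.
CDN_HOST = "cdn.example.com"
STATIC_EXTENSIONS = r"\.(?:png|jpe?g|gif|css|js|woff2?)"

def rewrite_to_cdn(html, site_host="www.example.com"):
    # Only rewrite URLs that end in a static-asset extension; dynamic
    # pages keep pointing at the origin server. The lookahead stops the
    # extension from matching mid-filename.
    pattern = re.compile(
        r"(https?://)" + re.escape(site_host)
        + r"(/[^\"' ]*" + STATIC_EXTENSIONS + r")(?=[\"' >]|$)"
    )
    return pattern.sub(r"\1" + CDN_HOST + r"\2", html)

html = '<img src="https://www.example.com/wp-content/zebra.png">'
print(rewrite_to_cdn(html))
# -> <img src="https://cdn.example.com/wp-content/zebra.png">
```

Reverse-proxy CDNs skip this step entirely, since all requests already pass through the CDN’s hostname.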

| Testing Site | Location | Page Load Time | Total Page Size | Requests |
|---|---|---|---|---|
| GTMetrix | Dallas | 1.5s | 7.21MB | 61 |
| Pingdom tools | Dallas | 1.65s | 7.3MB | 61 |
| GTMetrix | London | 2.2s | 7.21MB | 61 |
| Pingdom tools | Stockholm | 1.24s | 7.3MB | 61 |

Test 6 – CDN with performance optimizations and caching and HTTP/2

The final test which we’re going to perform is implementing all possible optimizations we can. That means we’re running a CDN using HTTP/2 on a site running HTTP/2, where all page-load optimizations have been performed.

| Testing Site | Location | Page Load Time | Total Page Size | Requests |
|---|---|---|---|---|
| GTMetrix | Dallas | 0.9s | 6.91MB | 44 |
| Pingdom tools** | Dallas | 1.6s | 7.3MB | 61 |
| GTMetrix | London | 1.9s | 6.90MB | 44 |
| Pingdom tools** | Stockholm | 1.41s | 7.3MB | 61 |

**HTTP/2 not supported

Nice! We’ve got a sub-second loading time for a 7MB website! That’s an impressive result if you ask me!

We can clearly see the positive effect HTTP/2 is having on the site: comparing the loading times, there is a 0.5-second difference. Given that we’re operating in an environment which loads in less than 2 seconds in the worst-case scenario, a 0.5-second difference is a HUGE improvement.

This is the result which we were actually hoping for.

Yes, HTTP/2 does make a real difference.

Conclusion – Analysis of HTTP/2 performance

Although we tried as much as possible to eliminate fluctuations, there are going to be quite a few inaccuracies in our setup. Still, there is a very clear trend: HTTP/2 is faster and is the recommended way forward. It more than makes up for the performance overhead introduced by HTTPS.

Our conclusions are therefore:

  1. HTTP/2 is faster in terms of performance and site loading time than HTTP1.x.
  2. Minification and other ways of reducing the size of the web page being served are always going to provide benefits that outweigh the overhead required to perform this “minification”.
  3. Reducing the distance between the server and the client will always improve page loading times, so using a CDN is still a necessity if you want to push the performance envelope of your site, whether you’ve enabled HTTP/2 or not.

What do you think of our results? Have you already implemented HTTP/2? Have you seen better loading times too?