performance

Musings on HTTP/2 and Bundling

HTTP/2 has been one of my areas of interest. In fact, I've written a few articles about it just in the last year. In one of those articles I made this unchecked assertion:

If the user is on HTTP/2: You'll serve more and smaller assets. You'll avoid stuff like image sprites, inlined CSS and scripts, and concatenated style sheets and scripts.

I wasn't the only one to say this, though in all fairness to Rachel, she qualifies her assertion with caveats in her article. And it's not bad advice in theory. HTTP/2's multiplexing ability gives us leeway to avoid bundling without suffering the ill effects of head-of-line blocking (something we're painfully familiar with in HTTP/1 environments). Unraveling some of these HTTP/1-specific optimizations can make development easier, too. In a time when web development seems more complicated than ever, who wouldn't appreciate a little more simplicity?


What is the Future of Front End Web Development?

I was asked to do a little session on this the other day. I'd say I'm underqualified to answer the question, as is any single person. If you really needed hard answers to this question, you'd probably look to aggregated survey data from lots of developers.

I am a little qualified though. Aside from running this site which requires me to think about front end development every day and exposes me to lots of conversations about front end development, I am an active developer myself. I work on CodePen, which is quite a hive of front end developers. I also talk about it every week on ShopTalk Show with a wide variety of guests, and I get to travel all around going to conferences largely focused on front end development.

So let me take a stab at it.


Guetzli

Guetzli, Google's new open source algorithm...

...that creates high-quality JPEG images with file sizes 35% smaller than currently available methods, enabling webmasters to create webpages that can load faster and use even less data.

I've seen this fairly widely reported, and that's great because images are the main cause of web bloat these days and fighting back with tech seems smart.

I also saw Anselm Hannemann note:

This is great, but to put things into perspective, we also have to consider that it's up to 100 times slower than Mozilla's mozJPEG encoder and, in many cases, it doesn't achieve the same quality at the same file size.

Source: Kornel Lesiński, the guy behind ImageOptim, who says Guetzli will be incorporated into its next version.

HTTP/2 – A Real-World Performance Test and Analysis

Perhaps you've heard of HTTP/2? It's not just an idea, it's a real technology, and slowly but surely, hosting companies and CDN services have been rolling it out on their servers. Much has been said about the benefits of using HTTP/2 instead of HTTP/1.x, but the proof of the pudding is in the eating.

Today we're going to run a few real-world tests, take some timings, and see what results we can extract out of all this.
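If you want to poke at this kind of thing on your own pages, the browser's Navigation Timing API will give you comparable numbers, including which protocol actually served the page. Here's a quick sketch in TypeScript (my own, not the methodology from the linked article):

    window.addEventListener("load", () => {
      // Wait a tick so loadEventEnd has been recorded before we read it.
      setTimeout(() => {
        // Navigation Timing Level 2: a single entry describing the page load itself.
        const [nav] = performance.getEntriesByType(
          "navigation"
        ) as PerformanceNavigationTiming[];
        console.log("protocol:", nav.nextHopProtocol); // e.g. "h2" vs. "http/1.1"
        console.log("TTFB:", nav.responseStart - nav.requestStart, "ms");
        console.log("DOMContentLoaded:", nav.domContentLoadedEventEnd, "ms");
        console.log("load:", nav.loadEventEnd, "ms");
      }, 0);
    });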


Optimizing GIFs for the Web

Ire Aderinokun describes a frustrating problem that we’ve probably all noticed at one point or another:

Recently, I’ve found that some of my articles that are GIF-heavy tend to get incredibly slow. The reason for this is that each frame in a GIF is stored as a GIF image, which uses a lossless compression algorithm. This means that, during compression, no information is lost in the image at all, which of course results in a larger file size.

To solve the performance problem of GIFs on the web, there are a couple of things we can do.

Switching to the <video> element seems to have the biggest impact on file size, but there are other optimization tools if you have to stick with the GIF format.
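For what it's worth, a swap like that can even be done on the client. Here's a rough sketch in TypeScript, assuming an .mp4 has been encoded alongside each .gif (that naming convention is my assumption, not something from Ire's article):

    // Replace each animated GIF with a <video> element that behaves the same way.
    document.querySelectorAll<HTMLImageElement>('img[src$=".gif"]').forEach((img) => {
      const video = document.createElement("video");
      video.src = img.src.replace(/\.gif$/, ".mp4"); // assumes a matching .mp4 exists
      video.autoplay = true;   // mimic GIF auto-play...
      video.loop = true;       // ...and looping
      video.muted = true;      // required for autoplay in most browsers
      video.playsInline = true;
      video.width = img.width;
      video.height = img.height;
      img.replaceWith(video);
    });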

Most of the web really sucks if you have a slow connection

Dan Luu on the sorry state of web performance:

...it’s not just nerds like me who care about web performance. In the U.S., AOL alone had over 2 million dialup users in 2015. Outside of the U.S., there are even more people with slow connections.

This other note is also interesting, and I think that Dan is talking about YouTube's project “Feather” here:

When I was at Google, someone told me a story about a time that “they” completed a big optimization push only to find that measured page load times increased. When they dug into the data, they found that the reason load times had increased was that they got a lot more traffic from Africa after doing the optimizations. The team’s product went from being unusable for people with slow connections to usable, which caused so many users with slow connections to start using the product that load times actually increased.

Performance Under Pressure

Here's a neat transcript of a talk by Mat Marquis where he details how he made the Bocoup website lightning fast, particularly with snazzy font loading tricks and performance tools to help monitor those improvements over time.

My favorite part of the talk, though, is when Mat goes into why he wants to make websites:

I don't get excited about frameworks or languages—I get excited about potential; about playing my part in building a more inclusive web.

I care about making something that works well for someone that has only ever known the web by way of a five-year-old Android device, because that's what they have—someone who might feel like they're being left behind by the web a little more every day. I want to build something better for them.

This browser tweak saved 60% of requests to Facebook

Ben Maurer & Nate Schloss:

The browser's reload button exists to allow the user to get an updated version of the current page. In order to meet this goal, when you reload, browsers revalidate the page that you are currently on, even if that page hasn't expired yet. However, they also go a step further and revalidate all sub-resources on the page — things like images and JavaScript files.

So even if you've set proper expires headers for resources, hitting that reload button (which people must do a ton at Facebook) still requires server round trips to revalidate assets (sometimes).

They worked with Chrome:

After fixing this, Chrome went from having 63% of its requests being conditional to 24% of them being conditional.

And Firefox:

Firefox implemented a proposal from one of our engineers to add a new cache-control header for some resources in order to tell the browser that this resource should never be revalidated.

So if you're using URLs for assets that never change (if they change, they'll be at a new URL), you'll benefit automatically in Chrome, and in Firefox you should use their new header.
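For the record, the directive Firefox added is immutable, so the full header on a fingerprinted asset ends up looking like Cache-Control: public, max-age=31536000, immutable. Here's a sketch of setting that up on a little Node/Express server (the /static path and the Express setup are just my example, not anything from the Facebook post):

    import express from "express";

    const app = express();

    // Fingerprinted assets (e.g. app.3f9a2c.js) never change at a given URL,
    // so tell the browser it never needs to revalidate them -- not even on reload.
    app.use(
      "/static",
      express.static("static", {
        maxAge: "365d",   // Cache-Control: public, max-age=31536000
        immutable: true,  // ...plus the immutable directive Firefox honors
      })
    );

    app.listen(3000);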

Understanding the Critical Rendering Path

Ire Aderinokun:

There are 6 stages to the CRP -

  1. Constructing the DOM Tree
  2. Constructing the CSSOM Tree
  3. Running JavaScript
  4. Creating the Render Tree
  5. Generating the Layout
  6. Painting

I imagine if you're really getting into performance work, you'll want a firm understanding of this. There are lots of ways to block/delay parts of this process. The job of a perf nerd is to understand when and why that's happening, evaluate if it's necessary or not, and tweak things to get to that painting step as soon as possible.
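If you want to see when that painting step actually lands on your own pages, the Paint Timing API exposes first-paint and first-contentful-paint entries you can watch for. A quick sketch in TypeScript:

    // Log paint milestones as the browser reports them.
    const paintObserver = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // entry.name is "first-paint" or "first-contentful-paint"
        console.log(`${entry.name}: ${entry.startTime.toFixed(1)} ms`);
      }
    });
    paintObserver.observe({ type: "paint", buffered: true });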

I'm curious if this is generic enough that 100% of all rendering engines work 100% the same way, or if there are significant differences.
