radEventListener: a Tale of Client-side Framework Performance

By Jeremy Wagner

React is popular, popular enough that it receives its fair share of criticism, and that criticism isn’t completely unwarranted: React and ReactDOM total about 120 KiB of minified JavaScript, which definitely contributes to slow startup time. When you rely entirely on client-side rendering in React, it churns. Even if you render components on the server and hydrate them on the client, it still churns, because component hydration is computationally expensive.

React certainly has its place when it comes to applications requiring complex state management, but in my professional experience, it doesn’t belong in most scenarios where I see it used. When even a bit of React can be a problem on devices slow and fast alike, using it is an intentional choice that effectively excludes people with low-end hardware.

If it sounds like I have a grudge against React, then I must confess that I really like its componentization model. It makes organizing code easier. I think JSX is great. Server rendering is also cool—even if that’s just how we say “send HTML over the network” these days.

Still, even though I happily use React components on the server (or Preact, as is my preference), figuring out when it’s appropriate to use it on the client is a bit challenging. What follows are my findings on React performance as I’ve tried to meet this challenge in a way that’s best for users.

Setting the scene

Lately, I’ve been chipping away at an RSS feed app side project called bylines.fyi. This app uses JavaScript on both the back and front end. I don’t think client-side frameworks are horrid things, but I’ve frequently observed two things about the client-side framework implementations I tend to run into in my day-to-day work and research:

  1. Frameworks have the potential to inhibit a deeper understanding of the thing they abstract, which is the web platform. Without knowing at least some of the lower-level APIs that frameworks rely on, we can’t know which projects benefit from a framework, and which are better off without one.
  2. Frameworks don’t always provide a clear path toward good user experiences.

You may be able to argue the validity of my first point, but the second point is becoming more difficult to refute. You might remember a little while ago when Tim Kadlec did some research on HTTP Archive data about web framework performance, and came to the conclusion that React wasn’t exactly a stellar performer.

Still, I wanted to see if it was possible to use what I thought was best about React on the server while mitigating its ill effects on the client. To me, it makes sense to simultaneously want to use a framework to help to organize my code, but also restrict that framework’s negative impact on the user experience. That required a little experimentation to see what approach would be best for my app.

The experiment

I make sure to render every component I use on the server because I believe that the burden of providing markup should be assumed by the web app’s server, not the user’s device. However, I needed some JavaScript in my RSS feed app in order to get a toggleable mobile nav to work.

The mobile nav toggle functionality. At left, the mobile nav is in the closed state. On the right, it’s in the open state, which overlays the entire screen with the navigation.

This scenario aptly describes what I refer to as simple state. In my experience, a prime example of simple state is a linear A-to-B interaction. We toggle a thing on, and then we toggle it off. Stateful, but simple.

Unfortunately, I often see stateful React components used to manage simple state, which is a trade-off that’s problematic for performance. That may sound vague for now, but it will become clearer as you read on. That said, it’s important to emphasize that this is a trivial example, but it’s also a canary. Most developers—I hope—aren’t going to rely solely on React to drive such simple behavior for just one thing on their website. So it’s vital to understand that the results you’re about to see are meant to inform how you architect your applications, and how the effects of your framework choices can scale when it comes to runtime performance.

The conditions

My RSS feed app is still in development. It contains no third party code, which makes for easy testing in a quiet environment. The experiment I conducted compared the mobile nav toggle behavior across three implementations:

  1. A stateful React component (React.Component) rendered on the server and hydrated on the client.
  2. A stateful Preact component, also server-rendered and hydrated on the client.
  3. A server-rendered stateless Preact component which was not hydrated. Instead, regular ol’ event listeners provide the mobile nav functionality on the client.
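For reference, the stateful implementation looked something like the sketch below. This is a simplified version rather than the exact component from my app (the class names and markup are illustrative), but it captures the shape of the approach: the open/closed state lives in the component, and the framework re-renders the nav whenever that state changes.

```jsx
// A simplified sketch of the stateful mobile nav (illustrative class names).
import { Component } from "react";

class MobileNav extends Component {
  constructor(props) {
    super(props);
    this.state = { isOpen: false };
    this.toggle = this.toggle.bind(this);
  }

  toggle() {
    this.setState(({ isOpen }) => ({ isOpen: !isOpen }));
  }

  render() {
    const { isOpen } = this.state;

    return (
      <nav className={isOpen ? "mobile-nav mobile-nav--open" : "mobile-nav"}>
        <button type="button" aria-expanded={isOpen} onClick={this.toggle}>
          Menu
        </button>
        {isOpen && <ul className="mobile-nav__list">{/* nav links */}</ul>}
      </nav>
    );
  }
}

export default MobileNav;
```

The Preact version is nearly identical (its Component import just comes from preact). In the third scenario, the same markup is still produced by a stateless Preact component on the server, but nothing is hydrated; a handful of plain event listeners, shown later in this article, handle the toggling on the client.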

Each of these scenarios was measured across four distinct environments:

  1. A Nokia 2 Android phone on Chrome 83.
  2. An ASUS X550CC laptop from 2013 running Windows 10 on Chrome 83.
  3. An old first generation iPhone SE on Safari 13.
  4. A new second generation iPhone SE, also on Safari 13.

I believe this range of hardware will be illustrative of performance across a broad spectrum of device capabilities, even if it’s slightly heavy on the Apple side.

What was measured

I wanted to measure four things for each implementation in each environment:

  1. Startup time. For React and Preact, this included the time it took to load the framework code as well as hydrating the component on the client. For the event listener scenario, this included only the event listener code itself.
  2. Hydration time. For the React and Preact scenarios, this is a subset of the startup time. Because of issues with remote debugging crashing in Safari on macOS, I couldn’t measure hydration time alone on iOS devices. Event listener implementations incurred zero hydration cost.
  3. Mobile nav open time. This gives us insight into how much overhead frameworks introduce in their abstraction of event handlers, and how that compares to the frameworkless approach.
  4. Mobile nav close time. As it turned out, this was quite a bit less than the cost of opening the menu. I ultimately decided not to include those numbers in this article.

It should be noted that measurements of these behaviors include scripting time only. Any layout, paint, and compositing costs would be in addition to and outside of these measurements. One should take care to remember that those activities compete for main thread time in tandem with scripts that trigger them.
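For concreteness, hydrating the component boils down to a call like the one below. To be clear, the figures in the tables that follow come straight from the profiler and count CPU scripting time only; wrapping the hydrate call in User Timing marks, as in this sketch, measures wall-clock duration instead, but it can be a handy sanity check on what the profiler tells you. The component import and element ID are illustrative.

```jsx
// Illustrative only: the figures in this article were read from the DevTools
// profiler, not from User Timing entries.
import { hydrate } from "react-dom";
import MobileNav from "./components/MobileNav";

performance.mark("nav-hydration-start");

// Attach the framework to the server-rendered markup already in the document.
hydrate(<MobileNav />, document.getElementById("site-nav"));

performance.mark("nav-hydration-end");
performance.measure("nav-hydration", "nav-hydration-start", "nav-hydration-end");

// Note that this is wall-clock time, which will be a bit larger than pure
// scripting time.
const [entry] = performance.getEntriesByName("nav-hydration");
console.log(`Hydrating the nav took ${entry.duration.toFixed(2)} ms`);
```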

The procedure

To test each of the three mobile nav implementations on each device, I followed this procedure:

  1. I used remote debugging in Chrome on macOS for the Nokia 2. For iPhones, I used Safari’s equivalent of remote debugging.
  2. On each device, I accessed the same page of the RSS feed app, served from my local network, where the mobile nav toggling code could be run. Because of this, network performance was not a factor in my measurements.
  3. Without CPU or network throttling applied, I began recording in the profiler, and reloaded the page.
  4. After page load, I opened the mobile nav and then closed it.
  5. I stopped the profiler, and recorded how much CPU time was involved in each of the four behaviors listed earlier.
  6. I cleared the performance timeline. In Chrome, I also clicked the garbage collection button to free up any memory that may have been tied up by my app’s code from a previous session recording.

I repeated this procedure ten times for each scenario on each device. Ten iterations seemed to be just enough to see a few outliers while getting a reasonably accurate picture, but I’ll let you decide as we go over the results. If you don’t want a play-by-play of my findings, you can view the results in this spreadsheet, along with the mobile nav code for each implementation, and draw your own conclusions.

The results

I initially wanted to present this information in a graph, but because of the complexity of what I was measuring, I wasn’t certain how to present the results without cluttering the visualization. Therefore, I’ll present the minimum, maximum, median, and average CPU times in a series of tables, all of which effectively illustrate the range of outcomes I encountered in each test.

Google Chrome on Nokia 2

The Nokia 2 is a low-cost Android device with an ARM Cortex-A7 processor. It is not a powerhouse, but rather a cheap and easily obtainable device. Android usage worldwide is currently around 40%, and though Android device specs vary greatly from one device to the next, low-end Android devices are not rare. This is a problem we must recognize as being one of both wealth and proximity to fast network infrastructure.

Let’s see what the numbers look like for startup cost.

Startup time
| Time (ms) | React Component | Preact Component | addEventListener Code |
|-----------|-----------------|------------------|------------------------|
| Min       | 137.21          | 31.23            | 4.69                   |
| Median    | 147.76          | 42.06            | 5.99                   |
| Avg       | 162.73          | 43.16            | 6.81                   |
| Max       | 280.81          | 62.03            | 12.06                  |

I believe it says something that it takes, on average, over 160 ms to parse and compile React, and hydrate one component. To remind you, startup cost in this case includes the time it takes for the browser to evaluate the scripts needed for the mobile nav to work. For React and Preact, it also includes hydration time, which in both cases can contribute to the uncanny valley effect we sometimes experience during startup.

Preact fares much better, taking around 73% less time than React, which makes sense considering how tiny Preact is at 10 KiB sans compression. Still, it’s important to note that the frame budget in Chrome is about 10 ms to avoid jank at 60 fps. Janky startup is as bad as janky anything else, and is a factor when calculating First Input Delay. All things considered, though, Preact performs relatively well.

As for the addEventListener implementation, it turns out that parse and compile time for a tiny script with no overhead is unsurprisingly very low. Even at the sampled maximum time of 12 ms, you’re barely in the outer ring of the Janksburg Metropolitan Area. Now let’s have a look at hydration cost alone.

Hydration time
| Time (ms) | React Component | Preact Component |
|-----------|-----------------|------------------|
| Min       | 67.04           | 19.17            |
| Median    | 70.33           | 26.91            |
| Avg       | 74.87           | 26.77            |
| Max       | 117.86          | 44.62            |

For React, this is still in the vicinity of Yikes Peak. Sure, a median hydration time of 70 ms for one component isn’t a big deal, but think about how hydration cost scales when you have a bunch of components on the same page. It’s no surprise that the React websites I test on this device feel more like endurance trials than user experiences.

Preact’s hydration times are quite a bit less, which makes sense because Preact’s documentation for its hydrate method states that it “skips most diffing while still attaching event listeners and setting up your component tree.” Hydration time for the addEventListener scenario isn’t reported, because hydration isn’t a thing outside of VDOM frameworks. Next, let’s take a peek at the time it takes to open the mobile nav.

Mobile nav open time
| Time (ms) | React Component | Preact Component | addEventListener Code |
|-----------|-----------------|------------------|------------------------|
| Min       | 30.89           | 11.94            | 3.94                   |
| Median    | 43.62           | 14.29            | 6.14                   |
| Avg       | 43.16           | 14.66            | 6.12                   |
| Max       | 53.19           | 20.46            | 8.60                   |

I find these figures a bit surprising, because React commands almost seven times as much CPU time to execute an event listener callback as a bare event listener you could register yourself. This makes sense, as React’s state management logic is necessary overhead, but one has to wonder if it’s worth it for simplistic, linear interactions.

On the other hand, Preact manages to limit its overhead on event listeners to the point where it takes “only” twice as much CPU time to run an event listener callback.

CPU time involved in closing the mobile nav was quite a bit less at an average approximate time of 16.5 ms for React, with Preact and bare event listeners coming in at around 11 ms and 6 ms, respectively. I’d post the full table for the measurements on closing the mobile nav, but we have a lot left to sift through yet. Besides, you can check out those figures yourself in the spreadsheet I referred to earlier on.

A quick note on JavaScript samples

Before moving on to the iOS results, one potential sticking point I want to address is the impact of disabling JavaScript samples in Chrome DevTools when recording sessions on remote devices. After compiling my initial results, I wondered if the overhead of capturing entire call stacks was skewing my results, so I re-tested the React scenario with samples disabled. As it turned out, this setting had no significant impact on the results.

Additionally, because the call stacks were truncated, I was unable to measure component hydration time. Average startup cost with samples disabled vs. samples enabled was 160.74 ms and 162.73 ms, respectively. The respective median figures were 157.81 ms and 147.76 ms. I would consider this squarely “in the noise.”

Safari on 1st Generation iPhone SE

The original iPhone SE is a great phone. Despite its age, it still enjoys devoted ownership owing to its more comfortable physical size. It shipped with the Apple A9 processor, which is still a solid contender. Let’s see how it did on startup time.

Startup time
| Time (ms) | React Component | Preact Component | addEventListener Code |
|-----------|-----------------|------------------|------------------------|
| Min       | 32.06           | 7.63             | 0.81                   |
| Median    | 35.60           | 9.42             | 1.02                   |
| Avg       | 35.76           | 10.15            | 1.07                   |
| Max       | 39.18           | 16.94            | 1.56                   |

This is a big improvement from the Nokia 2, and it’s illustrative of the gulf between low-end Android devices and even older Apple devices with significant mileage.

React performance still isn’t great, but Preact gets us within a typical frame budget for Chrome. Event listeners alone, of course, are blazingly fast, leaving plenty of room in the frame budget for other activity.

Unfortunately, I couldn’t measure hydration times on the iPhone, as the remote debugging session would crash every time I would traverse the call stack in Safari’s DevTools. Considering that hydration time was a subset of the overall startup cost, you can expect that it probably accounts for at least half of the startup time if results from the Nokia 2 trials are any indicator.

Mobile nav open time
| Time (ms) | React Component | Preact Component | addEventListener Code |
|-----------|-----------------|------------------|------------------------|
| Min       | 16.91           | 5.45             | 0.48                   |
| Median    | 21.11           | 8.62             | 0.50                   |
| Avg       | 21.09           | 11.07            | 0.56                   |
| Max       | 24.20           | 19.79            | 1.00                   |

React does alright here, but Preact seems to handle event listeners a bit more efficiently. Bare event listeners are lightning fast, even on this old iPhone.

Safari on 2nd Generation iPhone SE

In mid-2020, I picked up the new iPhone SE. It has the same physical size as an iPhone 8 and similar phones, but the processor is the same Apple A13 used in the iPhone 11. It is very fast for its relatively low $400 USD retail price. Given such a beefy processor, how does it deal?

Startup time
| Time (ms) | React Component | Preact Component | addEventListener Code |
|-----------|-----------------|------------------|------------------------|
| Min       | 20.26           | 5.19             | 0.53                   |
| Median    | 22.20           | 6.48             | 0.69                   |
| Avg       | 22.02           | 6.36             | 0.68                   |
| Max       | 23.67           | 7.18             | 0.88                   |

I guess at some point there are diminishing returns when it comes to the relatively small workload of loading a single framework and hydrating one component. Things are a little faster on a 2nd generation iPhone SE than its first generation variant in some cases, but not terribly so. I’d imagine that this phone would tackle larger and sustained workloads better than its predecessor.

Mobile nav open time
| Time (ms) | React Component | Preact Component | addEventListener Code |
|-----------|-----------------|------------------|------------------------|
| Min       | 13.15           | 12.06            | 0.49                   |
| Median    | 16.41           | 12.57            | 0.53                   |
| Avg       | 16.11           | 12.63            | 0.56                   |
| Max       | 17.51           | 13.26            | 0.78                   |

Slightly better React performance here, but not much else. Strangely, Preact seems to take longer on average to open the mobile nav on this device than its first generation counterpart, but I’ll chalk that up to outliers skewing a relatively small dataset. I certainly would not assume the first generation iPhone SE is a faster device based on this.

Chrome on a dated Windows 10 Laptop

Admittedly, these were the results I was most excited to see: how does an ASUS laptop from 2013 with Windows 10 and an Ivy Bridge i5 of the day handle this stuff?

Startup time
| Time (ms) | React Component | Preact Component | addEventListener Code |
|-----------|-----------------|------------------|------------------------|
| Min       | 43.15           | 13.11            | 1.81                   |
| Median    | 45.95           | 14.54            | 2.03                   |
| Avg       | 45.92           | 14.47            | 2.39                   |
| Max       | 48.98           | 16.49            | 3.61                   |

The numbers aren’t bad when you consider that the device is seven years old. The Ivy Bridge i5 was a good processor in its day, and when you couple that with the fact that it’s actively cooled (rather than passively cooled as mobile device processors are), it probably doesn’t run into thermal throttling scenarios as often as mobile devices.

Hydration time
| Time (ms) | React Component | Preact Component |
|-----------|-----------------|------------------|
| Min       | 17.75           | 7.64             |
| Median    | 23.55           | 8.73             |
| Avg       | 23.12           | 8.72             |
| Max       | 26.25           | 9.55             |

Preact does well here, staying within Chrome’s frame budget and running almost three times faster than React. Things could look quite a bit different if you’re hydrating ten components on the page at startup time, possibly even in Preact.

Mobile nav open time
| Time (ms) | React Component | Preact Component | addEventListener Code |
|-----------|-----------------|------------------|------------------------|
| Min       | 6.06            | 2.50             | 0.88                   |
| Median    | 10.43           | 3.09             | 0.97                   |
| Avg       | 11.24           | 3.21             | 1.02                   |
| Max       | 14.44           | 4.34             | 1.49                   |

When it comes to this isolated interaction, we see performance that’s similar to high-end mobile devices. It’s encouraging to see such an old laptop still keep up reasonably well. That said, this laptop’s fan spins up often when browsing the web, so active cooling is probably this device’s saving grace. If this device’s i5 was passively cooled, I suspect its performance might drop.

Shallow call stacks for the win

It’s no mystery why it takes React and Preact longer to start up than a solution that eschews frameworks altogether. Less work equals less processing time.

While I think startup time is crucial, it’s probably inevitable that you’ll trade some amount of speed for a better developer experience. Though I’d strenuously argue that we trade away user experience in favor of developer experience far too often.

The dragons also lie in what we do after the framework loads. Client-side hydration is something that I think is far too often abused, and can sometimes be completely unnecessary. Every time you hydrate a component in React, this is what you’re throwing at the main thread:

A React stateful component hydration call stack captured in Chrome DevTools.

Recall that on the Nokia 2, the minimum time I measured for hydrating the mobile nav component was about 67 ms. Preact—for which you’ll see the hydration call stack below—takes about 20 ms.

A Preact stateful component hydration call stack captured in Chrome DevTools.

These two call stacks aren’t to the same scale, but Preact’s hydration logic is simplified, probably because “most diffing is skipped” as Preact’s documentation states. There’s quite a bit less going on here. When you get closer to the metal by using addEventListener instead of a framework, you can get even faster.

A call stack of event listeners attaching to DOM elements.

Not every situation calls for this approach, but you’d be surprised at what you can accomplish when your tools are addEventListener, querySelector, classList, setAttribute/getAttribute, and so on.
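To make that concrete, here is roughly what the frameworkless version of the mobile nav toggle looks like. The selectors and class names are illustrative rather than the exact ones from my app:

```js
// The entire client-side cost of the interaction: find two elements, listen
// for clicks, toggle a class, and keep the ARIA state in sync.
const navToggle = document.querySelector(".nav-toggle");
const nav = document.querySelector(".mobile-nav");

navToggle.addEventListener("click", () => {
  const isOpen = nav.classList.toggle("mobile-nav--open");
  navToggle.setAttribute("aria-expanded", isOpen ? "true" : "false");
});
```

There’s no diffing and no state management layer here, and no framework code has to be parsed and compiled before the listener can be attached.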

These methods—and many more like them—are what frameworks themselves rely on. The trick is to evaluate what functionality you can safely deliver outside of what the framework provides, and rely on the framework when it makes sense.

A call stack of React firing a click event handler to open a mobile nav.

If this were a call stack for, say, making a request for API data on the client and managing the complex state of the UI in that situation, I’d find this cost more acceptable. Yet, it’s not. We’re just making a nav appear on the screen when the user taps a button. It’s like using a bulldozer when a shovel would be a better fit for the job.

Preact at least strikes the middle ground:

A call stack of Preact firing a click event handler to open a mobile nav.

Preact takes about a third of the time to do the same work React does, but on that budget device, it exceeds the frame budget often. This means opening that nav on some devices will animate sluggishly because the layout and paint work may not have enough time to finish without entering long task territory.

A call stack of a bare event listener opening the mobile nav.

In this case, an event listener is what I needed. It gets the job done seven times faster on that budget device than React.

Conclusion

This is not a React hit piece, but rather a plea for consideration of how we do our work. Some of these performance pitfalls can be avoided if we take care to evaluate what tools make sense for the job, even for apps with a great deal of complex interactivity. To be fair to React, these pitfalls likely exist in many VDOM frameworks, because the nature of them adds necessary overhead to manage all sorts of things for us.

Even if you’re working on something that doesn’t call for React or Preact, but you want to take advantage of componentization, consider keeping it all on the server to start with. This approach means you can decide if and when it’s appropriate to extend functionality to the client—and how you’ll do that.
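As a rough sketch of that starting point (assuming an Express-style Node server, which is illustrative rather than a description of my actual setup), the component renders to an HTML string on the server, and no component code ships to the client at all:

```js
// Server-only componentization: the component exists solely to produce HTML.
import express from "express";
import { h } from "preact";
import renderToString from "preact-render-to-string";
import MobileNav from "./components/MobileNav.js";

const app = express();

app.get("/", (req, res) => {
  // With a JSX build configured for Preact, h(MobileNav, null) is just <MobileNav />.
  const navMarkup = renderToString(h(MobileNav, null));

  res.send(`<!doctype html>
<html>
  <body>
    ${navMarkup}
    <!-- page content -->
  </body>
</html>`);
});

app.listen(3000);
```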

In the case of my RSS feed app, I can manage this by putting lightweight event listener code in the entry point for that page of the app, and using an asset manifest to put the minimal amount of script necessary in order for each page to work.

Now let’s suppose that you have an app that truly needs what React provides. You have complex interactivity with lots of state. Here are some things you can do to try and get things going a bit faster.

  1. Check all of your stateful components—that is, any component which extends React.Component—and see if they can be refactored as stateless components. If a component doesn’t use lifecycle methods or state, you can refactor it to be stateless.
  2. Then, if possible, avoid sending JavaScript to the client for those stateless components, as well as hydrating them. If a component is stateless, only render it on the server. Prerender components when possible to minimize server response time, because server rendering has its own performance pitfalls.
  3. If you have a stateful component with simple interactivity, consider prerendering/server-rendering that component, and replace its interactivity with framework-independent event listeners. This avoids hydration entirely, and user interactions won’t have to filter through the framework’s state management logic.
  4. If you must hydrate stateful components on the client, consider lazily hydrating components that aren’t near the top of the page. An Intersection Observer that triggers a callback works very well for this (there’s a rough sketch after this list), and will give more main thread time to critical components on the page.
  5. For lazily-hydrated components, assess whether you can schedule their hydration during main thread idle time with requestIdleCallback.
  6. If possible, consider switching from React to Preact. Given how much faster it runs than React on the client, it’s worth having the discussion with your team to see if this is possible. The latest version of Preact is nearly 1:1 with React for most things, and preact/compat does a great job of easing this transition. I don’t think Preact is a panacea for performance, but it gets you closer to where you need to be.
  7. Consider adapting your experience to users with low device memory. navigator.deviceMemory (available in Chrome and derived browsers) enables you to change the user experience for users on devices with little memory. If someone has such a device, it’s probable that its processor isn’t so fast either.
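To make items 4, 5, and 7 a bit more concrete, here’s a rough sketch that combines them. The component, element ID, and memory threshold are all hypothetical:

```jsx
// Hypothetical example: lazily hydrate a below-the-fold component only once it
// nears the viewport and the main thread is idle, and skip the optional work
// entirely on low-memory devices.
import { hydrate } from "react-dom";
import HeavyWidget from "./components/HeavyWidget";

const target = document.getElementById("heavy-widget-root");

// navigator.deviceMemory is only available in Chrome and derived browsers.
const isLowMemoryDevice =
  "deviceMemory" in navigator && navigator.deviceMemory < 4;

// Fall back to a timeout in browsers without requestIdleCallback.
const whenIdle = (callback) =>
  "requestIdleCallback" in window
    ? window.requestIdleCallback(callback)
    : setTimeout(callback, 200);

if (target && !isLowMemoryDevice) {
  const observer = new IntersectionObserver((entries, obs) => {
    if (entries.some((entry) => entry.isIntersecting)) {
      obs.disconnect();

      // Defer the actual hydration until the browser reports idle time.
      whenIdle(() => hydrate(<HeavyWidget />, target));
    }
  });

  observer.observe(target);
}
```

Critical, above-the-fold components still hydrate eagerly; this just keeps less important work from competing with them for main thread time.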

Whatever you decide to do with this information, the thrust of my argument is this: if you use React or any VDOM library, you should spend some time investigating its impact on an array of devices. Get a cheap Android device and see how your app feels to use. Contrast that experience with your high-end devices.

Most of all, don’t follow “best practices” if the result is that your app effectively excludes a part of your audience that can’t afford high-end devices. Keep pushing for everything to be faster. If our daily work is any indication, this is an endeavor that will keep you busy for some time to come, but that’s OK. Making the web faster makes the web more accessible in more places. Making the web more accessible makes the web more inclusive. That’s the really good work we should all be trying our best to do.


I’d like to express my gratitude to Eric Bailey for his editorial feedback on this piece, as well as the CSS-Tricks staff for their willingness to publish it.