There are so many static site generators (SSGs). It’s overwhelming trying to decide where to start. While an abundance of helpful articles can help you wade through the (popular) options, they don’t magically make the decision easy.
I’ve been on a quest to help make that decision easier. A colleague of mine built a static site generator evaluation cheatsheet. It provides a really nice snapshot across numerous popular SSG choices. What’s missing is how they actually perform in action.

One feature every static site generator has in common is that it takes input data, runs it through a templating engine, and outputs HTML files. We typically refer to this process as The Build.
There’s too much nuance, context, and variability in how various SSGs perform during the build process to capture in a spreadsheet — and thus begins our test to benchmark build times across popular static site generators.
This isn’t just to determine which SSG is fastest. Hugo already has that reputation. I mean, they say it on their website — The world’s fastest framework for building websites — so it must be true!
This is an in-depth comparison of build times across multiple popular SSGs and, more importantly, an analysis of why those build times look the way they do. Blindly choosing the fastest or discrediting the slowest would be a mistake. Let’s find out why.
The tests
The testing process is designed to start simple — with just a few popular SSGs and a simple data format. A foundation on which to expand to more SSGs and more nuanced data. For today, the test includes six popular SSG choices: Eleventy, Gatsby, Hugo, Jekyll, Next, and Nuxt.
Each test used the following approach and conditions:
- The data source for each build is a set of Markdown files, each with a randomly-generated title (as frontmatter) and a body containing three paragraphs of content (see the sketch after this list).
- The content contains no images.
- Tests are run in series on a single machine, making the actual values less relevant than the relative comparison among the lot.
- The output is plain text on an HTML page, run through the default starter, following each SSG’s respective guide on getting started.
- Each test is a cold run. Caches are cleared and Markdown files are regenerated for every test.
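To make those conditions concrete, here is a minimal sketch of how test fixtures like these could be generated, assuming a Node/TypeScript harness. The function names and file layout are illustrative and not taken from the actual project code.

```typescript
import { mkdirSync, writeFileSync, rmSync } from "node:fs";
import { join } from "node:path";
import { randomBytes } from "node:crypto";

// One Markdown file: a random title in frontmatter plus three paragraphs of body text.
function markdownFile(): string {
  const title = randomBytes(8).toString("hex");
  const paragraph = () => `Lorem ipsum ${randomBytes(32).toString("hex")}.`;
  return `---\ntitle: "${title}"\n---\n\n${paragraph()}\n\n${paragraph()}\n\n${paragraph()}\n`;
}

// Regenerate the content directory from scratch before every run, so each build stays cold.
export function generateFixtures(dir: string, count: number): void {
  rmSync(dir, { recursive: true, force: true });
  mkdirSync(dir, { recursive: true });
  for (let i = 0; i < count; i++) {
    writeFileSync(join(dir, `post-${i}.md`), markdownFile());
  }
}
```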
These tests are considered benchmark tests. They use basic Markdown files and output unstyled HTML.
In other words, the output is technically a website that could be deployed to production, though it’s not really a real-world scenario. Instead, this provides a baseline comparison among these frameworks. The choices you make as a developer using one of these frameworks will adjust the build times in various ways (usually by slowing it down).
For example, one way in which this doesn’t represent the real world is that we’re testing cold builds. In the real world, if you have 10,000 Markdown files as your data source and are using Gatsby, you’re going to make use of Gatsby’s cache, which will greatly reduce the build times (by as much as half).
The same can be said for incremental builds, which are related to warm versus cold runs in that they only rebuild the files that changed. We’re not testing the incremental approach in these tests (at this time).
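For reference, here is a minimal sketch of how a single cold build could be timed from a Node/TypeScript harness. The `timeColdBuild` helper and the directories passed to it are assumptions for illustration; only the build commands themselves (e.g. `npx @11ty/eleventy`, `hugo`) are real CLI invocations.

```typescript
import { execSync } from "node:child_process";
import { rmSync } from "node:fs";

// Time one cold build: wipe output/cache directories, run the SSG's build
// command to completion, and return the elapsed time in milliseconds.
export function timeColdBuild(buildCommand: string, cleanDirs: string[]): number {
  for (const dir of cleanDirs) {
    rmSync(dir, { recursive: true, force: true }); // ensure nothing from a previous run is reused
  }

  const start = process.hrtime.bigint();
  execSync(buildCommand, { stdio: "ignore" });
  const end = process.hrtime.bigint();

  return Number(end - start) / 1e6;
}

// Example (hypothetical paths): timeColdBuild("npx @11ty/eleventy", ["_site", ".cache"]);
```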
The two tiers of static site generators
Before getting into the hypothesis and results, let’s first consider that there are really two tiers of static site generators. Let’s call them basic and advanced.
- Basic generators (which are not basic under the hood) are essentially a command-line interface (CLI) that takes in data and outputs HTML, and can often be extended to process assets (which we’re not doing here).
- Advanced generators offer something in addition to outputting a static site, such as server-side rendering, serverless functions, and framework integration. They tend to be configured to be more dynamic right out of the box.
I intentionally chose three of each type of generator in this test. Falling into the basic bucket would be Eleventy, Hugo, and Jekyll. The other three are based on a front-end framework and ship with various amounts of tooling. Gatsby and Next are built on React, while Nuxt is built atop Vue.
| Basic generators | Advanced generators |
|---|---|
| Eleventy | Gatsby |
| Hugo | Next |
| Jekyll | Nuxt |
My hypothesis
Let’s apply the scientific method to this approach because science is fun (and useful)!
My hypothesis is that if an SSG is advanced, then it will build more slowly than a basic SSG. I believe the results will reflect that because advanced SSGs have more overhead than basic SSGs. Thus, it’s likely that we’re going to see both groups of generators — basic and advanced — bundled together in the results, with basic generators moving significantly quicker.
Let me expand on that hypothesis a bit.
Linear(ish) and fast
Hugo and Eleventy will fly with smaller datasets. They are (relatively) simple processes in Go and Node.js, respectively, and their build times will reflect that. While both SSGs will slow down as the number of files grows, I expect them to remain at the top of the class, though Eleventy may be a little less linear at scale, simply because Go tends to be more performant than Node.
Slow, then fast, but still slow
The advanced, or framework-bound, SSGs will appear slow from the start. I suspect a single-file test will show a significant difference — milliseconds for the basic ones, compared to several seconds for Gatsby, Next, and Nuxt.
The framework-based SSGs are each built using webpack, which brings a significant amount of overhead along with it, regardless of the amount of content being processed. That’s the baggage we sign up for in using those tools (more on this later).
But, as we add thousands of files, I suspect we’ll see the gap between the buckets close, though the advanced SSG group will stay farther behind by some significant amount.
In the advanced SSG group, I expect Gatsby to be the fastest, only because it doesn’t have a server-side component to worry about — but that’s just a gut feeling. Next and Nuxt may have optimized this to the point where, if we’re not using that feature, it won’t affect build times. And I suspect Nuxt will beat out Next, only because there is a little less overhead with Vue, compared to React.
Jekyll: The odd child
Ruby is infamously slow. It’s gotten more performant over time, but I don’t expect it to scale with Node, and certainly not with Go. And yet, at the same time, it doesn’t have the baggage of a framework.
At first, I think we’ll see Jekyll as pretty speedy, perhaps even indistinguishable from Eleventy. But as we get to the thousands of files, the performance will take a hit. My gut feeling is that there may exist a point at which Jekyll becomes the slowest of all six. We’ll push up to the 100,000 mark to see for sure.

The results are in!
The code that powers these tests is on GitHub. There’s also a site that shows the relative results.
After many iterations of building out a foundation on which these tests could be run, I ended up with a series of 10 runs across three different datasets:
- Base: A single file, to compare the base build times
- Small sites: From 1 to 1,024 files, doubling the file count each time (to make it easier to determine whether the SSGs scaled linearly)
- Large sites: From 1,000 to 64,000 files, doubling on each run. I originally wanted to go up to 128,000 files, but hit some bottlenecks with a few of the frameworks. 64,000 ended up being enough to produce an idea of how the players would scale with even larger sites. (The doubling scheme is sketched below.)
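As a quick illustration of the doubling scheme described above, here is a small sketch (in TypeScript, with illustrative variable names) that produces the file counts used in each tier:

```typescript
// The three dataset tiers: a single-file base case, small sites doubling from
// 1 to 1,024 files, and large sites doubling from 1,000 to 64,000 files.
const base = [1];

const smallSites: number[] = [];
for (let n = 1; n <= 1024; n *= 2) smallSites.push(n); // 1, 2, 4, ..., 1024

const largeSites: number[] = [];
for (let n = 1000; n <= 64000; n *= 2) largeSites.push(n); // 1,000, 2,000, ..., 64,000

console.log({ base, smallSites, largeSites });
```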
Summarizing the results
A few results were surprising to me, while others were expected. Here are the high-level points:
- As expected, Hugo was the fastest, regardless of size. What I didn’t expect is that it wasn’t even close to any other generator, even at base builds.
- The basic and advanced groups of SSGs are quite obvious when looking at the results for small sites. That was expected, but it was surprising to see Next and Eleventy getting close at 64,000 files. Also surprising is that Jekyll performed faster than Eleventy for every run.
- I figured Gatsby would be the fastest among the advanced frameworks, and suspected it would be the one to get closest to the basic group. But Gatsby turned out to be the slowest, producing the most dramatic curve.
- While it wasn’t specifically mentioned in the hypothesis, the scale of differences was larger than I would have imagined. At one file, Hugo was approximately 250 times faster than Gatsby. But at 64,000 files, it was closer — about 40 times faster. That means that, while Hugo remains the fastest (significantly), its times are closer to the other generators as the size of sites increases.
What does it all mean?
When I shared my results with the creators and maintainers of these SSGs, I generally received the same message. To paraphrase:
The generators that take more time to build do so because they are doing more. They are bringing more to the table for developers to work with, whereas the faster sites (i.e. the “basic” tools) focus their efforts largely in converting templates into HTML files.
I agree.
To sum it up: Scaling Jamstack sites is hard.
The challenges that will present themselves to you, Developer, as you scale a site will vary depending on the site you’re trying to build. That data isn’t captured here because it can’t be — every project is unique in some way.
What it really comes down to is your level of tolerance for waiting in exchange for developer experience.
For example, if you’re going to build a large, image-heavy site with Gatsby, you’re going to pay for it with build times, but you’re also given an immense network of plugins and a foundation on which to build a solid, organized, component-based website. Do the same with Jekyll, and it’s going to take a lot more effort to stay organized and efficient throughout the process, though your builds may run faster.
At work, I typically build sites with Gatsby (or Next, depending on the level of dynamic interactivity required). We’ve worked with the Gatsby framework to build a core on which we can rapidly build highly-customized, image-rich websites, packed with an abundance of components. Our builds become slower as the sites scale, but that’s when we get creative by implementing micro front-ends, offloading image processing, implementing content previews, along with many other optimizations.
On the side, I tend to prefer working with Eleventy. It’s usually just me writing code, and my needs are much simpler. (I like to think of myself as a good client for myself.) I feel I have more control over the output files, which makes it easier for me to get 💯s on client-side performance, and that’s important to me.
In the end, this isn’t only about what is fast or slow. It’s about what works best for you and how long you’re willing to wait.
Wrapping up
This is just the beginning! The goal of this effort was to create a foundation on which we can, together, benchmark relative build times across popular static site generators.
What ideas do you have? What holes can you poke in the process? What can we do to tighten up these tests? How can we make them more like real-world scenarios? Should we offload the processing to a dedicated machine?
These are the questions I’d love for you to help me answer. Let’s talk about it.
This is very useful, thank you! I think it would be even more useful if the charts had a labelled y-axis, though. Why exclude the labels?
The build times can vary dramatically depending on the machine on which they are run. For this article I wanted to focus more on the relative comparison, rather than getting hung up on the actual values, especially considering these aren’t really real-world (i.e. production-ready) scenarios.
That said, I’ve had a lot of requests for the actual values, so I’ve added tooltips to the current set of results and also included machine specs for the box on which the tests were run.
I get it, Gridsome isn’t anywhere near as popular as Gatsby, but it would be interesting to see where it places on the chart.
Yes, Gridsome is awesome and can do pretty much the same as Gatsby.
I’ve heard great things about Gridsome! Unfortunately, I had to stop somewhere with this initial set. The goal here was to provide low-barrier means for anyone to add new SSGs to the lot. I’ve made a few changes recently that should make this process easier. Here are the updated instructions.
I’d have loved to have seen Zola (FKA Gutenberg): https://www.getzola.org/
Built with Rust, I think would’ve been a good competitor.
Would you be interested in putting the test case together? (Here are the instructions.)
Try Zola. You will be surprised by its elegant design and speed.
It would be interesting to map this to monthly costs for CI/CD. Hugo saves time and money, throw some Alpine and/or Svelte into it and all the dynamic components slide into place too.
That’s a really interesting idea, and could be a fun follow-up article. (Adding it to my list!)
I can already say it’s been a bear working through a process in which I could run these tests consistently at a low cost. I still don’t have a great solution.
In addition to labeling the y-axis, it should also be shown logarithmically, since the x-axis is logarithmic.
The logarithmic note was SUPER helpful. Thank you! I knew there was something that felt off about the shape of the curves, but I couldn’t put my finger on it.
I’ve made that adjustment in the code and it’s resulted in a different-looking picture. I’ve updated the snapshots here and adjusted a couple conclusions as a result.
I’ve intentionally left off the labels on the Y-axis because they would be so specific to the machine on which the tests were run. But I have added tooltips back to the interactive charts, along with machine specs.
Why are the actual build times not shown? There are no milliseconds on the build time axis.
I’m really not sure how helpful such a stripped-down build is. I’m still sticking with Hugo, largely because I got fed up keeping several thousand Node.js packages up to date (or not, when an update breaks the build).
Thanks for running these tests! I’m wondering if you could compare Svelte’s Sapper to the others?
While I’d love to see all the SSGs in the list, I had to stop somewhere. However, I’ve made this project open-source so that anyone can put a new SSG in there and compare it to the rest. I’ve updated the instructions on adding a new test scenario to the README to make it a little easier of a process.
Based on https://github.com/seancdavis/ssg-build-performance-tests/blob/main/src/results.json, I would think that the unit of time on the charts is seconds, and that Gatsby took 100+ seconds for the 64k files.
Thanks for the write-up, Sean. As you mention, one reason the advanced SSGs have a longer build time is that they’re first generating the application that will then render the pages.
Nuxt.js recently released “faster re-deploys” that cache the application and only rebuild the payloads (pages) if the underlying application code has not changed (https://nuxtjs.org/blog/going-full-static#smarter-nuxt-generate). With a feature like that, I’d be curious how these benchmarks change on subsequent generations that benefit from a “primed” (cached) application.
I’ve played around with the idea of cold v warm builds from the start, but chose to keep the first pass at this a very simple, baseline comparison. I plan to introduce the idea of warm builds to the mix here soon, allowing SSGs to make use of their caching mechanisms. I’m super curious to see how that impacts the results.
Interesting idea. I like that it doesn’t show absolute build times: those vary too much between machines to be useful, but relative comparisons between different site sizes and different SSGs are great and novel.
I was hoping to piggyback on the testing infrastructure to benchmark my own SSG (https://soupault.neocities.org) but it’s “a bit” tricky since it doesn’t use front matter (it extracts metadata from HTML itself similar to microformats).
Makes me think I need to add a way for users to automatically migrate from an SSG that does use front matter.
Cool, Dan! If you can get it working with markdown, you’re welcome to add your solution to the mix.
And I agree. More and more SSGs are (in some way) supporting markdown along with frontmatter. If you aren’t going to support it, you’ll probably want some sort of importer or transformer so editors can work how they’re already comfortable working.
Can you tell us the vCPU + memory of your VPC as well as the memory + CPU utilisation of the builds?
I added a page to the results site that shows machine specs on which the tests were run. I also re-ran everything in a dedicated environment, though I’m looking to simplify further, so this may continue to change.
I’m glad I switched from Jekyll to Hugo earlier this year. I couldn’t stand the slow build times, and I can’t imagine using anything even slower than that. The downside to Hugo is that the docs are a mess.
This is a tricky chart. The build times are so different because of the underlying structures and requirements each one uses. For example, Gatsby is compatible with GraphQL but Hugo is not; Hugo is written in Go, and the others are written in JS. There are too many factors to compare, and simply comparing the build speed does not explain the problem.
Did you use the `--incremental` flag in Eleventy v0.11.1? It seems to be much quicker. And just wait for Eleventy v1.0.0…

I’m on the edge of my seat for Eleventy v1.0! I’m using it for a few personal sites (including the results site for this project). I’d be happy to bump the version in the tests and re-run when the new one comes out.
I’m also exploring what it looks like to include warm (i.e. incremental) builds into the mix here. No updates yet.
The funny part about all this is that it really doesn’t matter.
Building your static site will happen either locally or via CI, then the static files will get deployed to your hosting, so the build time does not factor into anything.
You could argue it does factor in during development, but all SSGs have live updating during development, and only the files you change get rebuilt, so that live build time can be ignored too.
Choosing an SSG is not about speed; it is about what language you are comfortable with, whether you’d like to contribute to its code or build plugins, and its workflow and framework.
So, the only thing I learned is that Gatsby must have a terrible codebase to be slower than Jekyll, purely based on Node vs. Ruby speed comparisons.
I disagree on most of your points here.
The build time absolutely factors in. It is the amount of time between pushing code and seeing it live in production. Previewing unpublished content is still a big challenge with Jamstack sites. Content editors don’t want to wait a half hour to see how their content will look in production.
Sure, “live updating” is a common theme among SSGs’ development environments. But incremental builds are absolutely not a universal attribute of static site generators. It’s a difficult problem to solve and not all have solved it. Take Eleventy as an example. It rebuilds the entire project on every update.
I wanted to include incremental builds in these tests, but I chose not to because it’s not the same across the board. I may add this to the project in the future as the landscape matures.
There are many factors to consider when choosing which SSG(s) to invest in. It’s up to you to build the list of attributes that determine what you value. And that’s fine if this is your list. But I encourage you to keep build performance as one of those considerations.
First, I’m sorry that’s the only thing you learned from this article. My goal was to have a more nuanced conversation after studying the data.
That said, Gatsby’s codebase is not terrible. I did not make that conclusion when analyzing this data. Gatsby is complex. And it has many benefits that may be alluring to dev teams, despite its poor build performance. I find the fact that it’s so popular as evidence that the product (and its codebase) is actually spectacular — that devs love working with it, despite its build troubles at scale.
In some cases, the build time is very important. That’s true in my case, for example, as it is for many other developers entering the world of JAMstack.
We use a free Netlify plan, which offers only 300 minutes per month for the build process. With Gatsby on Netlify, each build takes 5 to 15 minutes for a small test website. That limits us to one build per day, maximum.
Instead, with Hugo on Netlify you get 3 or 4 builds per minute.
Hi,
Thanks for your work!
But I’m asking myself, who uses an SSG for a website with thousands of pages?!
An example of a real case:
https://www.smashingmagazine.com/2020/01/migration-from-wordpress-to-jamstack/
https://blog.cloudflare.com/new-dev-docs/
comparing-static-site-generator-build-times is a great idea, and thank you for sharing it. I would like to know the steps for adding one or more static site generators (like Docusaurus) to the ssg folder in order to compare them and generate the report.
Any lead regarding this would help us a lot.
Thank you in advance.
Love that you want to add other SSGs to the project! To do so, follow the process outlined here. If you get stuck at some point, please create an issue that explains where you are stuck and shares your code.
I conducted a test of Saaze vs. Hugo vs. Zola:
https://eklausmeier.goip.de/blog/2021/11-13-performance-comparison-saaze-vs-hugo-vs-zola
This shows that Zola fares favorably against Hugo.
Outdated now…
Blades claims a 10× improvement with Rust code.
https://github.com/grego/blades