There were jokes, coming back from the holiday break, that JavaScript had decided to go all server-side. I think it was rooted in:
- The Basecamp gang releasing Hotwire, which looks like marketing panache around a combination of technologies. “HTML over the wire,” they say, meaning it makes the server generate and serve HTML, and leaves client-side JavaScript to things only client-side JavaScript can do.
- The React gang introducing Zero-Bundle-Size React Server Components, which I believe is the first step of the core project toward server-side anything.
I’m all about some marketing hype, but it’s worth noting that these are just fresh takes on already solid (dare I say old) ideas.
Turbo (“The heart of Hotwire”) is an evolution of Turbolinks, which has a terrifically simple base idea: intercept clicks on internal links. Rather than letting the browser do a full page refresh, fetch the contents of the new page, plop it in place, and History.pushState() the URL. Now you’ve got a Single Page App feel, but you didn’t have to build a SPA. That’s mighty convenient if you’ve already built your app in Rails with ERB templates.
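The pattern is simple enough to sketch in a few lines of vanilla JavaScript (illustrative only, not Turbo’s actual source):

```js
// Sketch of the Turbolinks idea: hijack internal link clicks,
// fetch the next page, swap the <body>, and update the URL.
document.addEventListener("click", async (event) => {
  const link = event.target.closest("a");
  if (!link || link.origin !== location.origin) return; // internal links only

  event.preventDefault();
  const html = await fetch(link.href).then((response) => response.text());

  // Parse the fetched document and plop its <body> in place.
  const doc = new DOMParser().parseFromString(html, "text/html");
  document.body.replaceWith(doc.body);
  document.title = doc.title;

  history.pushState({}, "", link.href); // SPA feel, no full refresh
});
```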
But is that actually efficient? Well, the approach hasn’t been particularly popular so far. The thinking has been that the network is the bottleneck, so let’s send as little as possible over the network. “As little as possible” typically translates into JSON. If you get JSON on the client, now you need a templating system on the client to turn it into usable DOM. With that technique, you’re paying two costs: 1) loading a client-side library, and 2) data-to-DOM processing. If you send “HTML over the wire,” you pay neither of those costs (faster), but you’re theoretically sending beefier payloads across the network (slower), which assumes that HTML is heavier than JSON, which is… questionable.
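To make those two costs concrete, here’s a rough sketch of both approaches (the endpoints are hypothetical, purely for illustration):

```js
// Hypothetical endpoints, just to illustrate the trade-off being described.
const list = document.querySelector("ul.comments");

// JSON over the wire: smaller payload, but the client must template it into DOM.
async function renderFromJSON() {
  const data = await fetch("/api/comments.json").then((r) => r.json());
  list.innerHTML = data.map((c) => `<li>${c.author}: ${c.text}</li>`).join("");
}

// HTML over the wire: beefier payload, but zero client-side templating.
async function renderFromHTML() {
  list.innerHTML = await fetch("/comments").then((r) => r.text());
}
```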
So… it depends. It depends on how big the payloads are and what is expected to be done with them.
You’d expect the React opinion would be: definitely use the client. But that’s not true with the new preview of Server Components. The video is abundantly clear: “rendering” the components on the server is faster, particularly in nested component situations where many of the components are responsible for fetching their own data. So what comes across the network then? Is it DOM-ready HTML? Not here. From a peek at the video, it looks like the network response is some proprietary format¹ that describes a React component. That seems important, because it means the client-side JavaScript bundle doesn’t contain that component at all, and state² can be passed back and forth. Lauren Tan is also clear in the video: this is kinda SSR, but distinct from how something like Next.js does SSR today. And the point is to make the Next.js of tomorrow far better.
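For a sense of the shape of it, the demo splits components by where they run, using file names to mark the boundary. The sketch below is loosely based on that convention, simplified and hypothetical rather than the demo’s exact code (fetchNote is a made-up data helper):

```jsx
// Note.server.js — a server component: it never ships in the client bundle,
// and it can fetch its own data right where it renders.
import NoteEditor from "./NoteEditor.client.js";

export default async function Note({ id }) {
  const note = await fetchNote(id); // hypothetical data-fetching helper

  return (
    <article>
      <h1>{note.title}</h1>
      {/* Only the client component below is shipped as JavaScript. */}
      <NoteEditor initialBody={note.body} />
    </article>
  );
}
```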
So: servers. They are just good at doing certain things (says the guy typing into his WordPress blog). There does seem to be some momentum toward doing less on the client, which I think most of us would agree has been taking on a bit much lately, with asset sizes doing nothing but growing and growing.
Let’s push those servers to the edge while we’re at it.
¹ It is a proprietary format. I’m told it’s like “JSON with holes,” that is, chunks of JSON that are whitespace/newline separated. But while the format matters a little, because you might find yourself inspecting network requests for debugging reasons, this is React talking to React; it’s not an open API where the format would matter much more.
² The main “state” being passed is the current route. I’m told you pass as little as possible to the server. The server holds no state.
Oh yes. Please. It took you guys 10 years to realize we were heading the wrong way, but HURRAY.
I objected to this entire move-everything-to-client-JS thing for much more fundamental reasons, like progressive enhancement, but hey, whatever stops this madness is fine with me.
(And no – please don’t let the next hype be CDN EVERYTHING!!1)
You don’t like CDNs?
On the flip side, if you need to build a non-web client (mobile, for example), you may have lost all those convenient APIs to source your data from, since you didn’t build them to support your SPA.
That’s not entirely true, right? I’d argue that in a well-designed system, you’d be able to pop in some controllers that leverage your existing services to provide an API.
Also, these SSRs are using the same API on the server, so that remains intact…
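That’s the crux of it: if the business logic lives in services, exposing an API is mostly plumbing. A hypothetical Express controller makes the point (all names here are made up):

```js
// Hypothetical: expose an existing server-side service as a JSON API
// for non-web clients, without rewriting any business logic.
const express = require("express");
const { getPostsForUser } = require("./services/posts"); // existing service (assumed)

const app = express();

app.get("/api/users/:id/posts", async (req, res) => {
  // The same service the HTML views use, now serving mobile clients as JSON.
  res.json(await getPostsForUser(req.params.id));
});

app.listen(3000);
```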
What I want is:
- component-driven development with JSX (and TypeScript)
- static/server-generated HTML
- dynamic component reuse on the client
- only ship dynamic components to the browser (this is where Next/Gatsby etc. fail: they send your static content twice, once as HTML, then as JS/JSON, then eat the added cost of rehydration); there’s a lot of good work in this area, including Microsite, which I’m using now (and contributing to as of today); Server Components can likely achieve this too (see the sketch after this list)
- build tools that make the static/dynamic distinction a first-class concept by providing appropriate configuration APIs
- dynamic client implementations that make the static/dynamic distinction a first-class concept by providing appropriate runtime APIs
- compilers that can automate the static/dynamic distinction with static analysis; Marko is already doing all of this except support for JSX
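Here’s a sketch of what that static/dynamic split can look like in component code. The hydrate marker and its library are hypothetical, not any specific library’s exact API:

```jsx
import { useState } from "react";
import { hydrate } from "some-partial-hydration-lib"; // hypothetical marker

// Static: rendered to HTML at build/request time, never shipped as JS.
function Article({ title, children }) {
  return (
    <article>
      <h1>{title}</h1>
      {children}
    </article>
  );
}

// Dynamic: wrapped in the hydration marker, so this is the only
// component the browser actually pays to download and run.
const LikeButton = hydrate(function LikeButton() {
  const [likes, setLikes] = useState(0);
  return <button onClick={() => setLikes(likes + 1)}>♥ {likes}</button>;
});
```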
html {
  Server: js
}
You should all try Aurelia.
I love HTML, CSS and JavaScript. It’s my belief that they can handle most any user interface application.
That being said, it’s not easy to manage state, have one- or two-way binding, simple routing, etc.
That’s where Aurelia comes in: it provides just that, in a KISS fashion. I’m actually writing HTML, CSS & JS, and Aurelia is not obtrusive at all, just extremely handy, while letting me keep the spirit of what I love doing. (No weird HTML-in-JS stuff, easy state management, beautiful components.)
Add to that, you can use Pug, Sass and TypeScript if you’d rather, easily, easiliestly.
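(To give a flavor of that non-obtrusive style, here’s a tiny sketch in the shape of an Aurelia view/view-model pair; treat the details loosely, this is from memory rather than the docs:)

```html
<!-- app.html: plain HTML, with ${} interpolation and .bind for two-way binding -->
<template>
  <input value.bind="name" placeholder="Your name">
  <p>Hello, ${name}!</p>
</template>
```

```js
// app.js: the view-model is just a plain class, no framework noise
export class App {
  name = "world";
}
```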
You’re not doing “Aurelia”; they don’t want to take up space, they want to disappear and let you do what you love.
https://aurelia.io/
(It’s also open source; there’s no GAFAM or any big tech behind it ;) ).
Keep the web open, transparent and beautiful.
You are assuming rendering is faster on the server; that’s true only when you have sufficient computational resources there, and that may not be the case.
The origin server can be heavily overloaded if all the computation goes there, so the answer must be an edge server. However, you have to use a good CDN for good edge distribution, and then you have to use that very CDN provider’s edge servers.
Let’s say it’s Cloudflare. The edge server then has to be a Cloudflare Worker. CF claims that Workers scale, which is good. However, they scale at a price: you have to pay per request once you exceed the daily or monthly limit, though it’s already about the cheapest option, I guess.
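(For context, edge rendering in a Worker is roughly this much code; a minimal sketch assuming Cloudflare’s module Worker syntax, not a real app:)

```js
// Minimal Cloudflare Worker that renders HTML at the edge.
// A real app would render actual components here, not a template string.
export default {
  async fetch(request) {
    const { pathname } = new URL(request.url);
    const html = `<!doctype html><h1>Rendered at the edge: ${pathname}</h1>`;
    return new Response(html, {
      headers: { "content-type": "text/html;charset=UTF-8" },
    });
  },
};
```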
And let’s be realistic: Cloudflare is not magic. It runs its own clusters at the edge, and those have their own capacity. It would never be enough if every site running on it were fully server-side rendered.
However, if you do it purely on the client side, you might be able to keep your frontend purely static. No computation on the server means your origin server and edge servers are happier, therefore you are happier, and finally your users are happier.
Another thing is that edge servers are slow. A Cloudflare Worker is strictly slow compared with a user’s device. The user probably has a multi-core, not-bad CPU, probably a usable GPU, and sufficient memory (though not all of it is there for you). On edge servers, well, you know how bad it is.
I find that to be exactly the opposite.
@Chris Coyier
For IO-bound tasks, I agree that CF Workers can be faster, since they can fetch data from the CF cache and CF KV faster, and they can usually reach the origin server faster.
But for computation-bound tasks, users’ devices are faster, and you can’t even do heavy computation on CF Workers because of the 128 MB memory / 50 ms CPU limits.
But server-side rendering has its own disadvantages:
* If you do it without client-side rendering, you have a not-so-slow first load, but a not-so-fast second load.
* If you do it both ways, you have a fast first load and a fast second load, but you download everything twice on first load.
Using the Link: preload header (and link[rel="preload"], and caching, like service workers) instead of all these SSRs, you have:
* a fast first load, as fast as your SSR if you can do server push, and only one additional round trip otherwise,
* a fast second load, faster than anything without preloading,
* minimal downloads,
* and you keep the computation client-side: you pay less, and your users won’t mind that 128 MB / 50 ms.
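For reference, that preload technique is just a tag or a header (a generic sketch, with placeholder paths, not tied to any particular app):

```html
<!-- Preload the script and data the client-side app needs, so those requests
     start while the HTML is still parsing, instead of after the bundle runs. -->
<link rel="preload" href="/app.js" as="script">
<link rel="preload" href="/api/posts.json" as="fetch" crossorigin>
```

The same hint can be sent as an HTTP Link: response header, which is also the form that HTTP/2 server push implementations typically key off of.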