I made a site all about serverless and how it relates to front-end developers.
Every time I use the word “serverless”, which is somewhat regularly these days, as we’ve had a few articles using the term and we use the concept at CodePen for a variety of things, I get some version of:
CMON BRAH YOU’RE STILL USING “SERVERS”.
And they aren’t wrong. Yes, when you build things on the web, there are always servers involved. Always. Whether it’s some old computer in a church basement, a computer in a rack at some big hosting company, or “The Cloud”, it’s a server.

I rolled my eyes at the term the first few times I heard it too. But now I’m hesitant to call it a bad term, in part because it’s really stuck, and there is something to be said for new terms that catch on so strongly. Also in part because it signifies a dramatic change in how you can use servers. It’s different economically, different devops-wise, and different in how you code for them.
Many of us are aware that a server is a computer. There are various ways to buy them, but you buy them. Here’s some money, here’s your server. It might be virtual, but it’s still something you’re responsible for. You put software on it. You spin them up and spin them down. You load balance them. You make choices about how much memory and disk space they have. You’re in charge of provisioning and managing them.
What serverless is trying to mean, it seems to me, is a new way to manage and pay for servers. You don’t buy individual servers. You don’t manage them. You don’t scale them. You don’t balance them. You aren’t really responsible for them.
You just pay for what you use. For example, AWS Lambda is free for 1,000,000 requests and then costs $0.0000002 per request after that. Cheap. Just this week Firebase launched “functions” which are essentially a serverless concept, and their $25 a month plan has 2,000,000 requests (along with all the rest of the stuff Firebase gets you).
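At those prices, the arithmetic is easy to sanity-check. A quick sketch using the free-tier and per-request numbers quoted above (request charges only; Lambda also bills for compute time):

```python
FREE_REQUESTS = 1_000_000
PRICE_PER_REQUEST = 0.0000002  # $0.20 per million requests after the free tier

def lambda_request_cost(requests):
    """Rough monthly request cost under the pricing quoted above."""
    billable = max(0, requests - FREE_REQUESTS)
    return billable * PRICE_PER_REQUEST

print(lambda_request_cost(3_000_000))  # 2M billable requests, about $0.40
```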
That doesn’t work for all applications. It works for things in which you can write some code that is designed to take some stuff, do some work, and return some new stuff. You write an API.
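That “take some stuff, do some work, return some new stuff” shape is just a function. A minimal Lambda-style sketch, where the event field (“name”) is a made-up example rather than any real API’s contract:

```python
import json

def handler(event, context):
    """A tiny function-as-a-service handler: take input, do work,
    return output. No server to provision, patch, or scale."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"Hello, {name}!"}),
    }

# The logic is testable locally by just calling the function:
print(handler({"name": "CodePen"}, None))
```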
You don’t have to go all-in with the “serverless” idea. You can, and I imagine most people do, use it for things that make sense to use it for, and use traditional servers for the rest.
A marketing term to keep front-end devs focused on what they know,
to prevent them from being tempted to dive into the current devops hype?
I had to laugh when a former client decided that going from an onsite IBM 720 series to a “cloud based” ERP would be quicker, faster, and propel the organization into the future. I still don’t think they understand why RDP’ing into their cloud-based ERP that’s “GUI” vs “green screen” is so slow… It’s all a server no matter which way you flip it.
I can’t imagine people taking “serverless” literally… Do they? I mean, of course there are servers, what do people think their browser is connecting to? :)
Building things with a serverless architecture is very rewarding. For example, take this scenario: we have a share page for content that users are creating, share.example.com/abc123. The naive approach is simple. We provision some servers, have them respond to requests sent to share.example.com/:short_id, have them look up short_id in the database, and have them return something to the browser.

But wait… What happens if a user shares some content that goes viral? What if we have 1k, 10k, or 100k simultaneous requests to this content? Do we need to scale our servers responding to share.example.com/v1r4l? Do we need to scale our database because of the increasing number of requests? Do we need to implement some sort of caching?

Now, here’s the serverless approach. We generate the HTML of the share page at content creation time. We stick it into an S3 (or GCP or Azure equivalent) bucket, and we serve it through a global CloudFront (or GCP or Azure equivalent) distribution. We read from the database only once per share, and we have 0 (zero!) servers to maintain and scale. When we get 100k simultaneous requests, we have to… do nothing. It’ll Just Work.
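The flow described above can be sketched in a few lines. The template, bucket name, and key layout here are all hypothetical placeholders; the render step is the part that runs once, at content-creation time:

```python
SHARE_TEMPLATE = """<!doctype html>
<title>Shared: {short_id}</title>
<h1>{title}</h1>
<p>{body}</p>"""

def render_share_page(short_id, title, body):
    """Generate the share page HTML once, when the content is created."""
    return SHARE_TEMPLATE.format(short_id=short_id, title=title, body=body)

def publish_share_page(short_id, html, bucket="my-share-pages"):
    """Push the pre-rendered page to S3; a CDN distribution in front of
    the bucket serves it from there. Bucket and key are placeholders."""
    import boto3  # deferred so the render logic is testable without AWS
    boto3.client("s3").put_object(
        Bucket=bucket,
        Key=f"{short_id}/index.html",
        Body=html.encode("utf-8"),
        ContentType="text/html",
    )

html = render_share_page("abc123", "My Pen", "Hello!")
# publish_share_page("abc123", html)  # one write per share; nothing to scale after
```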
Serverless architecture rocks.
What you describe seems to be caching since what you serve from S3 is just static content. To be classified as “serverless” the service should be running some backend code which you wrote.
Personally, I think the term “serverless” is bad. “Cloud Functions”, as Firebase calls them, is a more descriptive term.
Static generated content !== caching
But, sure, what I described (serving static generated content) is not technically “serverless computing”, because it’s not actually running any custom code, like you pointed out.
This also reminds me of the term “offline first”, which can be a bit eyerolling at first because, of course, the web can’t work at all unless some kind of network connection is made, which needs to be “first”.
But I think the “first” part refers more to philosophy. About designing to deal with a broken network with the same or more attention than a working one.
Also, “the cloud” is a strange name, since the actual machines that make it up are often stored in remote underground facilities, which benefit from natural cooling by being located several meters below the earth’s surface and produce heating for above-surface houses (basic gas dynamics: hot air travels upward). We could very well call it “the mole”, but I guess that would be less marketable, as it doesn’t give off that “freedom” feeling the sky recalls.
An example of using AWS Lambda’s “serverless” architecture: when I worked for Stamen Design, we used it to generate web map tiles from geospatial data. Typically you would need a “tile server” to do this, which involves server creation, maintenance, etc. With Lambda, we could invoke some Python code that reads the geographic data and writes tiles (256×256 pixel images) to S3, which are then sent to the client. If the tile already exists, the code is not run. You can read more about the approach here: https://hi.stamen.com/stamen-aws-lambda-tiler-blog-post-76fc1138a145#.5od5om8tf
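The “if the tile already exists, the code is not run” logic boils down to a cache-or-generate check. A hedged sketch, not Stamen’s actual code: the key layout follows the standard z/x/y web-map scheme, and `exists`, `render`, and `store` stand in for an S3 existence check, the rasterizing code, and an S3 upload:

```python
def tile_key(z, x, y):
    """S3 key for a 256x256 tile, using the common z/x/y addressing scheme.
    The exact path prefix is a placeholder."""
    return f"tiles/{z}/{x}/{y}.png"

def get_or_render_tile(z, x, y, exists, render, store):
    """Serve the cached tile if it's already in the bucket; otherwise
    invoke the (expensive) render from geodata and store the result."""
    key = tile_key(z, x, y)
    if exists(key):
        return key  # tile already generated; skip the work entirely
    store(key, render(z, x, y))
    return key

# Demo with a dict standing in for the S3 bucket:
cache = {}
calls = []
def render(z, x, y):
    calls.append((z, x, y))
    return b"\x89PNG fake tile bytes"

get_or_render_tile(3, 4, 2, cache.__contains__, render, cache.__setitem__)
get_or_render_tile(3, 4, 2, cache.__contains__, render, cache.__setitem__)
# render ran only once; the second request hit the stored tile
```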