What is Serverless Computing?

Mike Diaz
5 min read · Jul 2, 2020
Cartoon servers with a line through them. Text that says “Deploy applications without fiddling with servers.”

As an engineer, it’s not uncommon to start investigating a new concept, only to learn that understanding it requires drilling down into a web of practices, processes, and connections. In my quest to understand different types of monitoring (still working on that one), I kept running into the phrase serverless. I would come to learn that this method of application processing and deployment has exploded in popularity over the last 5 years and offers some exciting benefits over its alternatives.

What is the default? What did we use before serverless?

Let’s get one thing out of the way: serverless is a bit of a misnomer because programs require servers to run. Servers store and manipulate every piece of data we code, collect, and organize, from databases to variables. Without a server, code can’t work.

var num = 1;
Server required.

If you’re a beginner, you might be writing small programs that can use your own computer as a server, also known as localhost (a host is a device used to run a server program, hence local-host). As our programs grow, however, we require external hosts to run all of our code and store all of our data. We also require a larger team, whose members may need to work on different parts of the project at different times but still need to run the entire program to check their work. This complicates things.
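To make “localhost” a little more concrete, here’s a minimal sketch (assuming Node.js) of a tiny server that runs entirely on your own machine:

// A tiny HTTP server that uses your own computer as the host.
// Visiting http://localhost:3000 in a browser returns the response below.
const http = require('http');

const server = http.createServer((req, res) => {
  res.end('Hello from localhost!');
});

server.listen(3000); // only reachable from this machine unless you expose it

This works fine for one developer, but it’s exactly the setup that stops scaling once a whole team and real traffic get involved.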

Servers, just like our personal computers, must be configured and provisioned before they can run certain programs. They need a specific operating system, licenses, packages, development kits, and more. Historically, this meant a DevOps or IT Engineer building, provisioning, and configuring a server to run the program you needed. If something changed about that program, something changed about the server. And there might be a lot of servers to update.

Rows of servers in a server farm
Photo by Manuel Geissinger from Pexels

Virtualization changed this, giving servers the ability to run multiple operating systems and providing engineers with an interface to easily select an OS, as well as other convenient variables such as RAM. This revolution continued with the advent of containers, which are well defined by Docker: software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. With containers, it doesn’t matter what the server’s OS looks like, because the container image is configured for your program and can be used to build a container that runs it on any platform. Hence container: the environment is irrelevant because our world lives inside this “box.”

A diagram showing containerized applications. Each app has its own container that goes through Docker before hitting hardware
yay!
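To make that packaging idea concrete, here’s a minimal sketch of a Dockerfile for a small Node.js app (the base image and file names are hypothetical, just to show code and its dependencies travelling together):

# Start from a base image that already contains the Node.js runtime.
FROM node:14

# Copy the application into the image and install its dependencies there.
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .

# Define how any host, regardless of its own OS, runs the program.
CMD ["node", "server.js"]

Build the image once and the resulting container runs the same way on a laptop, a data-center server, or a cloud VM.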

So what’s wrong with that?

Containers are great because they allow for full environmental customization: your team can build your containers however you like them and change them at will. But that also means there’s significant overhead in getting started. You’ll want to plan ahead and consider all of your potential needs while configuring. If your site gets more traffic than you were expecting or you have to add a new dependency, it’s back to the drawing board to re-configure your containers. This isn’t prohibitive, but it’s inconvenient to say the least.

A relevant and common example of this challenge is the number of containers, or the amount of server space, needed to host an application. One container lives on one machine at a time, so even though you might not need to uniquely provision each of those servers, you do have to figure out how many your product will require. That means making a deal with a vendor for a certain amount of server space. If you don’t use all that space, you’ve paid for something you don’t need. If traffic to your website spikes, you’ll have to scramble to buy more space, and much of it will likely go to waste once the spike subsides. Cost is frequently cited as the primary practical difference between serverless computing and competing solutions.

Okay, so how does serverless scale?

Serverless computing runs as a series of functions, and serverless vendors charge their clients based on how long each function runs. If 90% of your site traffic occurs between 9am and 10am, 90% of your server costs will be incurred between 9am and 10am. For all the time your application is idle, you incur no cost.
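To give a sense of what one of those functions looks like, here’s a minimal sketch using AWS Lambda’s Node.js handler convention (the event fields assume an HTTP trigger; you’re billed only while the handler is actually executing):

// A serverless function: no server to provision, just this handler.
// The vendor runs it on demand and charges for the execution time.
exports.handler = async (event) => {
  const name = (event.queryStringParameters || {}).name || 'world';
  return {
    statusCode: 200,
    body: JSON.stringify({ message: `Hello, ${name}!` }),
  };
};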

This makes serverless solutions ideal for startups: everything is unpredictable and you want to be able to grow fast. You don’t have the time or resources to configure and maintain containers, much less physical servers, nor do you want to worry about hitting capacity when your site finally breaks through and the customers come streaming in.

A cost benefit analysis of serverless vs containers. Serverless costs increase linearly.
We want to avoid those big price bumps.

So then everyone must be using serverless, right?

According to Businesswire, the Function-as-a-Service (FaaS) market size is estimated to grow from USD 1.88 billion in 2016 to USD 7.72 billion by 2021 (FaaS being another term for serverless, since it breaks our programs down into functions). That’s a big number, but it’s not everyone. Serverless solutions work great for scaling, but at a certain point the pay-as-you-go model doesn’t return as much value, and if you’re consistently dealing with heavy loads, it can actually be more expensive than running containers. Likewise, though abstracting away customization can be convenient, it can also be restrictive for organizations that want a lot of control over their servers. Serverless architecture hands off a lot of decisions to the vendor.

Conclusion: who are the players?

Most of the resources I read stressed that serverless is new and exciting, but shouldn’t automatically be considered “better” than containers for the reasons I’ve listed, among others. To close things out, I’ll provide the articles I used for reference in this piece, as well as a list of providers of both serverless and container-based solutions. If you’re anything like me, it’ll be helpful to see some of these names and realize that you now know a little bit about what they actually do.

Popular Container Management Software:

  • Docker Platform
  • Kubernetes Engine
  • Amazon Elastic Container Service (Amazon ECS)

Popular Serverless Vendors:

  • AWS Lambda
  • Cloudflare Workers
  • Google Cloud Functions
  • IBM Cloud Functions

Sources:

TechCrunch introduces us to AWS Lambda

Frazer Jamieson looks at the serverless revolution through the eyes of an industry veteran

TechTarget explains server virtualization

Docker on containers

Agent of Change presents a comparison of AWS Lambda and Kubernetes on YouTube

Cloudflare’s lesson on Serverless vs Containers

Businesswire predicts that serverless solutions will continue to grow in market share
