An Introduction to Serverless Architecture

I’ve been building web applications for nearly 20 years, and the most difficult problem has always been scaling the architecture to support heavy load. With the advent of cloud computing and services like Amazon Web Services and Google Cloud Platform, the cost of scaling has been dramatically reduced, but the underlying problems of scaling still exist. When dealing with data, you still need to build complex methods of efficiently accessing records. These are challenges fit for cloud engineers, not for your average group of developers. Several months ago Amazon Web Services released two new services that create a new paradigm, one that not only makes it easier to create scalable applications in the cloud, but essentially eliminates any server maintenance. It has been coined “Serverless Architecture,” and it could be the future of cloud computing.

Let’s start with some background. Your typical cloud-based application normally starts with a cluster of servers to handle your incoming web requests. In more modern applications these would most likely be RESTful APIs taking requests from your web and mobile apps. In order to spread your traffic across multiple machines, you’d simply put a load balancer in front of them. Amazon’s ELB (Elastic Load Balancer), for example, even handles SSL termination and health checks for you, which means you can already reduce the complexity of your backend cluster. Now your traffic is distributed across however many servers are in that cluster. You would most likely enable Auto Scaling so that during times of heavy peak traffic additional servers spin up. An application such as this would probably need to be stateless, so you’ll be passing in a token on every request that identifies and authenticates the call. You would probably also enable a few layers of caching to reduce load on your data layer.
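The token-per-request idea above is what makes any server in the cluster interchangeable: authentication lives in the token, not in server-side session state. Here is a minimal sketch of that pattern in Python, using an HMAC-signed token. The secret, function names, and response strings are all illustrative, not from any particular framework:

```python
import hashlib
import hmac

# Hypothetical shared secret; in a real deployment this would come from
# configuration and be shared by every server behind the load balancer.
SECRET = b"example-secret"

def issue_token(user_id: str) -> str:
    """Sign a user id so any server can later verify it without a session store."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def handle_request(token: str) -> str:
    """Authenticate purely from the token -- no server-side session needed."""
    user_id, _, sig = token.partition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return "401 Unauthorized"
    return f"200 OK: hello, {user_id}"
```

Because verification needs only the shared secret, the load balancer can route each request to any machine in the cluster, which is exactly what lets Auto Scaling add and remove servers freely.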

The example above is a very simple use case, but fairly standard. For most technical people, setting this up is relatively simple, but it requires knowledge of server architecture, configuration, etc. These servers need to be maintained, upgraded, and patched. If there is a software problem, then typically you, the cloud service’s customer, have to fix it. You can get even more complex and start configuring your servers using Chef recipes and OpsWorks. That means writing cookbooks to deploy servers, assign security groups, map resources, and install software. If you have the resources to hire a full-time cloud engineer, go for it, but there might just be a better way.

Amazon released Lambda and API Gateway last summer, and with these two services you can nearly eliminate your complex server architecture and never need to worry about maintaining servers or scaling your application again. Lambda runs your “functions as a service,” meaning that you build stateless microservices that run inside managed containers. These containers purport to be infinitely scalable. By coupling these serverless functions with API Gateway, RESTful APIs become automatically available and can route events to the backend functions. This removes the need for load balancers and auto-scaling servers. Google, Rackspace, Microsoft, and others are working on similar services. I’m certainly going to spend a lot more time experimenting with these new services. This could completely change the way we (or at least I) build cloud applications.



