How To: Reuse Database Connections in AWS Lambda

Update 9/2/2018: I wrote an NPM module that manages MySQL connections for you in serverless environments. Check it out here.

I work with AWS Lambda quite a bit. This Functions-as-a-Service (FaaS) offering has dramatically reduced the complexity and hardware needs of the apps I work on. This is what’s known as a “Serverless” architecture, since we don’t need to provision any servers in order to run these functions. FaaS is great for a number of use cases (like processing images) because it scales immediately and nearly infinitely when there are spikes in traffic. There’s no longer a need to run several underutilized processing servers just waiting for someone to request a large job.

AWS Lambda is event-driven, so it’s also possible to have it respond to API requests through AWS’s API Gateway. However, since Lambda is stateless, you’ll most likely need to query a persistent datastore for it to do anything exciting. Setting up a new database connection is relatively expensive; in my experience it typically takes more than 200ms. If we have to reconnect to the database every time we run our Lambda functions (especially if we’re responding to an API request), then we’re already adding over 200ms to the total response time. Add that to your queries and whatever additional processing you need to perform, and it becomes unusable under normal circumstances. Luckily, Lambda lets us “freeze” and then “thaw” these types of connections.

Update 4/5/2018: After running some new tests, it appears that “warm” functions now average anywhere between 4 and 20ms to connect to RDS instances in the same VPC. Cold starts still average greater than 100ms. Lambda does handle setting up DB connections really well under heavy load, but I still favor connection reuse as it cuts several milliseconds off your execution time.
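The gist of the pattern is to declare the connection outside the handler so that a “thawed” (warm) container can reuse it instead of reconnecting. Here’s a minimal sketch in Node.js, assuming the mysql npm package and hypothetical DB_* environment variables for the credentials:

```javascript
// Minimal sketch of connection reuse in a Node.js Lambda handler.
// Assumes the `mysql` npm package and hypothetical DB_* environment variables.
const mysql = require('mysql');

// Declared outside the handler so it survives in the "frozen" container
// between warm invocations and is only created on a cold start.
let connection;

exports.handler = async (event, context) => {
  // Don't make Lambda wait for the open connection before returning the response.
  context.callbackWaitsForEmptyEventLoop = false;

  // Only create the connection if this container doesn't already have one.
  if (!connection) {
    connection = mysql.createConnection({
      host: process.env.DB_HOST,
      user: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME,
    });
  }

  // Wrap the callback-style query in a Promise so we can await it.
  const rows = await new Promise((resolve, reject) => {
    connection.query('SELECT 1 AS ok', (err, results) => {
      if (err) return reject(err);
      resolve(results);
    });
  });

  return { statusCode: 200, body: JSON.stringify(rows) };
};
```

Note that this is just the reuse pattern, not the full module mentioned above; handling dropped or errored connections (and MySQL’s connection limits under heavy concurrency) is what that module takes care of.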

Continue Reading…

Going Serverless

Design patterns and server architecture change over time as technology changes. Advancements in cloud computing have created unprecedented opportunities for organizations large and small to leverage shared resources to build faster, more reliable application stacks. This allows organizations to better serve their customers with highly available services tailored to the needs of narrower customer segments. With this new level of flexibility and power, organizations must choose how best to utilize these resources to maximize their efforts and maintain the iteration capacity to adapt quickly in rapidly changing markets.

Continue Reading…

An Introduction to Serverless Architecture

I’ve been building web applications for nearly 20 years, and the most difficult problem has always been scaling the architecture to support heavy load. With the advent of cloud computing services like Amazon Web Services and Google Cloud Platform, the cost of scaling has been dramatically reduced, but the same underlying scaling problems still exist. When dealing with data, you still need to build complex methods of efficiently accessing records. These are challenges fit for cloud engineers, not for your average group of developers. Several months ago Amazon Web Services released two new services that create a new paradigm, one that not only makes it easier to create scalable applications in the cloud but essentially eliminates server maintenance altogether. It has been coined “Serverless Architecture,” and it could be the future of cloud computing.

Continue Reading…