How To: Manage Serverless Environment Variables Per Stage

I often find myself creating four separate stages for each ⚡ Serverless Framework project I work on: dev, staging, prod, and local. Obviously, the first three are meant to be deployed to the cloud, but the last one, local, is meant for running and testing interactions with local resources. It’s also great to have an offline version (like when you’re on a plane ✈ or have terrible wifi somewhere). Plus, development is much faster because you’re not waiting for round trips to the server. 😉

A really great feature of Serverless is the ability to configure ENVIRONMENT variables in the serverless.yml file. This lets us store important global information like database names, service endpoints, and more. We can even reference passwords securely using AWS Systems Manager Parameter Store and decrypt encrypted secrets on deployment, keeping them safe from developers and source repositories alike. 😬 Just reference the variable with ${ssm:/myapp/my-secure-value~true} in your configuration file.
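As a rough sketch of what this can look like per stage (the service name, database names, and SSM paths below are made up for illustration), a serverless.yml might include something like:

```yaml
# Illustrative sketch only — service name, database names, and SSM paths are made up
service: myapp

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:stage, 'dev'}   # default to dev when no --stage flag is passed
  environment:
    DB_NAME: ${self:custom.dbName.${self:provider.stage}}
    # Decrypted from SSM Parameter Store at deployment time
    DB_PASSWORD: ${ssm:/myapp/${self:provider.stage}/db-password~true}

custom:
  dbName:
    local: myapp_local
    dev: myapp_dev
    staging: myapp_staging
    prod: myapp_prod
```

Each function then reads these values from process.env at runtime, no matter which stage it was deployed to.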

Continue Reading…

How To: Manage RDS Connections from AWS Lambda Serverless Functions

Someone asked a great question on my How To: Reuse Database Connections in AWS Lambda post about how to end the unused connections left over by expired Lambda functions:

I’m playing around with AWS lambda and connections to an RDS database and am finding that for the containers that are not reused the connection remains. I found before that sometimes the connections would just die eventually. I was wondering, is there some way to manage and/or end the connections without needing to wait for them to end on their own? The main issue I’m worried about is that these unused connections would remain for an excessive amount of time and prevent new connections that will actually be used from being made due to the limit on the number of connections.

🧟‍♂️ Zombie RDS connections leftover on container expiration can become a problem when you start to reach a high number of concurrent Lambda executions. My guess is that this is why AWS is launching Aurora Serverless, to deal with relational databases at scale. At the time of this writing it is still in preview mode.

Update September 2, 2018: I wrote an NPM module that manages MySQL connections for you in serverless environments. Check it out here.

Update August 9, 2018: Aurora Serverless is now Generally Available!

Overall, I’ve found that Lambda is pretty good about closing database connections when the container expires, but even if it does it reliably, it still doesn’t solve the MAX CONNECTIONS problem. Here are several strategies that I’ve used to deal with this issue.
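For example, one strategy along these lines (a sketch only, not the post’s exact code; the environment-variable names are assumptions) is to lower MySQL’s idle timeout for each connection so the server reaps zombie connections on its own:

```javascript
// Sketch of one possible strategy (not the post's exact code; the
// environment-variable names are assumptions): lower the connection's idle
// timeout so MySQL reaps zombie connections on its own if a container dies
// without closing them.
const mysql = require('mysql')

const connection = mysql.createConnection({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME
})

// Kill this connection after 60 seconds of inactivity instead of MySQL's
// default 8 hours, even if the function never gets a chance to close it.
connection.query('SET SESSION wait_timeout = 60', (err) => {
  if (err) throw err
  console.log('Idle timeout lowered for this connection')
  connection.end()
})
```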

Continue Reading…

Securing Serverless: A Newbie’s Guide

So you’ve decided to build a serverless application. That’s awesome! May I be the first to welcome you to the future. 🤖 I bet you’ve done a lot of research. You’ve probably even deployed a few test functions to AWS Lambda or Google Cloud Functions and you’re ready to actually build something useful. You probably still have a bunch of unanswered questions, and that’s cool. We can still build some really great applications even if we only know the basics. However, when we start working with new things we typically make a bunch of dumb mistakes. While some are relatively innocuous, security mistakes can cause some serious damage.

I’ve been working with serverless applications since AWS launched Lambda in early 2015. Over the last few years I’ve developed many serverless applications covering a wide range of use cases. The most important thing I’ve learned: SECURE YOUR FUNCTIONS! I can tell you from personal experience, getting burned by an attack is no bueno. I’d hate to see it happen to you. 😢

To make sure it doesn’t happen to you, I’ve put together a list of 🔒Serverless Security Best Practices. This is not a comprehensive list, but it covers the things you ABSOLUTELY must do. I also give you some more things to think about as you continue on your serverless journey. 🚀

Continue Reading…

How To: Build a Serverless API with Serverless, AWS Lambda and Lambda API

AWS Lambda and AWS API Gateway have made creating serverless APIs extremely easy. Developers can simply create Lambda functions, configure an API Gateway, and start responding to RESTful endpoint calls. While this all seems pretty straightforward on the surface, there are plenty of pitfalls that can make working with these services frustrating.

There are, for example, lots of confusing and conflicting configurations in API Gateway. Managing deployments and resources can be tricky, especially when publishing to multiple stages (e.g. dev, staging, prod, etc.). Even structuring your application code and dependencies can be difficult to wrap your head around when working with multiple functions.

In this post I’m going to show you how to set up and deploy a serverless API using the Serverless Framework and Lambda API, a lightweight web framework for your serverless applications using AWS Lambda and API Gateway. We’ll create some sample routes, handle CORS, and discuss managing authentication. Let’s get started.
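To give a flavor of what this looks like, here’s a minimal, hypothetical sketch of a Lambda API handler (the routes and payloads are made up for illustration):

```javascript
// Hypothetical sketch of a Lambda API handler — routes and payloads are made up
const api = require('lambda-api')()

// Add CORS headers to every response
api.use((req, res, next) => {
  res.cors()
  next()
})

// Sample routes
api.get('/posts', async (req, res) => {
  res.json([{ id: 1, title: 'Hello Serverless' }])
})

api.post('/posts', async (req, res) => {
  // req.body is parsed automatically
  res.status(201).json({ created: req.body })
})

// Single Lambda entry point wired up to API Gateway
exports.handler = async (event, context) => {
  return await api.run(event, context)
}
```

Because everything runs through a single Lambda function, cross-cutting concerns like CORS live in one middleware instead of being scattered across API Gateway configuration.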

Continue Reading…

How To: Stub AWS Services in Lambda Functions using Serverless, Sinon.JS and Promises

I know the title is quite a mouthful, but if you are trying to run tests on your Lambda functions that interact with AWS Services using the aws-sdk node module, then you’ve probably run into an issue stubbing or mocking the requests. In this post we’ll learn how to stub different AWS Services with Sinon.JS so that you can properly test your scripts.

UPDATE: AWS Lambda now supports Node v8.10, so we can use async/await instead of promises. The examples below still work with either v6.10 or v8.10; however, I recommend switching to async/await as it is more compact than promises. Read my post How To: Stub “.promise()” in AWS-SDK Node.js to learn how to deal with the .promise() method on aws-sdk services.

Let’s say you have a Lambda function that interacts with AWS’s SQS (Simple Queue Service). v6.10 of Node doesn’t support async/await, so you will most likely use promises if you don’t want to transpile your code or deal with callback hell. This means you need to Promisify an instance of the AWS SQS service. This is easy enough with:
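The actual snippet is in the full post, but as a stand-in sketch of the general pattern (a hand-rolled promise wrapper plus a Sinon.JS stub, with a made-up queue URL and message ID — not necessarily the post’s approach), it might look something like this:

```javascript
// Stand-in sketch (not the post's snippet): a hand-rolled promise wrapper
// around sqs.sendMessage(), and a Sinon.JS stub for testing it.
const AWS = require('aws-sdk')
const sinon = require('sinon')

const sqs = new AWS.SQS()

// Promise wrapper around the callback-style SDK method
const sendMessage = (params) =>
  new Promise((resolve, reject) => {
    sqs.sendMessage(params, (err, data) => (err ? reject(err) : resolve(data)))
  })

// In the test: stub the instance method so no real request is made
const stub = sinon.stub(sqs, 'sendMessage')
  .yields(null, { MessageId: '12345' }) // fake the SDK callback

sendMessage({
  QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/my-queue', // made up
  MessageBody: 'hello'
}).then((data) => {
  console.log(data.MessageId) // '12345'
  stub.restore()
})
```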

Continue Reading…

How To: Access Your AWS VPC-based Elasticsearch Cluster Locally

AWS recently announced that their Elasticsearch Service now supports VPC, which is awesome for a number of reasons:

1. No more signing every request

Remember this?

Every request had to be signed with AWS’s SigV4 so that the Elasticsearch endpoint could be properly authorized. That meant additional code to sign all your requests, and additional time for the endpoint to verify the signature. It might only be a few milliseconds of extra processing time, but those can add up. Now we can call our VPC Elasticsearch endpoint with a simple HTTP request.
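For instance, from inside the VPC a call can now be as simple as a plain HTTPS request (the domain endpoint and index below are placeholders):

```javascript
// Sketch: a plain HTTPS request to a VPC-based Elasticsearch endpoint — no
// SigV4 signing step. The domain endpoint and index are placeholders, and
// this has to run from inside the VPC (e.g. a Lambda function or EC2 instance).
const https = require('https')

const options = {
  hostname: 'vpc-my-domain-abc123.us-east-1.es.amazonaws.com', // placeholder
  path: '/my-index/_search?q=status:active',
  method: 'GET'
}

const req = https.request(options, (res) => {
  let body = ''
  res.on('data', (chunk) => (body += chunk))
  res.on('end', () => console.log(JSON.parse(body)))
})

req.on('error', (err) => console.error(err))
req.end()
```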

Continue Reading…

Solve Your Own Problem…And You Have At Least One Customer

Last week I was deploying an app to AWS Lambda using Serverless and I accidentally misspelled a stage name. This resulted in deploying an entire CloudFormation stack with the wrong name, which wasted time and resources. I immediately realized that I spelled it wrong, so I was able to undeploy the stack quickly. I fully admit that misspelling the name was my fault, but since Serverless can deploy complex CloudFormation stacks that provision users, databases, SNS topics, etc., it seemed like there should be a better way to avoid this simple mistake.

Continue Reading…

How To: Reuse Database Connections in AWS Lambda

Update 9/2/2018: I wrote an NPM module that manages MySQL connections for you in serverless environments. Check it out here.

I work with AWS Lambda quite a bit. The ability to use this Functions-as-a-Service (FaaS) offering has dramatically reduced the complexity and hardware needs of the apps I work on. This is what’s known as a “Serverless” architecture, since we do not need to provision any servers in order to run these functions. FaaS is great for a number of use cases (like processing images) because it will scale immediately and near-infinitely when there are spikes in traffic. There’s no longer the need to run several underutilized processing servers just waiting for someone to request a large job.

AWS Lambda is event-driven, so it’s also possible to have it respond to API requests through AWS’s API Gateway. However, since Lambda is stateless, you’ll most likely need to query a persistent datastore in order for it to do anything exciting. Setting up a new database connection is relatively expensive. In my experience it typically takes more than 200ms. If we have to reconnect to the database every time we run our Lambda functions (especially if we’re responding to an API request), then we are already adding over 200ms to the total response time. Add that to your queries and whatever additional processing you need to perform, and it becomes unusable under normal circumstances. Luckily, Lambda lets us “freeze” and then “thaw” these types of connections.

Update 4/5/2018: After running some new tests, it appears that “warm” functions now average anywhere between 4 and 20ms to connect to RDS instances in the same VPC. Cold starts still average greater than 100ms. Lambda does handle setting up DB connections really well under heavy load, but I still favor connection reuse as it cuts several milliseconds off your execution time.
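The basic reuse pattern looks something like the sketch below (the mysql module and environment-variable names are my assumptions here): declare the connection outside the handler so a thawed container can pick it right back up.

```javascript
// Sketch of the reuse pattern (the mysql module and env-var names are
// assumptions): anything declared outside the handler survives a "freeze"
// and is still there when the container is "thawed".
const mysql = require('mysql')

let connection // reused across invocations of a warm container

exports.handler = (event, context, callback) => {
  // Don't make Lambda wait for the open connection before returning a response
  context.callbackWaitsForEmptyEventLoop = false

  if (!connection) {
    // Cold start (or the previous connection was lost): connect once
    connection = mysql.createConnection({
      host: process.env.DB_HOST,
      user: process.env.DB_USER,
      password: process.env.DB_PASSWORD,
      database: process.env.DB_NAME
    })
  }

  connection.query('SELECT 1 AS ok', (err, results) => {
    if (err) {
      connection = null // force a fresh connection on the next invocation
      return callback(err)
    }
    callback(null, results)
  })
}
```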

Continue Reading…

Going Serverless

Design patterns and server architecture change over time as technology changes. Advancements in cloud computing have created unprecedented opportunities for organizations large and small to leverage shared resources to create faster, more reliable application stacks. This allows organizations to better serve their customers with highly available services tailored to the needs of narrower customer segments. With this new level of flexibility and power, organizations must choose how best to utilize these resources to maximize their efforts and provide the iteration capacity to adapt quickly in rapidly changing markets.

Continue Reading…

An Introduction to Serverless Architecture

I’ve been building web applications for nearly 20 years, and the most difficult problem has always been scaling the architecture to support heavy load. With the advent of cloud computing and services like Amazon Web Services and Google Cloud Platform, the cost of scaling has been dramatically reduced, but the same underlying problems of scaling still exist. When dealing with data, you still need to build complex methods of efficiently accessing records. These are challenges fit for cloud engineers, but not for your average group of developers. Several months ago, Amazon Web Services released two new services. These services create a new paradigm that not only makes it easier to build scalable applications in the cloud, but essentially eliminates any server maintenance. It has been coined Serverless Architecture, and it could be the future of cloud computing.

Continue Reading…