How To: Build a Serverless API with Serverless, AWS Lambda and Lambda API

Learn how to build a serverless API using the Serverless framework, AWS Lambda, and the Lambda-API node module.

AWS Lambda and AWS API Gateway have made creating serverless APIs extremely easy. Developers can simply create Lambda functions, configure an API Gateway, and start responding to RESTful endpoint calls. While this all seems pretty straightforward on the surface, there are plenty of pitfalls that can make working with these services frustrating.

There are, for example, lots of confusing and conflicting configurations in API Gateway.  Managing deployments and resources can be tricky, especially when publishing to multiple stages (e.g. dev, staging, prod, etc.). Even structuring your application code and dependencies can be difficult to wrap your head around when working with multiple functions.

In this post I'm going to show you how to setup and deploy a serverless API using the Serverless framework and Lambda API, a lightweight web framework for your serverless applications using AWS Lambda and API Gateway. We'll create some sample routes, handle CORS, and discuss managing authentication. Let's get started.

Requirements

First, we need to install the Serverless framework using our terminal:

sh
$ npm install -g serverless

Next, we need to clone the Serverless API Sample project from GitHub. Navigate to the folder you wish to create the project in and then run:

sh
$ git clone https://github.com/jeremydaly/serverless-api-sample.git

Navigate to the serverless-api-sample folder and install our Node dependencies:

sh
$ npm install

And that's it. Now we're ready to start working on our API.

The serverless.yml file

Let's open the serverless.yml file. I've set this up to be very basic. You can learn more about this file and its options in the Serverless framework documentation at serverless.com. I suggest you do that when you get a chance, as this is a very powerful tool.

For now, we just need to configure one thing to get started. On line 8 there is a property named profile. This refers to the name of your local AWS profile. If you only have one account, it is probably named default. If you don't have a local AWS profile set up, you can configure one by following the AWS CLI getting started guide: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
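
For reference, the relevant part of the provider section looks something like the sketch below; the runtime and other options may differ in your copy of the sample project:

yaml
# Provider configuration (sketch - options may differ in the sample project)
provider:
  name: aws
  runtime: nodejs8.10   # use whichever Node.js runtime the sample project specifies
  stage: dev
  profile: <YOUR AWS PROFILE>   # replace with your local AWS profile name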

Once your profile is configured, update the <YOUR AWS PROFILE> placeholder with your profile name. Save that file.

A few other things of note…

There are a few more parts of the serverless.yml file that you should know about. The iamRoleStatements section creates a new role for your Lambda function. Again, I've set this up to be basic. All it does is allow logs to be created and an S3 bucket to be created to store your deployments.
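
For illustration, a statement that lets the function write its logs looks roughly like the sketch below; the actual policy in the sample project may include more (such as permissions around the deployment bucket):

yaml
# Sketch of an IAM role statement allowing CloudWatch Logs access
iamRoleStatements:
  - Effect: Allow
    Action:
      - logs:CreateLogGroup
      - logs:CreateLogStream
      - logs:PutLogEvents
    Resource: "arn:aws:logs:*:*:*"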

The functions section is another important piece. As shown below, this creates a function called serverless-api-sample and names it using the service name and then the deployment stage (more on that later). It has a handler, which specifies which module and which function to route requests to. Finally, it attaches an http event that responds to any HTTP method on a path that starts with v1/. The {proxy+} part of the path is API Gateway's catch-all proxy resource that routes any path, no matter how deep, to the specified resource.

yaml
# Functions
functions:
  serverless-api-sample:
    name: ${self:service}-${self:provider.stage}-serverless-api-sample
    handler: handler.router
    timeout: 30
    events:
      - http:
          path: 'v1/{proxy+}'
          method: any

Understanding the handler function

Open the file named handler.js. This is our handler module that we specified in the serverless.yml function. Let's skip to line 113 first and look at the following:

javascript
module.exports.router = (event, context, callback) => { ... }

Here we are exporting a function called router (as in handler: handler.router from serverless.yml). This is the function that will be called when we route API requests to this Lambda function.

Let's jump back to the top of the script and look at line 12:

javascript
// Require and init API router module
const app = require('lambda-api')({ version: 'v1.0', base: 'v1' })

Here we require lambda-api and instantiate it. This module allows a version number and a base to be set. The base (in this case v1) is used to prefix routes, meaning you don't need to specify the version in every route you create, just the path. We now have an instance of lambda-api in our app variable. Let's create some routes.

Creating Routes

Lambda API is similar to other Node.js web frameworks like Fastify and Express, so if you've used any of those, then this should seem very familiar. Full documentation can be found at https://github.com/jeremydaly/lambda-api.

Creating routes is easy. Call the convenience method for the HTTP method you wish to create the route for and specify the route and a function that receives two arguments. For example, a GET method for /posts would look like this:

javascript
app.get('/posts', (req,res) => { })

Once we've created our route, we can write our code to respond to the request. This is a normal JavaScript function, so you can put your code directly in the function block or pass it off to another module. Just make sure you pass the res variable along. This contains the RESPONSE object that is needed to return the response to the user.

The RESPONSE object (https://github.com/jeremydaly/lambda-api#response) allows you to manipulate and send the response. You can set the status with the .status() method, add headers with the .header() method, and return the contents of the response with the .send() method. There are also convenience methods like .json() and .html() that will add the correct Content-Type headers for you.

The methods are all chainable, so you can call:

javascript
res.header('Content-Type','application/json').status(200).send({ status: 'ok' })

Or with a convenience method:

javascript
res.status(200).json({ status: 'ok' })

You can even respond with an error by calling the .error() method:

javascript
res.error('This is an error')

And if you want to set the error code:

javascript
res.status(404).error('This is a 404 error')

Path parameters and query strings

The REQUEST object automatically parses path parameters and query strings for you. For example:

javascript
app.put('/posts/:post_id', (req,res) => {
  res.status(200).json({ params: req.params })
})

This creates a PUT route with a post_id path parameter. If you PUT something to https://<your-api-endpoint>/v1/posts/1234, then req.params will contain a JavaScript object like this:

javascript
{ "post_id": "1234" }

You can create as many params as you'd like:

javascript
app.put('/posts/:post_id/:foo/:bar', (req,res) => {
  res.status(200).json({ params: req.params })
})

Query strings are automatically parsed into an object and are available using req.query. A PUT to https://<your-api-endpoint>/v1/posts/1234/?test=true&foo=bar using our earlier route would return:

javascript
{ "query": { "test": "true", "foo": "bar" }, "params": { "post_id": "123" } }

Processing the BODY

Most POST and PUT routes will expect some kind of BODY input. Lambda API handles this for you automatically regardless of what you send in the body. If you post JSON, the module will attempt to parse it into a JavaScript object. If it can't be parsed, it will just include the raw string. If you post FORM variables, Lambda API will parse and decode them as long as you send a Content-Type: "application/x-www-form-urlencoded" header. The parsed object is then available via req.body.
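
For example, here's a sketch of a POST route that reads the parsed body (the /posts path and the title field are just illustrative):

javascript
// Hypothetical POST route that echoes back part of the parsed body
app.post('/posts', (req,res) => {
  // req.body is a parsed object for JSON or form-encoded input,
  // or the raw string if it couldn't be parsed
  let title = req.body.title
  res.status(201).json({ created: true, title: title })
})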

Middleware

Middleware allows you to process the request BEFORE it goes to a specific route. This is useful for things like authentication or CORS. Middleware is defined using the .use() method and takes a function as its single argument. This function takes three arguments, the REQUEST and RESPONSE objects and a next function. When called, the next function tells the middleware to move on to the next middleware or to the route if there are no more defined.

If we wanted to perform authorization before every request, we could use:

javascript
// Add Authorization Middleware
app.use((req,res,next) => {
  // Check for Authorization Bearer token
  if (req.auth.type === 'Bearer') {
    // Get the Bearer token value
    let token = req.auth.value
    // Set the token in the request scope
    req.token = token
    // Do some checking here to make sure it is valid (set an auth flag)
    req.auth = true
  }
  // Call next to continue processing
  next()
})

Notice that we check the req.auth parameter. Lambda API will automatically parse several types of authorization schemes and normalize them for you. If this is a "Bearer" token authorization, we can grab the token value using req.auth.value and then do something with it to confirm that the request is authorized. We might look this up in a database or cache and then flag the request as authorized, or throw an error. The REQUEST object is writable, so setting req.auth = true will allow other middleware and routes to access that value. When we are done processing, we call the next() function to move on.
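
Downstream routes can then check that flag. Here's a sketch (the /private path is hypothetical):

javascript
// Hypothetical protected route relying on the auth flag set by our middleware
app.get('/private', (req,res) => {
  if (!req.auth) {
    return res.status(401).error('Unauthorized')
  }
  res.status(200).json({ status: 'ok', auth: req.auth })
})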

CORS

If you are writing an API that will be accessed directly from a web browser, then you'll need to implement Cross-Origin Resource Sharing (CORS). API Gateway has a built-in CORS implementation, but it is a static implementation and requires extra configuration. CORS is nothing more than setting the correct headers when responding to a request from a web browser. Middleware allows you to return the correct CORS headers by simply setting the headers directly or using the res.cors() convenience method:

javascript
// Add CORS Middleware
app.use((req,res,next) => {
  // Add default CORS headers for every request
  res.cors()
  // Call next to continue processing
  next()
})

If you'd like to get even fancier, you can use the referrer information and crosscheck that against a list of approved URLs. Then you could manipulate your Access-Control-Allow-Origin header to only allow certain domains. You can customize the CORS headers by passing in an options object.
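
For example, here's a sketch that customizes the headers via that options object (the option names follow the Lambda API documentation; the values are illustrative):

javascript
// Hypothetical CORS middleware with customized headers
app.use((req,res,next) => {
  res.cors({
    origin: 'https://example.com',
    methods: 'GET, POST, PUT, DELETE, OPTIONS',
    headers: 'Content-Type, Authorization'
  })
  // Call next to continue processing
  next()
})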

Browsers also require a preflight call to an OPTIONS method. This checks to see if the route has the proper CORS headers set. The easiest way to do this is to just set an OPTIONS route with a wildcard. This will create an OPTIONS method for every route you have defined.

javascript
// Default Options for CORS preflight
app.options('/*', (req,res) => {
  res.status(200).json({})
})

Error Handling

Error handling is automatic by default, so calling res.error('Some error occurred') will return a formatted error response. It will also log the error using console.log so it will be accessible in your CloudWatch logs. If you'd like to override errors, you can use Lambda API's Error Handling (https://github.com/jeremydaly/lambda-api#error-handling) feature.

Running our routes

Unlike Fastify or Express.js, Lambda API doesn't respond directly to HTTP requests on a specific port. Instead, it just accepts the event passed through the handler function and processes that. Within our router function we call app.run() with the event, context and callback passed from the handler:

javascript
// Run the request app.run(event,context,callback)

This will process the event, route it correctly, and return the response.

Testing our API locally

Before we deploy our API, we're going to want to test the routes locally. In the sample project I've included five events that replicate what API Gateway could send to your Lambda function. You can test your functions using these with Serverless' invoke local command:

shell
$ sls invoke local -f serverless-api-sample -p test/get_sample.json
$ sls invoke local -f serverless-api-sample -p test/post_sample.json
$ sls invoke local -f serverless-api-sample -p test/put_sample.json
$ sls invoke local -f serverless-api-sample -p test/delete_sample.json
$ sls invoke local -f serverless-api-sample -p test/form_sample.json

The GET sample will return the following from our sample project:

javascript
{ "headers": { "Content-Type": "application/json", "Access-Control-Allow-Origin": "*", "Access-Control-Allow-Methods": "GET, PUT, POST, DELETE, OPTIONS", "Access-Control-Allow-Headers": "Content-Type, Authorization, Content-Length, X-Requested-With" }, "statusCode": 200, "body": { "status": "ok", "version": "v1.0", "auth": true, "body": null, "query": { "qs1": "q1" } } }

Deploying our API

Now we actually want people to be able to call our API. We can deploy to Amazon Web Services with a single Serverless command:

shell
$ sls deploy

This will deploy your service to the default dev stage and return something like this:

shell
Serverless: Stack update finished...
Service Information
service: serverless-api
stage: dev
region: us-east-1
stack: serverless-api-dev
api keys:
  None
endpoints:
  ANY - https://.execute-api.us-east-1.amazonaws.com/dev/v1/{proxy+}
functions:
  serverless-api: serverless-api-dev-serverless-api

And that's it! Now you can GET, POST, PUT, and DELETE to https://.execute-api.us-east-1.amazonaws.com/dev/v1/posts

Deployment Stages

The Serverless framework is very good at handling deployment stages. Having multiple stages lets you create different versions of your API for testing or other purposes. I've configured the sample project to use dev as the default stage, but you can specify other stages using the -s option:

shell
$ sls deploy -s staging

I've also configured the sample project to use the serverless-stage-manager plugin, which allows you to configure a list of allowable stages. This is helpful so that you don't accidentally deploy to a "pord" stage instead of "prod".
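
The plugin is configured in serverless.yml roughly like this (the stage list itself is just an example):

yaml
# Sketch of the serverless-stage-manager configuration
plugins:
  - serverless-stage-manager

custom:
  stages:
    - dev
    - staging
    - prod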

Where do we go from here?

I hope this post gave you the basics needed to create your first (or perhaps a better) serverless API using Lambda and API Gateway. The capabilities are near endless with these powerful services from Amazon Web Services. Combine those with the ease of use of the Serverless framework and Lambda API and you should be able to create some pretty amazing serverless applications.

I would suggest reading up on the Serverless framework at serverless.com.

Also, the documentation for Lambda API can be found on Github: https://github.com/jeremydaly/lambda-api. v0.5 is loaded with features (like binary support, route prefixing, etc.) that cover lots of use cases for your serverless applications.

If you plan on integrating database connections into your Lambda functions, which you will probably need to do at some point, check out How To: Reuse Database Connections in AWS Lambda and How To: Manage RDS Connections from AWS Lambda Serverless Functions.

Another great resource is the Serverless Optimize Plugin. With a little configuration you can optimize your functions so that they only include the necessary dependencies. Check out How To: Optimize the Serverless Optimizer Plugin for more tips. By default, your entire Serverless project is packaged into each Lambda function, which doesn't make a ton of sense. This plugin will fix that and transpile your code if necessary.

Finally, I would suggest reading up on configuring AWS resources via CloudFormation. You can do some pretty amazing things using the resources section of the serverless.yml file. AWS resources can be deployed per stage, which lets you do some very useful things. You most likely don't want to spin up database instances with it, but it's great for creating SQS queues, SNS topics, DynamoDB tables and more.
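
As a sketch, a stage-specific SQS queue defined in the resources section might look like this (the resource key and queue name are hypothetical):

yaml
# Hypothetical resources section creating a stage-specific SQS queue
resources:
  Resources:
    MyQueue:
      Type: AWS::SQS::Queue
      Properties:
        QueueName: ${self:service}-${self:provider.stage}-my-queue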
