Takeaways from the State of Serverless Report

On a recent episode of Serverless Chats, I spoke with Stephen Pinkerton and Darcy Rayner of Datadog to dig into The State of Serverless report, which was released at the end of February 2020. After frequently fielding customer questions on the topic, Datadog examined its own data and customer use cases to see how people were actually using serverless. The report breaks it all down, and it gives Datadog's customers (and serverless users in general) a data-driven look at how others are adopting serverless. I discussed methodology, findings, and key takeaways with Stephen and Darcy, and thought it'd be worthwhile to consolidate and share that insight.

Continue Reading…

Throttling Third-Party API calls with AWS Lambda

In the serverless world, we often get the impression that our applications can scale without limits. With the right design (and enough money), this is theoretically possible. But in reality, many components of our serverless applications DO have limits. Whether these are physical limits, like network throughput or CPU capacity, or soft limits, like AWS Account Limits or third-party API quotas, our serverless applications still need to be able to handle periods of high load. And more importantly, our end users should experience minimal, if any, negative effects when we reach these thresholds.

There are many ways to add resiliency to our serverless applications, but this post is going to focus on dealing specifically with quotas in third-party APIs. We’ll look at how we can use a combination of SQS, CloudWatch Events, and Lambda functions to implement a precisely controlled throttling system. We’ll also discuss how you can implement (almost) guaranteed ordering, state management (for multi-tiered quotas), and how to plan for failure. Let’s get started!
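As a quick preview of the core idea, here's a minimal sketch (not the full implementation from the post): assume a CloudWatch Events (EventBridge) rule invokes a Lambda function once per minute, the queue URL lives in a hypothetical THIRD_PARTY_QUEUE_URL environment variable, and call_third_party_api() stands in for the real API client. Capping the number of messages pulled per invocation is what enforces the quota.

```python
import json
import os

import boto3

sqs = boto3.client("sqs")

# Hypothetical names for illustration -- not from the original post
QUEUE_URL = os.environ["THIRD_PARTY_QUEUE_URL"]
CALLS_PER_MINUTE = 30  # the third-party quota we want to stay under


def call_third_party_api(payload):
    """Placeholder for the real third-party API call."""
    raise NotImplementedError


def handler(event, context):
    # Invoked once per minute by a scheduled CloudWatch Events rule.
    # Processing at most CALLS_PER_MINUTE messages per invocation caps
    # the request rate against the third-party API.
    processed = 0
    while processed < CALLS_PER_MINUTE:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=min(10, CALLS_PER_MINUTE - processed),
        )
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue drained for this cycle
        for msg in messages:
            call_third_party_api(json.loads(msg["Body"]))
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
            )
            processed += 1
    return {"processed": processed}
```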

Continue Reading…

How To: Use SNS and SQS to Distribute and Throttle Events

An extremely useful AWS serverless microservice pattern is to distribute an event to one or more SQS queues using SNS. This gives us the ability to use multiple SQS queues to “buffer” events so that we can throttle queue processing to alleviate pressure on downstream resources. For example, if we have an event that needs to write information to a relational database AND trigger another process that needs to call a third-party API, this pattern would be a great fit.

This is a variation of the Distributed Trigger Pattern, but in this example, the SNS topic AND the SQS queues are contained within a single microservice. It is certainly possible to subscribe other microservices to this SNS topic as well, but we’ll stick with intra-service subscriptions for now. The diagram below represents a high-level view of how we might trigger an SNS topic (API Gateway → Lambda → SNS), with SNS then distributing the message to the SQS queues. Let’s call it the Distributed Queue Pattern.

Distributed Queue Pattern

This post assumes you know the basics of setting up a serverless application, and will focus on just the SNS topic subscriptions, permissions, and implementation best practices. Let’s get started!
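The post itself presumably wires this up in the service's infrastructure template, but purely as an illustration of the moving parts, here's the same subscription and queue-policy setup expressed with boto3 (hypothetical topic ARN and queue URLs). Each queue needs a policy allowing the SNS topic to send to it, and raw message delivery skips the SNS envelope so consumers see the original payload.

```python
import json

import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

# Hypothetical resource identifiers for illustration
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:my-service-events"
QUEUE_URLS = [
    "https://sqs.us-east-1.amazonaws.com/123456789012/db-writer-queue",
    "https://sqs.us-east-1.amazonaws.com/123456789012/api-caller-queue",
]

for queue_url in QUEUE_URLS:
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )
    queue_arn = attrs["Attributes"]["QueueArn"]

    # Allow the SNS topic (and only this topic) to send to the queue
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": queue_arn,
            "Condition": {"ArnEquals": {"aws:SourceArn": TOPIC_ARN}},
        }],
    }
    sqs.set_queue_attributes(
        QueueUrl=queue_url, Attributes={"Policy": json.dumps(policy)}
    )

    # Subscribe the queue to the topic; raw delivery removes the SNS envelope
    sns.subscribe(
        TopicArn=TOPIC_ARN,
        Protocol="sqs",
        Endpoint=queue_arn,
        Attributes={"RawMessageDelivery": "true"},
    )
```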

Continue Reading…

15 Key Takeaways from the Serverless Talk at AWS Startup Day

I love learning about the capabilities of AWS Lambda functions, and typically consume any article or piece of documentation I come across on the subject. When I heard that Chris Munns, Senior Developer Advocate for Serverless at AWS, was going to be speaking at AWS Startup Day in Boston, I was excited. I was able to attend his talk, The Best Practices and Hard Lessons Learned of Serverless Applications, and it was well worth it.

Chris said during his talk that all of the information he presented is available on the AWS Serverless site. However, there is A LOT of information out there, so it was nice to have him consolidate it for us into a 45-minute talk. There was some really insightful information shared, along with lots of great questions. I was aware of many of the topics discussed, but there were several clarifications and explanations (especially around the inner workings of Lambda) that were really helpful. 👍

Continue Reading…

Mixing VPC and Non-VPC Lambda Functions for Higher Performing Microservices

I came across a post in the Serverless forums that asked how to disable the VPC for a single function within a Serverless project. This got me thinking about how other people structure their serverless microservices, so I wanted to throw out some ideas. I often mix my Lambda functions between VPC and non-VPC depending on their use and data requirements. In this post, I'll outline some ways you can structure your Lambda microservices to isolate services, make execution faster, and maybe even save you some money. ⚡️💰
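To illustrate the basic idea: the forum question was about the Serverless Framework, but here's the same structure sketched with the AWS CDK in Python (hypothetical construct and handler names, CDK v2 assumed). One function is attached to the VPC because it needs the relational database, while the other is deliberately left outside it.

```python
from aws_cdk import Stack, aws_ec2 as ec2, aws_lambda as _lambda
from constructs import Construct


class MixedVpcServiceStack(Stack):
    """Hypothetical service: one VPC-attached function, one non-VPC function."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        vpc = ec2.Vpc(self, "ServiceVpc", max_azs=2)

        # Needs to reach a relational database in private subnets,
        # so it runs inside the VPC.
        _lambda.Function(
            self, "DbWriter",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="db_writer.handler",
            code=_lambda.Code.from_asset("src"),
            vpc=vpc,
        )

        # Only talks to DynamoDB and public third-party APIs, so it stays
        # outside the VPC and avoids the extra networking overhead.
        _lambda.Function(
            self, "ApiCaller",
            runtime=_lambda.Runtime.PYTHON_3_12,
            handler="api_caller.handler",
            code=_lambda.Code.from_asset("src"),
        )
```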

Continue Reading…

Serverless Consumers with Lambda and SQS Triggers

On Wednesday, June 27, 2018, Amazon Web Services released SQS triggers for Lambda functions. Those of you who have been building serverless applications with AWS Lambda probably know how big of a deal this is. Until now, the AWS Simple Queue Service (SQS) was generally a pain to deal with for serverless applications. Communicating with SQS is simple and straightforward, but there was no way to automatically consume messages without implementing a series of hacks. In general, these hacks “worked” and were fairly manageable. However, as your services became more complex, dealing with concurrency and managing fan-out made your applications brittle and error-prone. SQS triggers solve all of these problems. 👊
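To give a sense of how simple the consumer side becomes, here's a minimal sketch (hypothetical process() helper, default batch size assumed): Lambda polls the queue for you, hands the function a batch of records, and deletes the batch from the queue when the invocation succeeds.

```python
import json


def process(message_body):
    """Placeholder for your business logic."""
    print("processing:", message_body)


def handler(event, context):
    # With an SQS trigger, Lambda polls the queue for you and delivers a
    # batch of records (up to 10 by default) per invocation.
    for record in event["Records"]:
        process(json.loads(record["body"]))
    # Returning without raising deletes the entire batch from the queue;
    # an unhandled exception makes the whole batch visible again.
    return {"batchSize": len(event["Records"])}
```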

Update September 3, 2020: There are a number of important recommendations available in the AWS Developer Guide for using SQS as a Lambda Trigger: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html.

The most important are:

“To allow your function time to process each batch of records, set the source queue’s visibility timeout to at least 6 times the timeout that you configure on your function. The extra time allows for Lambda to retry if your function execution is throttled while your function is processing a previous batch.”

“To give messages a better chance to be processed before sending them to the dead-letter queue, set the maxReceiveCount on the source queue’s redrive policy to at least 5.”

It is also imperative that you set a reserved concurrency of at least 5 on your processing Lambda function, due to the initial scaling behavior of the SQS poller.
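Putting those recommendations together, here's a rough sketch with boto3 (hypothetical queue, DLQ, and function names, and an assumed 30-second function timeout):

```python
import json

import boto3

sqs = boto3.client("sqs")
lambda_client = boto3.client("lambda")

# Hypothetical names and values for illustration
FUNCTION_NAME = "my-sqs-consumer"
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/source-queue"
DLQ_ARN = "arn:aws:sqs:us-east-1:123456789012:source-queue-dlq"
FUNCTION_TIMEOUT = 30  # seconds, as configured on the Lambda function

# Visibility timeout of at least 6x the function timeout, and a
# maxReceiveCount of at least 5 before messages go to the dead-letter queue.
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={
        "VisibilityTimeout": str(FUNCTION_TIMEOUT * 6),
        "RedrivePolicy": json.dumps({
            "deadLetterTargetArn": DLQ_ARN,
            "maxReceiveCount": "5",
        }),
    },
)

# Reserve at least 5 concurrent executions so the SQS poller's initial
# scaling doesn't throttle the function.
lambda_client.put_function_concurrency(
    FunctionName=FUNCTION_NAME,
    ReservedConcurrentExecutions=5,
)
```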

Update November 19, 2019: AWS announced support for SQS FIFO queues as a Lambda event source (announcement here). FIFO queues guarantee message order, which means messages from the same MessageGroupId are processed by only one concurrent Lambda invocation at a time.

Update December 6, 2018: At some point over the last few months, AWS fixed the issue with the concurrency limits and the redrive policy. See Additional experiments with concurrency and redrive policies below.

Audio Version (please note that this audio version is out of date given the new updates)

Continue Reading…