On a recent episode of Serverless Chats, I spoke with Stephen Pinkerton and Darcy Rayner of Datadog to dig into The State of Serverless report, which was released at the end of February 2020. After frequently fielding customer questions on the topic, Datadog examined its own data and customer use cases to see how people were actually using serverless. The report is a way to break it all down, and it's also an opportunity for Datadog's customers (and serverless users in general) to see, in a data-driven way, how others are using serverless. I discussed methodology, findings, and key takeaways with Stephen and Darcy, and thought it'd be worthwhile to consolidate and share that insight.
Throttling Third-Party API Calls with AWS Lambda
In the serverless world, we often get the impression that our applications can scale without limits. With the right design (and enough money), this is theoretically possible. But in reality, many components of our serverless applications DO have limits. Whether these are physical limits, like network throughput or CPU capacity, or soft limits, like AWS Account Limits or third-party API quotas, our serverless applications still need to be able to handle periods of high load. And more importantly, our end users should experience minimal, if any, negative effects when we reach these thresholds.
There are many ways to add resiliency to our serverless applications, but this post focuses specifically on dealing with quotas in third-party APIs. We'll look at how we can use a combination of SQS, CloudWatch Events, and Lambda functions to implement a precisely controlled throttling system. We'll also discuss how to implement (almost) guaranteed ordering and state management (for multi-tiered quotas), and how to plan for failure. Let's get started!
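To make the throttling piece concrete, here's a minimal sketch (not the full implementation from the post) of a Lambda function that runs on a one-minute CloudWatch Events schedule and drains only as many messages from an SQS queue as our third-party quota allows. The queue URL, the per-minute limit, and the `call_third_party_api()` helper are all hypothetical placeholders.

```python
# Sketch: a scheduled Lambda that processes at most MAX_CALLS_PER_RUN
# SQS messages per invocation to stay under a third-party API quota.
# QUEUE_URL and call_third_party_api() are placeholders for illustration.
import os
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = os.environ["QUEUE_URL"]   # assumed environment variable
MAX_CALLS_PER_RUN = 30                # e.g., a 30-calls-per-minute quota


def call_third_party_api(payload):
    # Placeholder for the real third-party API call
    pass


def handler(event, context):
    calls_made = 0
    while calls_made < MAX_CALLS_PER_RUN:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=min(10, MAX_CALLS_PER_RUN - calls_made),
            WaitTimeSeconds=1,
        )
        messages = resp.get("Messages", [])
        if not messages:
            break  # queue is empty, stop early and wait for the next run
        for msg in messages:
            call_third_party_api(msg["Body"])
            sqs.delete_message(
                QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"]
            )
            calls_made += 1
    return {"processed": calls_made}
```

Because the schedule caps how often the function runs and `MAX_CALLS_PER_RUN` caps how much it does per run, the downstream API never sees more than a fixed number of calls per minute, no matter how fast messages pile up in the queue.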
How To: Use SNS and SQS to Distribute and Throttle Events
An extremely useful AWS serverless microservice pattern is to distribute an event to one or more SQS queues using SNS. This gives us the ability to use multiple SQS queues to “buffer” events so that we can throttle queue processing to alleviate pressure on downstream resources. For example, if we have an event that needs to write information to a relational database AND trigger another process that needs to call a third-party API, this pattern would be a great fit.
This is a variation of the Distributed Trigger Pattern, but in this example, the SNS topic AND the SQS queues are contained within a single microservice. It is certainly possible to subscribe other microservices to this SNS topic as well, but we’ll stick with intra-service subscriptions for now. The diagram below represents a high-level view of how we might trigger an SNS topic (API Gateway → Lambda → SNS), with SNS then distributing the message to the SQS queues. Let’s call it the Distributed Queue Pattern.
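The triggering leg of this pattern (API Gateway → Lambda → SNS) can also be sketched in a few lines of code. The handler below simply publishes the incoming request body to an SNS topic, which then fans the message out to the subscribed SQS queues; the topic ARN and payload shape are assumptions for illustration, not code from the post.

```python
# Sketch of the publish side: API Gateway -> Lambda -> SNS.
# TOPIC_ARN is an assumed environment variable; the payload shape is illustrative.
import json
import os
import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ["TOPIC_ARN"]


def handler(event, context):
    # Take the API Gateway request body and publish it to the SNS topic,
    # which fans the message out to every subscribed SQS queue.
    body = json.loads(event.get("body") or "{}")
    sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(body))
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}
```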

This post assumes you know the basics of setting up a serverless application, and will focus on just the SNS topic subscriptions, permissions, and implementation best practices. Let’s get started!
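As a rough sketch of what those subscriptions and permissions can look like (using boto3 directly rather than any particular framework), the snippet below subscribes an existing SQS queue to an existing SNS topic and attaches a queue policy that lets the topic deliver messages to it. All ARNs and URLs are placeholders.

```python
# Sketch: subscribe an SQS queue to an SNS topic and grant SNS permission
# to deliver messages to it. All ARNs/URLs below are placeholders.
import json
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:my-topic"                 # placeholder
QUEUE_ARN = "arn:aws:sqs:us-east-1:123456789012:my-queue"                 # placeholder
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/my-queue"   # placeholder

# 1) Subscribe the queue to the topic (RawMessageDelivery strips the SNS envelope).
sns.subscribe(
    TopicArn=TOPIC_ARN,
    Protocol="sqs",
    Endpoint=QUEUE_ARN,
    Attributes={"RawMessageDelivery": "true"},
)

# 2) Allow the topic to send messages to the queue.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "sns.amazonaws.com"},
            "Action": "sqs:SendMessage",
            "Resource": QUEUE_ARN,
            "Condition": {"ArnEquals": {"aws:SourceArn": TOPIC_ARN}},
        }
    ],
}
sqs.set_queue_attributes(
    QueueUrl=QUEUE_URL,
    Attributes={"Policy": json.dumps(policy)},
)
```

The same two steps are repeated for each queue subscribed to the topic; the condition on `aws:SourceArn` keeps the queue from accepting messages published by any other topic.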
15 Key Takeaways from the Serverless Talk at AWS Startup Day
I love learning about the capabilities of AWS Lambda functions, and typically consume any article or piece of documentation I come across on the subject. When I heard that Chris Munns, Senior Developer Advocate for Serverless at AWS, was going to be speaking at AWS Startup Day in Boston, I was excited. I was able to attend his talk, The Best Practices and Hard Lessons Learned of Serverless Applications, and it was well worth it.
Chris said during his talk that all of the information he presented is on the AWS Serverless site. However, there is A LOT of information out there, so it was nice to have him consolidate it for us into a 45-minute talk. He shared some really insightful information, and there were lots of great questions. I was aware of many of the topics discussed, but there were several clarifications and explanations (especially around the inner workings of Lambda) that were really helpful. 👍
Mixing VPC and Non-VPC Lambda Functions for Higher Performing Microservices
I came across a post in the Serverless forums that asked how to disable the VPC for a single function within a Serverless project. This got me thinking about how other people structure their serverless microservices, so I wanted to throw out some ideas. I often mix my Lambda functions between VPC and non-VPC depending on their use and data requirements. In this post, I'll outline some ways you can structure your Lambda microservices to isolate services, make execution faster, and maybe even save you some money. ⚡️💰