Throttling Third-Party API calls with AWS Lambda

In the serverless world, we often get the impression that our applications can scale without limits. With the right design (and enough money), this is theoretically possible. But in reality, many components of our serverless applications DO have limits. Whether these are physical limits, like network throughput or CPU capacity, or soft limits, like AWS Account Limits or third-party API quotas, our serverless applications still need to be able to handle periods of high load. And more importantly, our end users should experience minimal, if any, negative effects when we reach these thresholds.

There are many ways to add resiliency to our serverless applications, but this post is going to focus specifically on dealing with quotas in third-party APIs. We’ll look at how we can use a combination of SQS, CloudWatch Events, and Lambda functions to implement a precisely controlled throttling system. We’ll also discuss how to implement (almost) guaranteed ordering, manage state (for multi-tiered quotas), and plan for failure. Let’s get started!
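As a taste of what’s ahead, here’s a minimal sketch (an assumption-laden illustration, not the post’s full implementation) of the polling half of that throttler: a Lambda function on a CloudWatch Events schedule that drains only a handful of messages per run, keeping calls to the third-party API under its quota. The queue URL, batch size, and `callThirdPartyApi` helper are hypothetical placeholders.

```typescript
// A minimal sketch of the polling half of the throttler: a Lambda function
// on a CloudWatch Events schedule (e.g. rate(1 minute)) that drains only a
// handful of messages per run, keeping calls under the third-party quota.
import { SQS } from 'aws-sdk';

const sqs = new SQS();
const QUEUE_URL = process.env.QUEUE_URL!; // hypothetical environment variable
const MAX_CALLS_PER_RUN = 10;             // tune to the third-party quota

// Placeholder for the real third-party API call
const callThirdPartyApi = async (payload: unknown): Promise<void> => {
  // e.g. await axios.post('https://api.example.com/things', payload)
};

export const handler = async (): Promise<void> => {
  const { Messages = [] } = await sqs
    .receiveMessage({
      QueueUrl: QUEUE_URL,
      MaxNumberOfMessages: MAX_CALLS_PER_RUN, // SQS caps this at 10 per call
    })
    .promise();

  for (const msg of Messages) {
    await callThirdPartyApi(JSON.parse(msg.Body!));
    // Only delete after a successful call; failures stay on the queue
    // and get retried on a later scheduled run.
    await sqs
      .deleteMessage({ QueueUrl: QUEUE_URL, ReceiptHandle: msg.ReceiptHandle! })
      .promise();
  }
};
```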

Continue Reading…

How To: Use SNS and SQS to Distribute and Throttle Events

An extremely useful AWS serverless microservice pattern is to distribute an event to one or more SQS queues using SNS. This gives us the ability to use multiple SQS queues to “buffer” events so that we can throttle queue processing to alleviate pressure on downstream resources. For example, if we have an event that needs to write information to a relational database AND trigger another process that needs to call a third-party API, this pattern would be a great fit.

This is a variation of the Distributed Trigger Pattern, but in this example, the SNS topic AND the SQS queues are contained within a single microservice. It is certainly possible to subscribe other microservices to this SNS topic as well, but we’ll stick with intra-service subscriptions for now. The diagram below represents a high-level view of how we might trigger an SNS topic (API Gateway → Lambda → SNS), with SNS then distributing the message to the SQS queues. Let’s call it the Distributed Queue Pattern.

Distributed Queue Pattern
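As a rough preview of that flow, here’s a minimal sketch of the publishing side (API Gateway → Lambda → SNS), assuming the SQS subscriptions and queue policies are already in place. The topic ARN and the `eventType` message attribute are hypothetical placeholders.

```typescript
// A minimal sketch of the publishing side of the Distributed Queue Pattern.
// SNS handles the fan-out to every subscribed SQS queue.
import { SNS } from 'aws-sdk';
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';

const sns = new SNS();

export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  // Publish once; SNS delivers a copy to every subscribed SQS queue
  await sns
    .publish({
      TopicArn: process.env.TOPIC_ARN!, // hypothetical topic ARN
      Message: event.body ?? '{}',
      MessageAttributes: {
        eventType: { DataType: 'String', StringValue: 'order.created' },
      },
    })
    .promise();

  // Respond immediately; the queues buffer the downstream work
  return { statusCode: 202, body: JSON.stringify({ queued: true }) };
};
```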

This post assumes you know the basics of setting up a serverless application, and will focus on just the SNS topic subscriptions, permissions, and implementation best practices. Let’s get started!

Continue Reading…

Stop Calling Everything Serverless!

I’ve been building serverless applications since AWS Lambda went GA in early 2015. I’m not saying that makes me an expert on the subject, but as I’ve watched the ecosystem mature and the community expand, I have formed some opinions about exactly what it means to be “serverless.” I often see tweets or articles that talk about serverless in a way that’s, let’s say, incompatible with my interpretation. This sometimes makes my blood boil, because I believe that “serverless” isn’t a buzzword, and that it actually stands for something important.

I’m sure that many people believe this is just a semantic argument, but I disagree. When we refer to something as being “serverless”, there should be an agreed-upon understanding of not only what that means, but also what it empowers you to do. If we continue to let marketers hijack the term, then it will become a buzzword with no discernible meaning whatsoever. In this post, we’ll look at how some leaders in the serverless space have defined it, I’ll add some thoughts of my own, and then I’ll offer my definition at the end.

Continue Reading…

Serverless Tip: Don’t overpay when waiting on remote API calls

Our serverless applications become a lot more interesting when they interact with third-party APIs like Twilio, SendGrid, Twitter, MailChimp, Stripe, IBM Watson, and others. Most of these APIs respond relatively quickly (within a few hundred milliseconds or so), allowing us to include them in the execution of synchronous workflows (like our own API calls). Sometimes we run these calls asynchronously as background tasks, completely disconnected from any type of front-end user experience.

Regardless of how they’re executed, the Lambda functions calling them need to stay running while they wait for a response. Unfortunately, Step Functions don’t have a way to make HTTP requests and wait for a response. And even if they did, you’d at least have to pay for the cost of the state transition, which can get a bit expensive at scale. This may not seem like a big deal on the surface, but depending on your memory configuration, the cost can really start to add up.
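To put some rough numbers on that, here’s a back-of-the-envelope calculation. The memory size, wait time, and invocation volume are illustrative assumptions; the price is the commonly quoted ~$0.00001667 per GB-second for Lambda.

```typescript
// Back-of-the-envelope math for what "just waiting" on a remote API costs.
const PRICE_PER_GB_SECOND = 0.00001667;

const costOfWaiting = (
  memoryMb: number,           // configured memory
  waitSeconds: number,        // time spent blocked on the remote API
  invocationsPerMonth: number
): number =>
  (memoryMb / 1024) * waitSeconds * invocationsPerMonth * PRICE_PER_GB_SECOND;

// A 1,024 MB function idling 1 second per call, 10M calls a month:
console.log(costOfWaiting(1024, 1, 10_000_000).toFixed(2)); // ≈ $166.70
// The same wait at 128 MB:
console.log(costOfWaiting(128, 1, 10_000_000).toFixed(2));  // ≈ $20.84
```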

In this post we’ll look at the impact of memory configuration on the performance of remote API calls, run a cost analysis, and explore ways to optimize our Lambda functions to minimize cost and execution time when dealing with third-party APIs.

Continue Reading…

re:Capping re:Invent: AWS goes all-in on Serverless

Last week I spent six incredibly exhausting days in Las Vegas at the AWS re:Invent conference. More than 50,000 developers, partners, customers, and cloud enthusiasts came together to experience this annual event that continues to grow year after year. This was my first time attending, and while I wasn’t quite sure what to expect, I left with not just the feeling that I got my money’s worth, but that AWS is doing everything in their power to help customers like me succeed.

There have already been some really good wrap-up posts about the event. Take a look at James Beswick’s What I learned from AWS re:Invent 2018, Paul Swail’s What new use cases do the re:Invent 2018 serverless announcements open up?, and All the Serverless announcements at re:Invent 2018 from the Serverless, Inc. blog. There’s a lot of good analysis in these posts, so rather than simply rehash everything, I figured I’d touch on a few of the announcements that I think really matter. We’ll get to that in a minute, but first I want to point out a few things about Amazon Web Services that I learned this past week.

Continue Reading…

Aurora Serverless Data API: An (updated) First Look

Update June 5, 2019: The Data API team has released another update that adds improvements to the JSON serialization of the responses. Any unused type fields will be removed, which makes the response size more than 80% smaller.

Update June 4, 2019: After playing around with the updated Data API, I found myself writing a few wrappers to handle parameter formation, transaction management, and response formatting. I ended up writing a full-blown client library for it. I call it the “Data API Client”, and it’s available now on GitHub and NPM.

Update May 31, 2019: AWS has released an updated version of the Data API (see here). There have been a number of improvements (especially to the speed, security, and transaction handling). I’ve updated this post to reflect the new changes/improvements.

On Tuesday, November 20, 2018, AWS announced the release of the new Aurora Serverless Data API. This has been a long awaited feature and has been at the top of many a person’s #awswishlist. As you can imagine, there was quite a bit of fanfare over this on Twitter.

Obviously, I too was excited. The prospect of not needing to use VPCs with Lambda functions to access an RDS database is pretty compelling. Think about all those cold start savings. Plus, connection management with serverless and RDBMS has been quite tricky. I even wrote an NPM package to help deal with the max_connections issue and the inevitable zombies 🧟‍♂️ roaming around your RDS cluster. So AWS’s RDS via HTTP seems like the perfect solution, right? Well, not so fast. 😞 (Update May 31, 2019: There have been a ton of improvements, so read the full post.)
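For context, here’s roughly what a query over the Data API looks like with the plain AWS SDK — a minimal sketch with hypothetical cluster and secret ARNs, database, and table — which also shows the kind of boilerplate a wrapper like the Data API Client smooths over.

```typescript
// A minimal sketch of querying Aurora Serverless over HTTP with the updated
// Data API — no VPC configuration or connection pooling required.
import { RDSDataService } from 'aws-sdk';

const data = new RDSDataService();

export const handler = async (): Promise<unknown> => {
  const result = await data
    .executeStatement({
      resourceArn: process.env.CLUSTER_ARN!, // Aurora Serverless cluster ARN
      secretArn: process.env.SECRET_ARN!,    // Secrets Manager credentials ARN
      database: 'mydb',
      sql: 'SELECT id, name FROM users WHERE id = :id',
      parameters: [{ name: 'id', value: { longValue: 1 } }],
    })
    .promise();

  // Records come back as arrays of typed value objects — exactly the kind of
  // response formatting a client wrapper can take care of for you.
  return result.records;
};
```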

Continue Reading…

Takeaways from ServerlessNYC 2018

I had the opportunity to attend ServerlessNYC this week (a ServerlessDays community conference) and had an absolutely amazing time. The conference was really well-organized (thanks Iguazio), the speakers were great, and I was able to have some very interesting (and enlightening) conversations with many attendees and presenters. In this post I’ve summarized some of the key takeaways from the event as well as provided some of my own thoughts.

Note: There were several talks that were focused on a specific product or service. While I found these talks to be very interesting, I didn’t include them in this post. I tried to cover the topics and lessons that can be applied to serverless in general.

Update November 16, 2018: Some videos have been posted, so I’ve provided the links to them.

Continue Reading…

What 15 Minute Lambda Functions Tells Us About the Future of Serverless

Amazon Web Services recently announced that they increased the maximum execution time of Lambda functions from 5 to 15 minutes. In addition to this, they also introduced the new “Applications” menu in the Lambda Console, a tool that aggregates functions, resources, event sources and metrics based on services defined by SAM or CloudFormation templates. With AWS re:Invent just around the corner, I’m sure these announcements are just the tip of the iceberg with regards to AWS’s plans for Lambda and its suite of complementary managed services.

While these may seem like incremental improvements to the casual observer, they actually give us an interesting glimpse into the future of serverless computing. Cloud providers, especially AWS, continue to push the limits of what serverless can and should be. In this post, we’ll discuss why these two announcements represent significant progress toward serverless becoming the dominant force in cloud computing.

Continue Reading…

🚀 Project Update:

Lambda API: v0.8.1 Released

Lambda API v0.8.1 has been released to patch an issue with middleware responses and a bug in the path prefixing options. The release is immediately available via NPM. Read More...

An Introduction to Serverless Microservices

Thinking about microservices, especially their communication patterns, can be a bit of a mind-bending experience for developers. The idea of splitting an application into several (if not hundreds of) independent services can leave even the most experienced developer scratching their head and questioning their choices. Add serverless event-driven architecture into the mix, which eliminates the idea of state between invocations and introduces a new per-function concurrency model that supports near-limitless scaling, and it’s not surprising that many developers find this confusing. 😕 But it doesn’t have to be. 😀

In this post, we’ll outline a few principles of microservices and then discuss how we might implement them using serverless. If you are familiar with microservices and how they communicate, this post should highlight how these patterns are adapted to fit a serverless model. If you’re new to microservices, hopefully you’ll get enough of the basics to start you on your serverless microservices journey. We’ll also touch on the idea of orchestration versus choreography and when one might be a better choice than the other with serverless architectures. I hope you’ll walk away from this realizing both the power of the serverless microservices approach and how simple the basic fundamentals actually are. 👊
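To give a flavor of what choreography looks like in practice, here’s a minimal sketch of two services coordinating through events rather than a central orchestrator. The topic ARN and event shape are hypothetical, and the SNS→SQS subscription is assumed to use the default (non-raw) message envelope.

```typescript
// A minimal sketch of choreography between two serverless microservices:
// one service announces that something happened, another reacts through its
// own SQS-subscribed Lambda — no central coordinator involved.
import { SNS } from 'aws-sdk';
import { SQSEvent } from 'aws-lambda';

const sns = new SNS();

// Ordering service: publish a domain event and move on
export const createOrder = async (order: { id: string; total: number }) => {
  await sns
    .publish({
      TopicArn: process.env.ORDER_EVENTS_TOPIC!, // hypothetical topic ARN
      Message: JSON.stringify({ type: 'ORDER_CREATED', order }),
    })
    .promise();
};

// Billing service: subscribed via its own SQS queue, reacts independently
export const billingHandler = async (event: SQSEvent): Promise<void> => {
  for (const record of event.Records) {
    // Unwrap the SNS envelope that SQS delivers, then the event itself
    const { order } = JSON.parse(JSON.parse(record.body).Message);
    console.log(`charging order ${order.id} for ${order.total}`);
    // ...charge the customer, emit a PAYMENT_PROCESSED event, etc.
  }
};
```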

Continue Reading…