A Tale of Two Teams


It was the best of times, it was the worst of times, it was the age of wisdom, it was the age of foolishness… ~ A Tale of Two Cities by Charles Dickens

There is a revolution happening in the tech world. An emerging paradigm that’s letting development teams focus on business value instead of technical orchestration. It is helping teams create and iterate faster, without worrying about the limits or configurations of an underlying infrastructure. It is enabling the emergence of new tools and services that foster greater developer freedom. Freedom to experiment. Freedom to do more with less. Freedom to immediately create value by publishing their work without the traditional barriers created by operational limits.

Continue Reading…

Serverless Peeps You Need To Follow

In my never-ending quest to consume all things serverless, I often find myself scouring the Interwebs for new and interesting serverless articles, blog posts, videos, and podcasts. There are more and more people doing fascinating things with serverless every day, so finding content is becoming easier and easier. However, this increase in content comes with an increase in noise as well. Cutting through that noise isn’t always easy. 🙉

Great content with valuable insights

I personally love reading articles that introduce new use cases or optimizations for serverless. Stories about companies using serverless in production and how their architectures are set up are also extremely interesting. I’ve been working in the serverless space for several years now, and I’ve come across a number of people who produce and/or share really great content. I’ve put together a list of the people I follow and whose content I regularly enjoy. Hopefully these people will help you learn to love serverless as much as I do. ❤️⚡️

Continue Reading…

How To: Tag Your Lambda Functions for Smarter Serverless Applications

As our serverless applications start to grow in complexity and scope, we often find ourselves publishing dozens if not hundreds of functions to handle our expanding workloads. It’s no secret that serverless development workflows have been a challenge for a lot of organizations. Some best practices are starting to emerge, but many development teams are simply mixing their existing workflows with frameworks like Serverless and AWS SAM to build, test and deploy their serverless applications.

Beyond workflows, another challenge serverless developers encounter as their applications expand is simply keeping all of their functions organized. You may have several functions and resources that make up a microservice contained in its own git repo. Or you might simply put all your functions in a single repository to make sharing common libraries easier. Regardless of how the code is organized locally, much of that structure is lost when all your functions end up in one big, long list in the AWS Lambda console. In this post we’ll look at how we can use AWS’s resource tagging to apply structure to our deployed functions. This not only gives us more insight into our applications, but can also be used to apply Cost-Allocation Tags to our billing reports. 👍
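
As a quick taste of what that looks like (the ARN, region, and tag names below are just examples, not a prescribed scheme), the AWS SDK for Node.js can attach tags to an already-deployed function:

```javascript
// Sketch: tag a deployed function so it can be grouped, filtered, and picked up
// by cost-allocation reports. The ARN, region, and tag values are placeholders.
const AWS = require('aws-sdk')
const lambda = new AWS.Lambda({ region: 'us-east-1' })

lambda.tagResource({
  Resource: 'arn:aws:lambda:us-east-1:123456789012:function:my-service-createUser',
  Tags: {
    Service: 'user-service',
    Stage: 'prod',
    Team: 'platform'
  }
}).promise()
  .then(() => console.log('Tags applied'))
  .catch(console.error)
```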

Continue Reading…

Lambda Warmer: Optimize AWS Lambda Function Cold Starts

At a recent AWS Startup Day event in Boston, MA, Chris Munns, the Senior Developer Advocate for Serverless at AWS, discussed Lambda cold starts and how to mitigate them. According to Chris (although he acknowledged that it is a “hack”), using the CloudWatch Events “ping” method is really the only way to do it right now. He gave a number of really good tips for pre-warming your functions “correctly”:

  • Don’t ping more often than every 5 minutes
  • Invoke the function directly (i.e. don’t use API Gateway to invoke it)
  • Pass in a test payload that can be identified as such
  • Create handler logic that replies accordingly without running the whole function (see the sketch below)
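
Putting those tips together, the handler side of a warming setup might look something like this sketch (the { warmer: true } payload is just an example convention, not an official format):

```javascript
// Sketch: short-circuit warm-up pings before any real work happens.
// The { warmer: true } payload is an example convention sent by a scheduled
// CloudWatch Events rule that invokes this function directly (not via API Gateway).
exports.handler = async (event) => {
  if (event && event.warmer === true) {
    // Identified as a warm-up ping: reply immediately without running the function body
    return { warmed: true }
  }

  // ... normal business logic for real invocations ...
  return { statusCode: 200, body: 'Hello from a (hopefully) warm function' }
}
```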

Continue Reading…

15 Key Takeaways from the Serverless Talk at AWS Startup Day

I love learning about the capabilities of AWS Lambda functions, and typically consume any article or piece of documentation I come across on the subject. When I heard that Chris Munns, Senior Developer Advocate for Serverless at AWS, was going to be speaking at AWS Startup Day in Boston, I was excited. I was able to attend his talk, The Best Practices and Hard Lessons Learned of Serverless Applications, and it was well worth it.

Chris said during his talk that all of the information he presented is on the AWS Serverless site. However, there is A LOT of information out there, so it was nice to have him consolidate it for us into a 45-minute talk. There was some really insightful information shared and lots of great questions. I was aware of many of the topics discussed, but there were several clarifications and explanations (especially around the inner workings of Lambda) that were really helpful. 👍

Continue Reading…

Serverless Consumers with Lambda and SQS Triggers

On Wednesday, June 27, 2018, Amazon Web Services released SQS triggers for Lambda functions. Those of you who have been building serverless applications with AWS Lambda probably know how big of a deal this is. Until now, the AWS Simple Queue Service (SQS) was generally a pain to deal with for serverless applications. Communicating with SQS is simple and straightforward, but there was no way to automatically consume messages without implementing a series of hacks. In general, these hacks “worked” and were fairly manageable. However, as your services became more complex, dealing with concurrency and managing fan-out made your applications brittle and error-prone. SQS triggers solve all of these problems. 👊
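
With a trigger attached, the consumer side collapses to an ordinary handler that works through the batch Lambda hands it (the JSON message bodies below are just an example):

```javascript
// Sketch: a function wired to an SQS trigger. Lambda polls the queue for you
// and hands each invocation a batch of messages in event.Records.
exports.handler = async (event) => {
  for (const record of event.Records) {
    const message = JSON.parse(record.body) // assumes JSON message bodies
    console.log('Processing message', record.messageId, message)
    // ... do the real work for each message here ...
  }
  // Returning without throwing tells Lambda the whole batch succeeded;
  // a thrown error puts the batch back on the queue after the visibility timeout
  return `Processed ${event.Records.length} messages`
}
```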

Update September 3, 2020: There are a number of important recommendations available in the AWS Developer Guide for using SQS as a Lambda Trigger: https://docs.aws.amazon.com/lambda/latest/dg/with-sqs.html.

The most important are:

“To allow your function time to process each batch of records, set the source queue’s visibility timeout to at least 6 times the timeout that you configure on your function. The extra time allows for Lambda to retry if your function execution is throttled while your function is processing a previous batch.”

“To give messages a better chance to be processed before sending them to the dead-letter queue, set the maxReceiveCount on the source queue’s redrive policy to at least 5.”

It is also imperative that you set the reserved concurrency on your processing Lambda function to at least 5, due to the initial scaling behavior of the SQS poller.
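
To make those numbers concrete, here is a rough sketch of applying them with the AWS SDK for Node.js. The queue URL, DLQ ARN, function name, and 30-second timeout are all placeholders, and in practice you would more likely bake these settings into your CloudFormation, SAM, or Serverless templates:

```javascript
// Sketch: apply the recommendations above to an existing queue and function.
// Assumes a Lambda timeout of 30 seconds; all URLs, ARNs, and names are placeholders.
const AWS = require('aws-sdk')
const sqs = new AWS.SQS()
const lambda = new AWS.Lambda()

const FUNCTION_TIMEOUT = 30 // seconds, must match the function's configured timeout

const setup = async () => {
  // Visibility timeout of at least 6x the function timeout,
  // plus a redrive policy with maxReceiveCount of at least 5
  await sqs.setQueueAttributes({
    QueueUrl: 'https://sqs.us-east-1.amazonaws.com/123456789012/my-source-queue',
    Attributes: {
      VisibilityTimeout: String(FUNCTION_TIMEOUT * 6),
      RedrivePolicy: JSON.stringify({
        deadLetterTargetArn: 'arn:aws:sqs:us-east-1:123456789012:my-dead-letter-queue',
        maxReceiveCount: 5
      })
    }
  }).promise()

  // Reserved concurrency of at least 5 to match the SQS poller's initial scaling
  await lambda.putFunctionConcurrency({
    FunctionName: 'my-queue-consumer',
    ReservedConcurrentExecutions: 5
  }).promise()
}

setup().catch(console.error)
```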

Update November 19, 2019: AWS announced support for SQS FIFO queues as a Lambda event source (announcement here). FIFO queues guarantee message order, which means only one Lambda function is invoked per MessageGroupId.

Update December 6, 2018: At some point over the last few months AWS fixed the issue with the concurrency limits and the redrive policy. See Additional experiments with concurrency and redrive policies below.

Audio Version (please note that this audio version is out of date given the new updates)

Continue Reading…

Event Injection: Protecting your Serverless Applications

Updated January 25, 2019: This post was updated based on feedback from the community.

The cloud providers’ shared responsibility model extends much further with serverless offerings, but application security is still the developer’s responsibility. Many traditional web applications are front-ended with WAFs (web application firewalls), RASPs (runtime application self-protection), EPPs (endpoint protection platforms) and WSGs (web security gateways) that inspect incoming and outgoing traffic. These extra layers of protection can save developers from themselves when they make common programming mistakes that would otherwise leave their applications vulnerable. However, if you’re invoking serverless functions from sources other than API Gateway, you no longer have the protection of a WAF.
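
Without that outer layer, validation has to happen inside the handler itself. As a rough illustration (the event shape and the userId field here are hypothetical), strict allow-list checks on incoming values go a long way:

```javascript
// Sketch: allow-list validation inside the handler, since no WAF sits in front
// of non-HTTP event sources. The event shape and the userId field are hypothetical.
exports.handler = async (event) => {
  const userId = event.userId

  // Only accept the exact format we expect (a UUID here) before the value goes anywhere else
  const UUID = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i
  if (typeof userId !== 'string' || !UUID.test(userId)) {
    throw new Error('Invalid userId in event payload')
  }

  // From here the value is safe to pass into parameterized queries, etc.
  return { userId }
}
```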

Continue Reading…

Solving the Cold Start Problem

Dear AWS Lambda Team,

I have a serious problem: I love AWS Lambda! In fact, I love it so much that I’ve pretty much gone all in on this whole #serverless thing. I use Lambda for almost everything now. I use it to build backend data processing pipelines, distribute long-running tasks, and respond to API requests. Heck, I even built an Alexa app just for fun. I found myself building so many RESTful APIs using Lambda and API Gateway that I went ahead and created the open source Lambda API web framework to allow users to more efficiently route and respond to API Gateway requests.

Serverless technologies, like Lambda, have revolutionized how developers think about building applications. Abstracting away the underlying compute layer and replacing it with on-demand, near-infinitely scalable function containers is brilliant. As we would say out here in Boston, “you guys are wicked smaht.” But I think you missed something very important. In your efforts to conform to the “pay only for the compute time you consume” promise of serverless, you inadvertently handicapped the service. My biggest complaint, and the number one objection I hear from most of the “serverless-is-not-ready-for-primetime” naysayers, is Cold Starts.

Continue Reading…

How To: Stub “.promise()” in AWS-SDK Node.js

Since AWS released support for Node v8.10 in Lambda, I was able to refactor Lambda API to use async/await instead of Bluebird promises. The code is not only much cleaner now, but I was able to remove a lot of unnecessary overhead as well. As part of the refactoring, I decided to use AWS-SDK’s native promise implementation by appending .promise() to the end of an S3 getObject call. This works perfectly in production and the code is super compact and simple.
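
The call is essentially the stock getObject with .promise() appended; a minimal sketch (the bucket, key, and JSON parsing are placeholders rather than the original code) looks something like this:

```javascript
// Sketch of the pattern described above: aws-sdk's native .promise() with async/await.
// The bucket name, key, and JSON parsing are placeholders, not the original code.
const AWS = require('aws-sdk')
const s3 = new AWS.S3()

const loadObject = async () => {
  const result = await s3.getObject({
    Bucket: 'my-example-bucket',
    Key: 'config.json'
  }).promise()
  return JSON.parse(result.Body.toString())
}
```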

The issue came with stubbing the call using Sinon.js. With the old promise method, I was using promisifyAll() to wrap new AWS.S3() and then stubbing the getObjectAsync method. If you’re not familiar with stubbing AWS services, read my post: How To: Stub AWS Services in Lambda Functions using Serverless, Sinon.JS and Promises.
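
For context, one common way to stub the native promise flavor (not necessarily the exact approach from the post linked above) is to have the stub return an object that exposes a promise() method:

```javascript
// Sketch: stub getObject so it returns an object exposing a promise() method,
// mimicking the shape of the AWS.Request the real SDK call returns.
const sinon = require('sinon')
const AWS = require('aws-sdk')

const stub = sinon.stub(AWS.S3.prototype, 'getObject').returns({
  promise: () => Promise.resolve({ Body: Buffer.from('{"hello":"world"}') })
})

// ... exercise the code under test, assert against stub.calledOnce, etc. ...

stub.restore()
```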

Continue Reading…

How To: Manage RDS Connections from AWS Lambda Serverless Functions

Someone asked a great question on my How To: Reuse Database Connections in AWS Lambda post about how to end the unused connections left over by expired Lambda functions:

I’m playing around with AWS lambda and connections to an RDS database and am finding that for the containers that are not reused the connection remains. I found before that sometimes the connections would just die eventually. I was wondering, is there some way to manage and/or end the connections without needing to wait for them to end on their own? The main issue I’m worried about is that these unused connections would remain for an excessive amount of time and prevent new connections that will actually be used from being made due to the limit on the number of connections.

🧟‍♂️ Zombie RDS connections leftover on container expiration can become a problem when you start to reach a high number of concurrent Lambda executions. My guess is that this is why AWS is launching Aurora Serverless, to deal with relational databases at scale. At the time of this writing it is still in preview mode.

Update September 2, 2018: I wrote an NPM module that manages MySQL connections for you in serverless environments. Check it out here.

Update August 9, 2018: Aurora Serverless is now Generally Available!

Overall, I’ve found that Lambda is pretty good about closing database connections when the container expires, but even if it does so reliably, that still doesn’t solve the MAX CONNECTIONS problem. Here are several strategies that I’ve used to deal with this issue.
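
As a baseline, the basic reuse pattern referenced above (declaring the connection outside the handler so warm containers can reuse it) looks roughly like this sketch; the mysql package and the environment variable names are assumptions:

```javascript
// Sketch: declare the connection outside the handler so warm containers reuse it.
// The 'mysql' package and the environment variable names are assumptions.
const mysql = require('mysql')

let connection = mysql.createConnection({
  host: process.env.DB_HOST,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  database: process.env.DB_NAME
})

exports.handler = async (event) => {
  // Reuses the existing connection on warm invocations instead of opening a new one
  const rows = await new Promise((resolve, reject) => {
    connection.query('SELECT 1 AS ok', (err, results) => (err ? reject(err) : resolve(results)))
  })
  return rows
}
```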

Continue Reading…