The Storage First Pattern

The Storage First Pattern allows you to reliably capture data from incoming API requests without needing a Lambda function to parse, process, transform and save the data. Under the right circumstances, this pattern can reduce latency, save money, and minimize bugs.

Interactive Reference Architecture

The Storage First pattern is useful when your application doesn’t require a lot of data transformation on incoming API requests. Rather than attaching API Gateway to a Lambda function that has to parse, process, transform, and save data, we can bypass the Lambda function by using a “service integration” that will send the data directly to an AWS service, like SQS. This reduces the latency of our API calls, saves money by removing the need to run a processing Lambda function, and makes our application more reliable because we are not introducing additional code.

In our example above, we’re using an SQS queue and then processing data off of that queue using a Lambda function subscription. There are plenty of other services that can be written to directly, including DynamoDB, Kinesis, and EventBridge. To the best of my knowledge, Eric Johnson from AWS coined the term “Storage First” to indicate that we want to ensure that we save a user’s raw data before we attempt to run any processing on it. That way, if downstream services or processing fails, we always have a copy of the original request. He explains the process in his post Building a serverless URL shortener app without AWS Lambda.

The incoming data can be transformed and verified using VTL templates, but the more complexity you introduce, the more likely you are to create issues with edge cases. This is an incredibly useful pattern for high velocity workloads like webhooks and clickstream data because it provides low latency and high reliability. Additional processing can be done asynchronously, allowing you to add resiliency to your application if downstream systems are unavailable.
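To make the service integration concrete, here is a minimal AWS CDK (TypeScript) sketch of an API Gateway method that writes directly to SQS. The construct names, the single POST route, and the bare-bones VTL mapping template are assumptions for illustration; they are intentionally simpler than what a production deployment (or the examples in the GitHub repo) would include.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as iam from 'aws-cdk-lib/aws-iam';
import * as sqs from 'aws-cdk-lib/aws-sqs';

export class StorageFirstStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Queue that captures the raw request payloads ("storage first")
    const queue = new sqs.Queue(this, 'IngestQueue');

    // Role that API Gateway assumes to call sqs:SendMessage directly
    const integrationRole = new iam.Role(this, 'ApiGatewayToSqsRole', {
      assumedBy: new iam.ServicePrincipal('apigateway.amazonaws.com'),
    });
    queue.grantSendMessages(integrationRole);

    // Direct service integration: a small VTL template URL-encodes the raw
    // request body into an SQS SendMessage call (no Lambda in the request path)
    const sqsIntegration = new apigw.AwsIntegration({
      service: 'sqs',
      path: `${this.account}/${queue.queueName}`,
      integrationHttpMethod: 'POST',
      options: {
        credentialsRole: integrationRole,
        requestParameters: {
          'integration.request.header.Content-Type': "'application/x-www-form-urlencoded'",
        },
        requestTemplates: {
          'application/json': 'Action=SendMessage&MessageBody=$util.urlEncode($input.body)',
        },
        integrationResponses: [{ statusCode: '200' }],
      },
    });

    const api = new apigw.RestApi(this, 'StorageFirstApi');
    api.root.addMethod('POST', sqsIntegration, {
      methodResponses: [{ statusCode: '200' }],
    });
  }
}
```

A Lambda function subscribed to the queue would then handle any parsing and processing asynchronously, exactly as described above.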

Deploy this Pattern

Below are the basic configurations for deploying this pattern using different frameworks and platforms. Additional configuration for your environment will be necessary. The source files and additional examples are available in the GitHub repo.


The Circuit Breaker

The Circuit Breaker pattern keeps track of the number of failed (or slow) API calls by using a cache to share the status across multiple Lambda functions. This allows you to perform load shedding when downstream services become unavailable.

Interactive Reference Architecture

The Circuit Breaker pattern keeps track of the number of failed (or slow) API calls by using a cache to share the status across multiple Lambda functions. In this example, we’re using a DynamoDB table so that we can avoid using a VPC. If you were in a VPC already, ElastiCache would be a good alternative.

Here’s how it works. When the number of failures reaches a certain threshold, we “open” the circuit and send errors back to the calling client immediately without even trying to call the API. After a short period of time, we “half open” the circuit, sending just a few requests through to see if the API is finally responding correctly. All other requests receive an error. If the sample requests are successful, we “close” the circuit and start letting all traffic through. However, if some or all of those requests fail, the circuit stays “open”, and the process repeats with some algorithm for increasing the timeout between “half open” retry attempts.
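To make those state transitions concrete, here is a minimal TypeScript sketch of how a Lambda function might track the circuit in DynamoDB. The table name, item shape, thresholds, and helper names are all hypothetical; this is not the reference implementation, just one simplified way to express the closed/open/half-open logic.

```typescript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand, UpdateCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = process.env.CIRCUIT_TABLE ?? 'circuit-breaker'; // assumed table name
const FAILURE_THRESHOLD = 5;    // failures before the circuit opens
const OPEN_TIMEOUT_MS = 30_000; // wait this long before letting a trial request through

type CircuitState = 'CLOSED' | 'OPEN' | 'HALF_OPEN';

// Read the shared failure count and decide which state the circuit is in
export async function getState(service: string): Promise<CircuitState> {
  const { Item } = await ddb.send(new GetCommand({ TableName: TABLE, Key: { service } }));
  if (!Item || Item.failures < FAILURE_THRESHOLD) return 'CLOSED';
  // Circuit is open; once the timeout has elapsed, allow a trial ("half open") request
  return Date.now() - Item.openedAt > OPEN_TIMEOUT_MS ? 'HALF_OPEN' : 'OPEN';
}

// Increment the failure count; openedAt records when failures began accumulating
export async function recordFailure(service: string): Promise<void> {
  await ddb.send(new UpdateCommand({
    TableName: TABLE,
    Key: { service },
    UpdateExpression: 'ADD failures :one SET openedAt = if_not_exists(openedAt, :now)',
    ExpressionAttributeValues: { ':one': 1, ':now': Date.now() },
  }));
}

// A successful call (including a half-open trial) closes the circuit again
export async function recordSuccess(service: string): Promise<void> {
  await ddb.send(new UpdateCommand({
    TableName: TABLE,
    Key: { service },
    UpdateExpression: 'SET failures = :zero REMOVE openedAt',
    ExpressionAttributeValues: { ':zero': 0 },
  }));
}
```

A caller would check getState() first, return an error immediately when the circuit is OPEN, attempt the downstream request when it is CLOSED or HALF_OPEN, and then record the result so every concurrent Lambda invocation sees the same status. A fuller version would also limit how many half-open trials run at once and increase the timeout between retry attempts, as described above.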

This is an incredibly powerful (and cost saving) pattern for any type of synchronous request to an API or downstream system. You are accumulating charges whenever a Lambda function is running and waiting for another task to complete. Allowing your systems to self-identify issues like this, provide incremental backoff, and then self-heal when the service comes back online, adds a tremendous amount of resiliency to your applications.

Deploy this Pattern

Below are the basic configurations for deploying this pattern using different frameworks and platforms. Additional configuration for your environment will be necessary. The source files and additional examples are available in the GitHub repo.

🚀 Project Update:

Data API Client: v1.1 Released

Bug fixes and feature updates including support for native JavaScript dates (thanks @cklam2), support for non-specific database queries, and deprecation of the HTTP keepAlive workaround in favor of the native SDK support. Read More...

Announcing the Serverless Reference Architectures Project

Serverless gives us the power to focus on delivering value to our customers without worrying about the maintenance and operations of the underlying compute resources. Cloud providers (like AWS) also give us a huge number of managed services that we can stitch together to create incredibly powerful and massively scalable serverless microservices.

Almost 2 years ago now, I wrote a post on Serverless Microservice Patterns for AWS that became a popular reference for newbies and serverless veterans alike. The capabilities of serverless have changed dramatically since then, opening up a ton of new patterns and possibilities. Today I’m announcing the Serverless Reference Architectures Project. This project is intended to capture, share, explore, and debate the patterns and practices being used in serverless production applications today.

Continue Reading…

The Simple Web Service

A basic pattern for creating a serverless API or web service. This example uses DynamoDB as the database because it scales nicely with the high concurrency capabilities of AWS Lambda.

Interactive Reference Architecture

This is the most basic of patterns you’re likely to see with serverless applications. The Simple Web Service fronts a Lambda function with an API Gateway. I’ve shown DynamoDB as the database here because it scales nicely with the high concurrency capabilities of Lambda.
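As a sketch of what the function behind the API might look like, here is a minimal TypeScript handler that reads an item from DynamoDB. The table name, key schema, and the GET /items/{id} route are assumptions for illustration only.

```typescript
import { APIGatewayProxyEvent, APIGatewayProxyResult } from 'aws-lambda';
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));
const TABLE = process.env.TABLE_NAME ?? 'items'; // assumed table name

// Handles GET /items/{id} from API Gateway and returns the matching DynamoDB item
export const handler = async (
  event: APIGatewayProxyEvent
): Promise<APIGatewayProxyResult> => {
  const id = event.pathParameters?.id;
  if (!id) {
    return { statusCode: 400, body: JSON.stringify({ error: 'Missing id' }) };
  }

  const { Item } = await ddb.send(new GetCommand({ TableName: TABLE, Key: { id } }));
  if (!Item) {
    return { statusCode: 404, body: JSON.stringify({ error: 'Not found' }) };
  }

  return { statusCode: 200, body: JSON.stringify(Item) };
};
```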

Deploy this Pattern

Below are the basic configurations for deploying this pattern using different frameworks and platforms. Additional configuration for your environment will be necessary. The source files and additional examples are available in the GitHub repo.

  • Are you a CDK Guru?
    Would you like to contribute patterns to the community?
    Check out the Github repo!

The Scalable Webhook

A simple pattern for handling high-velocity or unpredictable workloads while mitigating downstream pressure.

Interactive Reference Architecture

If you’re building a webhook, the traffic can often be unpredictable. This is fine for Lambda, but if you’re using a “less-scalable” backend like RDS, you might just run into some bottlenecks. There are ways to manage this, but because Lambda supports SQS triggers, we can throttle our workloads by queuing the requests and then using a throttled (low concurrency) Lambda function to work through our queue. Under most circumstances, your throughput should be near real-time. If there is some heavy load for a period of time, you might experience some small delays as the throttled Lambda chews through the messages.

You’ll also want to handle failed messages using a Dead Letter Queue (DLQ). The SQS Poller will adjust its polling frequency based on your Lambda function’s concurrency. You’ll need to configure your redrive policies to appropriately handle failed messages.
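Here is a minimal AWS CDK (TypeScript) sketch of that setup: a queue with a redrive policy pointing at a DLQ, and a consumer Lambda whose reserved concurrency throttles how quickly messages are worked off. The concurrency limit, maxReceiveCount, timeouts, and asset path are placeholder values, not recommendations.

```typescript
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as lambda from 'aws-cdk-lib/aws-lambda';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { SqsEventSource } from 'aws-cdk-lib/aws-lambda-event-sources';

export class ScalableWebhookStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Failed messages land here after maxReceiveCount attempts (redrive policy)
    const dlq = new sqs.Queue(this, 'WebhookDLQ');

    const queue = new sqs.Queue(this, 'WebhookQueue', {
      visibilityTimeout: Duration.seconds(60),
      deadLetterQueue: { queue: dlq, maxReceiveCount: 5 },
    });

    // Throttled worker: low reserved concurrency protects the "less-scalable" backend
    const worker = new lambda.Function(this, 'WebhookWorker', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('src/worker'), // hypothetical path
      reservedConcurrentExecutions: 5,
    });

    worker.addEventSource(new SqsEventSource(queue, { batchSize: 10 }));
  }
}
```

Because the poller can attempt invocations faster than the throttled function absorbs them, the visibility timeout and maxReceiveCount are what keep throttled (but otherwise healthy) messages from being redriven to the DLQ prematurely.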

Deploy this Pattern

Below are the basic configurations for deploying this pattern using different frameworks and platforms. Additional configuration for your environment will be necessary. The source files and additional examples are available in the GitHub repo.

The Strangler Pattern

This pattern lets you route requests to your legacy APIs, while allowing you to direct specific routes to new serverless services as you add them.

Interactive Reference Architecture

The Strangler is another popular pattern that lets you incrementally replace pieces of an application with new or updated services. Typically you would create some sort of a “Strangler Facade” to route your requests, but API Gateway can actually do this for us using “AWS Service Integrations” and “HTTP Integrations”. For example, an existing API (front-ended by an Elastic Load Balancer) can be routed through API Gateway using an “HTTP” integration. You can have all requests default to your legacy API, and then direct specific routes to new serverless services as you add them.
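As a rough AWS CDK (TypeScript) sketch, the routing might look like the following: a greedy proxy resource forwards everything to the legacy load balancer by default, while a specific /orders route is carved out to a new Lambda-backed service. The hostname, route, and construct names are hypothetical.

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import { Construct } from 'constructs';
import * as apigw from 'aws-cdk-lib/aws-apigateway';
import * as lambda from 'aws-cdk-lib/aws-lambda';

export class StranglerStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    const api = new apigw.RestApi(this, 'StranglerApi');

    // Default route: HTTP proxy integration that forwards all requests to the
    // legacy API behind an Elastic Load Balancer (hypothetical hostname)
    const legacy = new apigw.HttpIntegration('http://legacy-alb.example.com/{proxy}', {
      httpMethod: 'ANY',
      options: {
        requestParameters: {
          'integration.request.path.proxy': 'method.request.path.proxy',
        },
      },
    });

    api.root.addProxy({
      defaultIntegration: legacy,
      defaultMethodOptions: {
        requestParameters: { 'method.request.path.proxy': true },
      },
    });

    // Strangled route: /orders is now served by a new serverless service
    const ordersFn = new lambda.Function(this, 'OrdersFn', {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: 'index.handler',
      code: lambda.Code.fromAsset('services/orders'), // hypothetical path
    });
    api.root.addResource('orders').addMethod('ANY', new apigw.LambdaIntegration(ordersFn));
  }
}
```

Because API Gateway matches specific resources before the greedy {proxy+} path, each new route you add quietly takes traffic away from the legacy API without any changes to the clients.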

Deploy this Pattern

Below are the basic configurations for deploying this pattern using different frameworks and platforms. Additional configuration for your environment will be necessary. The source files and additional examples are available in the GitHub repo.

Takeaways from Programming AWS Lambda by Mike Roberts and John Chapin

Recently, Symphonia co-founders Mike Roberts and John Chapin wrote a book called Programming AWS Lambda: Build and Deploy Serverless Applications with Java. I personally abandoned Java long ago, but I knew full well that anything written by Mike and John was sure to be great. So despite the title (and my past war stories of working with Java), I picked up the book and gave it a read. I discovered that it’s not really a book about Java, but a book about building serverless applications with the examples in Java. Sure, there are a few very Java specific things (which every Java developer probably needs to read), but overall, this book offers some great insight into serverless from two experts in the field.

I had the chance to catch up with Mike on a recent episode of Serverless Chats. We discussed the book, how John and Mike got started with serverless (by building Java Lambda functions, of course), and some of the best practices people need to think about when building serverless applications. It was a great conversation (which you can watch/listen to here), but it was also jam-packed with information, so I thought I’d highlight some of the important takeaways.

Continue Reading…

Making the Case for Serverless Use Cases

For quite some time, there was a running joke that “serverless” was just for converting images to thumbnails. That’s still a great use case for serverless, of course, but since AWS released Lambda in 2014, serverless has definitely come a long way. Even still, newcomers to the space often don’t realize just how many use cases there are for serverless. I spoke with Gareth McCumskey, a Solutions Architect at Serverless Inc, on a recent two part episode (part 1 and part 2) of Serverless Chats, and we discussed nine very applicable use cases that I thought I’d share with you here.

Continue Reading…

12 Important Lessons from The DynamoDB Book

Fellow serverless advocate and AWS Data Hero Alex DeBrie recently released The DynamoDB Book, which ventures way beyond the basics of DynamoDB but still offers an approachable and useful resource for developers of any experience level. I had the opportunity to read the book and then speak with Alex about it on Serverless Chats. We discussed several really important lessons from the book that every DynamoDB practitioner needs to know. Here are twelve of my favorites, in no particular order.

Continue Reading…