Event Injection: A New Serverless Attack Vector

As more developers and companies adopt serverless architecture, the likelihood of hackers exploiting these applications increases dramatically. The shared responsibility model of cloud providers extends much further with serverless offerings, but application security is still the developer’s responsibility. There has been a lot of hype about #NoOps with serverless environments 🤥, which is simply not true 😡. Many traditional applications are fronted with WAFs (web application firewalls), RASPs (runtime application self-protection), EPPs (endpoint protection platforms) and WSGs (web security gateways) that inspect incoming and outgoing traffic. These extra layers of protection can save developers from themselves when they make common programming mistakes that would otherwise leave their applications vulnerable. With serverless, these all go away. 😳

Serverless makes it easy to deploy a function to the cloud without thinking about the infrastructure it’s running on. While certainly convenient, this leaves many developers with a false sense of security. By relying too heavily on the cloud provider, and not coding defensively, developers can significantly weaken their overall security posture. As with any type of software, there are myriad possible attacks against serverless infrastructures. However, unlike traditional web applications, serverless architectures are “event-driven”. This means they can be triggered by a number of different sources with multiple formats and encodings, rendering WAFs useless and opening up a completely new attack vector. 🤯

Where does event data come from? 🤔

Common web application exploits include SQL injection, code injection and cross-site scripting (XSS) attacks. WAFs are usually pretty good at detecting and defending against these types of attacks, but unless you have a regional endpoint with API Gateway, a WAF isn’t really an option with serverless. I’d like to think most of us assume there isn’t a WAF and apply some basic application-layer input sanitization. But even the best programmers are prone to overlook less common data patterns like local or remote file inclusion (LFI/RFI) attacks.
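For illustration, here’s a minimal allow-list check that would catch LFI/RFI-style payloads before they go anywhere near the file system. The pattern and helper name are my own, not from any framework; tune the character class to whatever your application actually accepts:

```python
import re

# Allow-list validation: accept only plain file names made of word
# characters, spaces, dots, parentheses, and hyphens. Anything with
# path separators, "..", or a URL scheme fails the match outright.
SAFE_NAME = re.compile(r"^[\w][\w .()-]{0,254}$")

def is_safe_filename(name: str) -> bool:
    return bool(SAFE_NAME.match(name)) and ".." not in name
```

Allow-listing what you expect is far more robust than trying to block-list every malicious pattern.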

Above we’re assuming the input is coming from a web application. In this case, we (or a WAF) would be inspecting just the request body and the URL parameters. Still a lot to think about, but what about data from other triggers? At the time of this writing, there are 19 supported event types that can directly trigger an AWS Lambda function:

  • Amazon S3
  • Amazon DynamoDB
  • Amazon Kinesis Data Streams
  • Amazon Simple Notification Service
  • Amazon Simple Email Service
  • Amazon Cognito
  • AWS CloudFormation
  • Amazon CloudWatch Logs
  • Amazon CloudWatch Events
  • AWS CodeCommit
  • Scheduled Events
  • AWS Config
  • Amazon Alexa
  • Amazon Lex
  • Amazon API Gateway
  • AWS IoT Button
  • Amazon CloudFront
  • Amazon Kinesis Data Firehose
  • Amazon Simple Queue Service (added June 27, 2018)

This list doesn’t even include invoking functions on demand from your own code or polling data from other message brokers yourself. Take a look at the sample event data from these sources and you’ll see that they all vary in format and complexity. That’s a lot of places where malicious, user-supplied data could sneak into our apps. AWS isn’t alone when it comes to supporting events either. Google Cloud Functions supports four types of triggers and Microsoft Azure Functions supports at least nine.

If we look closer at the data attribute for CloudWatch Logs and Kinesis Data Streams, we’ll see that the records are Base64 encoded, and in the CloudWatch Logs case, gzip-compressed as well.

This data has to be decoded, unzipped, and then inspected to make sure it’s safe to use. Even if we had the luxury of a WAF, it would have no idea how to deal with this type of input.
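A minimal sketch of that unwrapping step in Python (the decode_awslogs name is mine; the awslogs.data envelope is the documented CloudWatch Logs event shape):

```python
import base64
import gzip
import json

def decode_awslogs(event):
    # CloudWatch Logs delivers its payload as event["awslogs"]["data"]:
    # a Base64 string wrapping a gzip-compressed JSON document.
    compressed = base64.b64decode(event["awslogs"]["data"])
    payload = json.loads(gzip.decompress(compressed))
    # Only now do we have log messages we can inspect -- and they still
    # count as untrusted input, since anyone able to write to the log
    # group controls their content.
    return payload
```

A WAF never sees this; the unwrapping, and any inspection after it, happens inside your function.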

How are these attacks possible? 🤷‍♂️

I too was a bit skeptical when I first heard of this type of attack. How would user-supplied data even make its way into these other types of events? The answer is frighteningly simple. Ory Segal, CTO & Co-Founder of PureSec, gave a talk at Serverless Days TLV and presented a few examples. I highly suggest you watch the entire 22-minute talk, as it gives a great overview. There is an obvious case of XSS with an API Gateway request, but also less obvious cases where he demonstrates how trusting seemingly harmless input can get it executed as shell commands, to devastating effect.

I thought these examples were interesting, but I was curious to dig a bit deeper and see if I could exploit something even more obscure. I thought about a case where we would be tempted to implicitly trust the input. This led me to S3 file names. I created a sample bucket and attached a trigger that fires off a Lambda function, then uploaded a text file and inspected the event data the function received.
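Assuming a standard S3 put event, the payload the function receives looks roughly like this (trimmed heavily; the bucket and object names are illustrative):

```python
# Trimmed sketch of an S3 put event -- real events carry many more
# fields (eventTime, requestParameters, ownerIdentity, and so on).
event = {
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "my-sample-bucket"},
                # The object key is URL encoded, with "+" for spaces.
                "object": {"key": "test+file+%281%29.txt", "size": 1024},
            },
        }
    ]
}
```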

The event conveniently gives me a key with the name of the file I uploaded. This key is URL encoded, so it has to be decoded before use.
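A sketch of that decoding step using Python’s standard library; unquote_plus reverses both the percent escapes and the “+”-for-space convention S3 uses in one call (the key value here is illustrative):

```python
from urllib.parse import unquote_plus

# S3 URL-encodes object keys, using "+" for spaces, so unquote_plus
# undoes both transformations at once.
encoded_key = "test+file+%281%29.txt"   # illustrative key from the event
filename = unquote_plus(encoded_key)
print(filename)  # test file (1).txt
```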

Then I record that file name in my database.
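The recording step amounted to string concatenation along these lines (the table and column names are illustrative, but the anti-pattern is exactly this):

```python
# DO NOT do this: the file name is pasted straight into the SQL text,
# so anything the name contains becomes part of the statement.
filename = "test file (1).txt"
query = f'INSERT INTO uploads (filename) VALUES ("{filename}");'
```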

Concatenating the file name straight into a SQL INSERT is prone to SQL injection and is obviously bad practice. But let’s be honest, we’ve all done this. Maybe it was before we knew what SQL injection was, or because we were just quickly coding a prototype. Either way, this kind of mistake creeps into code ALL THE TIME. We might even think, “well, this is just a file name from an S3 event, it should be a trusted value.” If only that were true.

I created a new file named “1"); DELETE FROM uploads; --” and uploaded it to my S3 bucket, then watched the event come through with that name URL encoded in the object key.

And as soon as I decoded the key and dropped it into my query, the INSERT picked up a second, destructive statement.
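Tracing it through in the same sketch style (table and column names illustrative): S3 hands the function the key URL encoded, unquote_plus restores the quotes and semicolons, and the concatenated query now carries a DELETE.

```python
from urllib.parse import unquote_plus

# The malicious file name as it arrives in the S3 event, URL encoded:
encoded_key = "1%22%29%3B+DELETE+FROM+uploads%3B+--"
filename = unquote_plus(encoded_key)   # -> 1"); DELETE FROM uploads; --
query = f'INSERT INTO uploads (filename) VALUES ("{filename}");'
# query is now:
#   INSERT INTO uploads (filename) VALUES ("1"); DELETE FROM uploads; --");
# On a connection that allows multiple statements, the DELETE runs and
# the trailing ") is neutralized by the "--" comment.
```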

Uh oh! 🤦🏻‍♂️ Hope you have a recent database backup. While this example might be a bit contrived, you can see that this type of exploit is entirely plausible, especially for the developer who isn’t overly security conscious (read: most developers). This only scratches the surface of creative ways that hackers can inject data into SNS topics, CloudWatch logs, Amazon Alexa commands and more.

Trust no one, including yourself 👽

Application Security 101 tells us we should ALWAYS sanitize user input. With serverless, we have to think about even more sources that unfiltered user input can come from. This means we need to be hypervigilant about sanitizing every piece of data that comes into our functions, even if we think we can trust the source. You’ve been warned! 😬
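For the SQL case specifically, the fix is old news but worth restating: let the driver bind values instead of building strings. A sketch using Python’s built-in sqlite3 (placeholder syntax varies by driver; sqlite3 uses "?", MySQL drivers typically use "%s"):

```python
import sqlite3

def record_upload(conn, filename):
    # Parameterized query: the driver treats filename strictly as data,
    # so quotes and semicolons in the name can't alter the statement.
    conn.execute("INSERT INTO uploads (filename) VALUES (?)", (filename,))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE uploads (filename TEXT)")
record_upload(conn, '1"); DELETE FROM uploads; --')
# The malicious name is stored as an ordinary string; the table survives.
```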

If you’d like to learn more about serverless security, read my post Securing Serverless: A Newbie’s Guide. Also be sure to check out 10 Things You Need To Know When Building Serverless Applications to jumpstart your serverless knowledge.


Did you like this post? 👍  Do you want more? 🙌  Follow me on Twitter or check out some of the projects I’m working on. You can sign up for my WEEKLY newsletter too. You'll get links to my new posts (like this one), industry happenings, project updates and much more! 📪

