Aurora Serverless Data API: An (updated) First Look

Update June 5, 2019: The Data API team has released another update that improves the JSON serialization of the responses. Any unused type fields are now removed, which can make responses more than 80% smaller.

Update June 4, 2019: After playing around with the updated Data API, I found myself writing a few wrappers to handle parameter formation, transaction management, and response formatting. I ended up writing a full-blown client library for it. I call it the “Data API Client”, and it’s available now on GitHub and NPM.

Update May 31, 2019: AWS has released an updated version of the Data API (see here). There have been a number of improvements (especially to the speed, security, and transaction handling). I’ve updated this post to reflect the new changes/improvements.

On Tuesday, November 20, 2018, AWS announced the release of the new Aurora Serverless Data API. This has been a long awaited feature and has been at the top of many a person’s #awswishlist. As you can imagine, there was quite a bit of fanfare over this on Twitter.

Obviously, I too was excited. The prospect of not needing to use VPCs with Lambda functions to access an RDS database is pretty compelling. Think about all those cold start savings. Plus, connection management with serverless and RDBMS has been quite tricky. I even wrote an NPM package to help deal with the max_connections issue and the inevitable zombies 🧟‍♂️ roaming around your RDS cluster. So AWS’s RDS via HTTP seems like the perfect solution, right? Well, not so fast. 😞 (Update May 31, 2019: There have been a ton of improvements, so read the full post.)

Update May 31, 2019: The Data API is now GA (see here)

Before I go any further, I want to make sure that I clarify a few things. First, the Data API is (still) in BETA, so this is definitely not the final product. Second, AWS has a great track record with things like this, so I’m betting that this will get a heck of a lot better before it reaches GA. And finally, I am a huge AWS fan (and I think they know that 😉), but this first version is really rough, so I’m not going to pull any punches here. I can see this being a complete #gamechanger once they iron out the kinks, so they can definitely benefit from constructive feedback from the community.

Enabling the Data API

Before we dive into performance (honestly I want to avoid telling you about it for as long as possible), let’s look at the setup. There is an AWS guide that tells you how to switch on the Data API. The guide is pretty thin right now, so I’ll give you the basics. Update May 31, 2019: The documentation has gotten much better and can walk you through the setup.

NOTE: The Data API only works with Aurora Serverless clusters AND it is only available in the us-east-1 region. Update May 31, 2019: The Data API is available in the US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), and Europe (Ireland) Regions. If you haven’t played around with Aurora Serverless yet, check out my post Aurora Serverless: The Good, the Bad and the Scalable.

Enable Data API in Network & Security settings of your cluster

You need to modify your Aurora Serverless cluster by clicking “ACTIONS” and then “Modify Cluster”. Just check the Data API box in the Network & Security section and you’re good to go. Remember that your Aurora Serverless cluster still runs in a VPC, even though you don’t need to run your Lambdas in a VPC to access it via the Data API.

Next you need to set up a secret in the Secrets Manager. 🤫 This is actually quite straightforward. User name, password, encryption key (the default is probably fine for you), and select the database you want to access with the secret.

Enter database credentials and select database to access

Next we give it a name. This is important because the name will be part of the ARN when we set up permissions later. You can give it a description as well so you don’t forget what this secret is about when you look at it in a few weeks.

Give your secret a name and add a description

You can then configure your rotation settings, if you want, and then you review and create your secret. Then you can click on your newly created secret and grab the ARN; we’re gonna need that next.

Click on your secret to get the ARN.

Using the AWS SDK’s RDSDataService

If you were looking for this in the AWS guide for the Data API, you probably won’t find it. As of this writing it isn’t in there. Update May 31, 2019: It’s sort of in the documentation now. You may have stumbled across the SDK docs and found Class: AWS.RDSDataService. But there are a bunch of options that bury the lede. Right now we just care about executeSql(). Here is the snippet from the docs:
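The original embedded snippet is gone, but a minimal sketch of the (now deprecated) executeSql() parameters looks like this. The ARNs below are placeholders, and the SDK call is commented out since it needs a live cluster:

```javascript
// Sketch of the deprecated executeSql() parameters (placeholder ARNs):
const params = {
  awsSecretStoreArn: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret', // secret ARN
  dbClusterOrInstanceArn: 'arn:aws:rds:us-east-1:123456789012:cluster:my-cluster',    // cluster ARN
  sqlStatements: 'SELECT * FROM myTable', // a plain string (more on that later)
  database: 'myDB' // optional database name
}

// const RDS = new AWS.RDSDataService() // requires the aws-sdk
// const result = await RDS.executeSql(params).promise()
```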


Update May 31, 2019: executeSql() has been deprecated in favor of executeStatement() and batchExecuteStatement(). The snippet now looks like this:
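Here’s a minimal sketch of the newer executeStatement() parameters (the ARNs and table name are placeholders, and the SDK call itself is commented out):

```javascript
// Sketch of executeStatement() parameters (placeholder ARNs):
const params = {
  secretArn: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret',
  resourceArn: 'arn:aws:rds:us-east-1:123456789012:cluster:my-cluster',
  database: 'myDB',
  sql: 'SELECT * FROM myTable WHERE id = :id',
  parameters: [{ name: 'id', value: { longValue: 2 } }], // named parameters
  includeResultMetadata: true // include columnMetadata in the response
}

// const result = await new AWS.RDSDataService().executeStatement(params).promise()
```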

As you can see, lots of new parameters have been added here. More detail to follow.

Easy enough. Looks like we’re going to need the ARN from the secret we just created, the ARN of our Aurora Serverless cluster (you can find that in the cluster details), and then our SQL statements. Before we take this out for a drive, we need some data to query. I set up a database with a single table and started by inserting five rows:

A really, really, really complex MySQL table 😂

Now let’s set up a simple Lambda function and give it a try.


Updated May 31, 2019: Using the executeStatement() method instead here. Notice that you can now use named parameters, which is pretty cool.
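A sketch of what that Lambda function might look like. The ARNs and table are placeholders, and the actual handler (which needs the aws-sdk and a live cluster) is shown in comments so the parameter-building is clear:

```javascript
// Sketch of a Lambda handler using executeStatement() with a named
// parameter. ARNs and the table name are placeholders.
const buildParams = (id) => ({
  secretArn: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret',
  resourceArn: 'arn:aws:rds:us-east-1:123456789012:cluster:my-cluster',
  database: 'myDB',
  sql: 'SELECT * FROM myTable WHERE id = :id',
  parameters: [{ name: 'id', value: { longValue: id } }]
})

// The handler itself (requires the aws-sdk):
// const AWS = require('aws-sdk')
// const RDS = new AWS.RDSDataService()
// module.exports.handler = async (event) => {
//   try {
//     return await RDS.executeStatement(buildParams(event.id)).promise()
//   } catch (e) { console.error(e); throw e }
// }
```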

Notice above that I’m using async/await, so I’m taking advantage of the .promise() method that AWS provides to promisify their services. You can use callbacks if you really want to. But I wouldn’t. 😉

I used the Serverless framework to publish this to AWS, but those are just details. Let’s give this a try and see what happens when we publish and run it.

Error: AWS.RDSDataService is not a constructor

Hmm, looks like the version of aws-sdk running on Lambdas in us-east-1 isn’t the latest version. Let’s repackage our function with the aws-sdk and try it again.

Updated May 31, 2019: AWS.RDSDataService is included in the SDK available from Lambda. So no need to include the dependency anymore.

AccessDeniedException: User is not authorized to perform: rds-data:ExecuteSql

Okay, looks like we need some IAM permissions. Let’s add those:

Update May 31, 2019: According to the documentation, here are the minimum permissions required to use the Data API. However, from your Lambda function, most are not necessary. I’ve noted them inline as “unnecessary.”
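For reference, a minimal policy statement along those lines might look like the following. This is a sketch: in practice you’d scope Resource to your actual secret and cluster ARNs rather than using a wildcard.

```json
{
  "Effect": "Allow",
  "Action": [
    "rds-data:ExecuteStatement",
    "rds-data:BatchExecuteStatement",
    "rds-data:BeginTransaction",
    "rds-data:CommitTransaction",
    "rds-data:RollbackTransaction",
    "secretsmanager:GetSecretValue"
  ],
  "Resource": "*"
}
```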

NOTE: The permissions below are no longer accurate. Use the ones above. Keeping this here for the history.

And try it again.

BadRequestException: User is not authorized to perform: secretsmanager:GetSecretValue

Crap. Okay, we need some more IAM permissions:

Okay, now we should be good to go! Let’s run it again.

Update May 31, 2019: Here is the new response; compare it to the old one below.
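The original screenshot is gone, but a trimmed sketch of the executeStatement() response shape looks like this (the rows and column names are illustrative, not the actual table):

```javascript
// Illustrative sketch of the executeStatement() response shape:
const response = {
  columnMetadata: [
    { name: 'id', typeName: 'INT' /* ...plus many other fields per column */ },
    { name: 'name', typeName: 'VARCHAR' }
  ],
  numberOfRecordsUpdated: 0,
  records: [
    // one array per row, one typed-value object per column
    [{ longValue: 1 }, { stringValue: 'Marcia' }],
    [{ longValue: 2 }, { stringValue: 'Peter' }]
  ]
}
```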

Update June 5, 2019: This response is even smaller now that they removed the unused type fields from the response.

Old response:

What the?

This is querying our tiny little table of 5 rows with 5 columns with very little data. The Data API returned this monstrous JSON response that was over 11 KBs and took 228ms to run! Update May 31, 2019: The response is smaller now (just over 7 KBs), but still quite heavy. Update June 5, 2019: The JSON serialization of the response has been improved, so empty type fields are removed from the response! The performance has improved a bit as well. I’m now getting an average of sub 100ms when querying just a few rows. Okay, we can’t put this off any longer. Let’s look at the performance.

Data API Performance Metrics

Alright, so let’s just rip off the bandaid here. The performance is not good great (updated). I added a few more rows to the table and ran a comparison of the Data API versus a MySQL connection (using the mysql package) in a VPC. Here’s what I got:

Selecting 175 rows via the DATA API versus a MySQL connection in a VPC

Update May 31, 2019: The original results above still apply for the native MySQL connection. For the DATA API results, the returned data is much smaller (though still more than necessary), but the speed improvement is HUGE! There seems to be a “cold start” like penalty for the first query, but subsequent queries are near or sub 100ms. Much better than before.

Update June 5, 2019: The smaller response sizes help reduce the latency a bit, but if you enable HTTP Keep Alive, subsequent queries (and even reused connections) have some really good latencies.

Using HTTP Keep Alive with the Data API

This was the same query run against the same cluster and table. You can see that the Data API took 204ms ~100ms (updated) to query and return 175 rows versus the MySQL connection that only took 5ms. Something to note here is that the 5ms was after the function was warm and the initial MySQL connection was established. Obviously VPCs have a higher cold start time, so the first query will be a bit slower (about 150ms plus the cold start). After that though, the speed is lightning fast. However, the Data API averaged over 200ms approximately 100ms (updated) every time it ran, warm or not.

Also, the size of the responses was radically different. The Data API returned another monster JSON blob weighing in at 152.5 KBs 75 KBs (updated). The direct MySQL connection returned essentially the same data in under 30 KBs. I’m sure there will be optimizations in the future that will allow us to reduce the size of this response. There is a bunch of stuff in there that we don’t need.

Update May 31, 2019: There is a new parameter called includeResultMetadata that allows you to suppress the columnMetadata field in the response. A couple of things to note here: 1) this doesn’t really reduce the response size much, and 2) the results themselves are not mapped to column names, so without the columnMetadata, you need to map your columns to array index numbers. So, not super useful in my opinion. 🤷‍♂️
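If you do keep the metadata, mapping rows to column names only takes a few lines. A sketch, assuming the documented response shape where each value object has a single populated type field:

```javascript
// Sketch: map Data API records to plain objects using columnMetadata.
// (A real implementation would also special-case { isNull: true } values.)
const mapResults = ({ columnMetadata, records }) =>
  records.map(row => Object.fromEntries(
    row.map((field, i) => [
      columnMetadata[i].name,
      Object.values(field)[0] // the one populated type field (e.g. stringValue)
    ])
  ))

// e.g. mapResults({ columnMetadata: [{ name: 'id' }, { name: 'name' }],
//                   records: [[{ longValue: 5 }, { stringValue: 'Bobby' }]] })
```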

Next I tried some INSERTs. I ran 10 simple INSERT statements with just one column of data. Once again I compared the Data API to a MySQL connection in a VPC.

10 serial INSERTs via the DATA API versus a MySQL Connection in a VPC

Update May 31, 2019: I reran these tests and the performance was improved quite a bit. Each insert took anywhere from 60ms to 150ms.

Once again, the direct MySQL connection blew away the Data API in terms of response times. Same caveats as before with these being warm functions, so the MySQL connection was already established and being reused. But as you can see, each Data API call suffers from the same high latency as the one before it. Which means, as is to be expected with an HTTP endpoint, that there is no performance gain by reusing the same const RDS = new AWS.RDSDataService() instance.

Another thing to note, however, is that the performance wasn’t impacted by more complex queries or larger data retrievals. The underlying MySQL engine performs as expected, so if AWS can fix this roundtrip latency issue, then hopefully all these performance issues go away.

Update May 31, 2019: There is a new method named batchExecuteStatement() that lets you use parameterSets to reuse the same query for multiple inserts. The benefit here is that you only need to make ONE HTTP call. I ran a few tests (inserting 10 records), and as expected, the performance was similar to a single query, around 100ms. Still not great, but definitely more efficient than sending them in one by one.
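The parameterSets field is just an array of parameter arrays, one per execution. A sketch with placeholder ARNs (the SDK call is commented out):

```javascript
// Sketch of batchExecuteStatement(): many inserts, ONE HTTP call.
const params = {
  secretArn: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret',
  resourceArn: 'arn:aws:rds:us-east-1:123456789012:cluster:my-cluster',
  database: 'myDB',
  sql: 'INSERT INTO myTable (name) VALUES(:name)',
  parameterSets: [
    [{ name: 'name', value: { stringValue: 'Marcia' } }],
    [{ name: 'name', value: { stringValue: 'Peter' } }]
    // ...one parameter array per row to insert
  ]
}

// await new AWS.RDSDataService().batchExecuteStatement(params).promise()
```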

Tweaking the knobs

I’ve been waiting for HTTP endpoints for RDS for a loooong time, so I didn’t want to let a few bad experiments ruin my day. I decided to turn some knobs to see if that would affect the performance. First thing I did was turn up the memory on my function. Higher memory equals higher CPU and throughput (I think), so I gave that a try. Unfortunately, there was no impact.

Then I thought, maybe if I beef up the database cluster, it might shave off some milliseconds. This was obviously wishful thinking, but I tried it anyway. I cranked up my cluster to 64 ACUs and… nothing. 😖 Oh well, it was worth a shot.

Transaction Support! (added May 31, 2019)

The new version of the Data API has added Transaction Support! The process is a little more complicated than working with native MySQL, but it appears to work really well. From the docs:

You need to call beginTransaction and wait for a transactionId to be returned. It looks like this:

You then include this with all of your executeStatement calls and then call either commitTransaction() or rollbackTransaction().  Then you get a nice little message like this:
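Putting the whole flow together looks roughly like this. The ARNs and the transaction ID are placeholders, and the SDK calls are commented since they need a live cluster:

```javascript
// Sketch of the Data API transaction lifecycle.
const base = {
  secretArn: 'arn:aws:secretsmanager:us-east-1:123456789012:secret:mysecret',
  resourceArn: 'arn:aws:rds:us-east-1:123456789012:cluster:my-cluster'
}

// 1. Start the transaction and capture the returned ID:
// const { transactionId } = await RDS.beginTransaction({ ...base, database: 'myDB' }).promise()
const transactionId = 'AQC5SRDemoToken' // placeholder for the opaque token

// 2. Every statement in the transaction carries the transactionId:
const stmt = {
  ...base,
  database: 'myDB',
  transactionId,
  sql: 'UPDATE myTable SET name = :name WHERE id = :id',
  parameters: [
    { name: 'name', value: { stringValue: 'Carol' } },
    { name: 'id', value: { longValue: 1 } }
  ]
}
// await RDS.executeStatement(stmt).promise()

// 3. Commit (or roll back), which returns a transactionStatus message:
// const { transactionStatus } =
//   await RDS.commitTransaction({ secretArn: base.secretArn, resourceArn: base.resourceArn, transactionId }).promise()
// (rollbackTransaction() takes the same parameters)
```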

As you’ve probably surmised, transactions handled this way require multiple HTTP calls in order to complete, which based on the latency, probably means you don’t want to run them for synchronous operations. Here are some other helpful tips from the documentation:

  • A transaction can run for a maximum of 24 hours. A transaction is terminated and rolled back automatically after 24 hours.
  • A transaction times out if there are no calls that use its transaction ID in three minutes. If a transaction times out before it’s committed, it’s rolled back automatically.
  • If you don’t specify a transaction ID, changes that result from the call are committed automatically.

Also note that, by default, calls timeout and are terminated in one minute if it’s not finished processing. You can use the continueAfterTimeout parameter to continue running the SQL statement after the call times out.

What about security?

Update May 31, 2019: Good news! executeStatement and batchExecuteStatement still accept strings, but they error if you include multiple statements. You can also use named parameters now, so be sure to do that so all your values are escaped for you!

So another thing I noticed when I first looked at the docs, is that the sqlStatements parameter expects a string. Yup, a plain old string. Not only that, you can separate MULTIPLE SQL statements with semicolons! Did I mention this parameter only accepts a string? If you’re not sure why this is a big deal, read up on SQL Injection or have a look at my Securing Serverless: A Newbie’s Guide.
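To see why a raw string parameter is scary, consider what naive string interpolation lets through (a contrived example, of course):

```javascript
// Classic SQL injection: user input smuggles in a second statement.
const userInput = '1; DROP TABLE users;' // hypothetical malicious input
const unsafeSql = `SELECT * FROM users WHERE id = ${userInput}`
// => "SELECT * FROM users WHERE id = 1; DROP TABLE users;"
// With multiple semicolon-separated statements allowed, the second one runs too.
```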

Don’t want to listen to me?  Fine, but you should definitely take Ory Segal’s advice. He’s one of those people that knows what he’s talking about.

But seriously, this is a huge security issue if you aren’t properly escaping values. The mysql package he referenced actually disables multiple statements by default because they can be so dangerous. Let’s hope that some additional features are added that will do some of the escaping for us.

Some final thoughts

This thing is still in beta, and it really shows. There is a lot of work to be done, but I have faith that the amazing team at AWS will eventually turn the Data API into pure gold. The latency here seems to be entirely in the overhead of setting up and tearing down the VPC connections behind the scenes. DynamoDB is HTTP-based and has single-digit latency, so I’m guessing that HTTP isn’t the major issue.

Anyway, here are a few of the things that I’d like to see before the Data API goes GA:

  • Increased performance: I’d much rather suffer through a few cold starts now and then to enjoy 5 ms queries than to suffer through 200 ms for every query. Right now, these speeds make it unusable for synchronous use cases. Update May 31, 2019: I still feel this way for synchronous use cases, but there could be some caching improvements to make this viable.
  • Response formatting: I get the value in returning the different data types, but it is overkill for 99% of queries. Besides simplifying that (and getting the JSON size down a bit), optionally returning the column information would be helpful too. I don’t need it most of the time. Update May 31, 2019: There is still way too much data coming over the wire. They need to cut this down.
  • Prepared queries: The current sqlStatements parameter is too dangerous. I know developers should take some responsibility here, but needing another library to escape SQL queries is unnecessary overhead. Some simple features of the mysql package (maybe a new params field that accepts an array and replaces ? in the queries) would go a long way.
    Update May 31, 2019: I doubt I was the only one who suggested this, but it looks like they implemented this with named parameters. Very cool.
  • Disable multiple statements by default: Having the ability to send multiple queries is really powerful (especially over HTTP), but it’s also super dangerous. It would be safer if you needed to expressly enable multiple statement support. Even better, require multiple statements to be sent in as an array.
    Update May 31, 2019: They did this too! Well, without my array idea. 😐
  • IAM Role-based access: The secrets manager thing is okay, but it would be better if we could access Aurora Serverless using just the IAM role. I know that Aurora Serverless doesn’t support that yet, but this would be a helpful addition.

I have to say that I really love this concept. Yes, I was underwhelmed by the initial implementation, but again, it is still very early. When (and I’m confident that it is a when, not an if) the AWS team works through these issues, this will help dramatically with serverless adoption. There are still plenty of use cases for RDBMS, so making it easier to use them is a huge win for serverless.

Finally, since I’ve offered a lot of criticism, I figured I’d end this on a bit of a positive note. The latency is a killer for synchronous applications, BUT, even in its current state, I can see this being extremely useful for asynchronous workflows. If you are running ETLs, for example, firing off some bulk loads into your reporting database without needing to run the Lambdas in a VPC would be quite handy.

Update May 31, 2019: I’m really impressed by the updates that have been made. I do want to reiterate that this isn’t an easy problem to solve, so I think the strides they’ve made are quite good. I’m not sure how connection management works under the hood, so I’ll likely need to experiment with that a bit to measure concurrent connection performance.

What are your thoughts on the new (and improved) Aurora Serverless Data API? I’d love to know what you think. Hit me up on Twitter or share your thoughts in the comments below.


Did you like this post? 👍  Do you want more? 🙌  Follow me on Twitter or check out some of the projects I’m working on.

58 thoughts on “Aurora Serverless Data API: An (updated) First Look”

  1. This seems not to have a transaction model. Without that it will be useful for just a small selection of use cases where an RDBMS makes sense without transactions, instead of using, for example, a NoSQL DB.

    1. I have a few more things I want to experiment with, including transactions and load testing to see how well it handles the underlying connection management. Did you try transactions?

  2. Thanks a lot. I’ve tried Aurora Serverless too and I am facing the same problem using the NodeJS SDK. Could you please tell me how you did the “Let’s repackage our function with the aws-sdk and try it again.” step?

    Thanks in advance

    1. Hi Sven,

      I just ran npm install aws-sdk and installed it as a regular dependency instead of a “dev dependency”. Some frameworks might not package it on deploy, so make sure you check the documentation so that it will be included in your node_modules directory.

      – Jeremy

    2. Yeah, thanks for your answer – that’s what I finally did ;-). I misunderstood your repackaging. Deploying the aws-sdk manually works fine, but of course makes my deployment package much bigger…let’s see how fast the lambda’s sdk in us-east-1 becomes up-to-date.


  3. Thanks for the article/tutorial, you saved me some time. Providing the correct IAM configuration for serverless is also super useful. I was actually thinking this could work for me even with the performance issues, but the lack of being able to run as a transaction is probably more of a problem.

  4. Thanks for the great article. Have been playing with it myself and I couldn’t agree more with the list of things that they have to improve. The response is just too much and performance is pretty low.
    I am still sticking with a connection to the db for now, but I’m building a wrapper around the DB communication and expect to change only that when this is fully ready.
    This is a great feature though! Not having to put my lambdas inside my VPC is such a lifesaver.

  5. Hi Jeremy,

    thanks a lot for your post! I am having trouble because the enable API checkbox is not showing up under Network & Security in Cluster Modify. I first had the Cluster in the wrong region but after migrating the cluster to US East 1 the checkbox is still not showing up. Any ideas what could be the problem?

    Thank you

    1. Hi Tobias,

      Probably a dumb question, but are you sure that you selected “Serverless” as your capacity type when you set up your database cluster? The Data API will only work with Aurora Serverless.

      – Jeremy

  6. Cool write up,

    I’m trying to get it working in Python but when I try and add the permissions for rds-data it tells me rds-data is an unknown service and it doesn’t seem to do anything. Am I missing anything?


  7. Hi Jeremy,

    Found your article when I was searching about the performance of Aurora Serverless. We are looking at using Aurora serverless for a web application in Production. The data is not going to be frequently accessed.
    So, I am worried about the cold start time, if it is going to affect the performance of the application when it is hit for the first time after inactivity.
    Your article was really helpful, I still think we should use Aurora serverless, but I thought it would be good to ask your opinion on performance as you have used it in production.


    1. Hi Madhu,

      You can disable the auto-pause feature, which will keep the cluster running 24/7. This will obviously increase the cost (about $86 per month at 2 ACUs), but will avoid the “cold start” issue you mentioned. I am running Aurora Serverless in production for a few applications and I have been very happy with the performance. There are a few limitations (see here: Aurora Serverless: The Good, the Bad and the Scalable), but it has done its job well for my use cases.

      Hope that helps,

  8. great analysis, Jeremy. When I read about the new Data API, I immediately came to your website hoping to find some update to your older article… and you never failed me. And surprisingly not many people have written about the Data API and it’s potential to serverless ecosystem.. Thanks and looking forward to more updates from AWS and yourself 🙂

  9. Are there any SQL clients that can connect using a HTTP API? Then I would only have to use a single protocol rather than using HTTP for my lambdas and TCP when doing explorative analysis using a typical desktop based client.

  10. This article is literally the best resource on the topic I could find anywhere. It was even updated 1 (!) day after the update in June. Just amazing, it saved me from going crazy 😀

    Very well written and in a way that is applicable for me as a reader 👌👌👌

  11. Hello Jeremy,

    Thanks for your great post, it was very helpful for my first experience with Aurora Serverless.
    But one weird thing is processing the response from Aurora. It has a very annoying structure and it changes between versions too.

    Could you share how you are handling it appropriately in production? And if you know any good library to handle it, let me know.

    1. Hi Anatolii,

      The response from the Data API has to accommodate many different languages (including typed ones), so I think that is why the structure contains the data types. The updated version of the Data API just launched last week, so I don’t know anyone that is using it in production yet. However, I have already built the Data API Client that you can use for Node.js. It handles all the parameterization, response parsing, and transaction management for you. I’m sure others will work on support for additional languages as well.

      Good luck with it!
      – Jeremy

    2. Hi Jeremy,

      Will your data-api-client module work with the latest Aurora Serverless version’s response structure?


  12. Oh Jeremy, you saved my life and made working with Aurora Serverless much more fun.
    This was exactly the wrapper I was looking for.
    Thanks a lot!

  13. Jeremy –

    Great write up. Have you had much experience with Postgres Aurora Serverless? It’s in beta and we are testing it out. Just wondering what you thought about it.


    1. Hi Damian,

      I haven’t, but my understanding is that it works very similarly to the MySQL version. I’d love to hear your feedback on it.


  14. Hi Jeremy,

    Thanks for the great article.

    As you mention, making an HTTP request per SQL statement is a limitation. Besides, I found no documentation on how to perform more complex operations like table locks. Any thoughts on how to work around these issues?

  15. Hi Jeremy,

    Great article, thank you.

    Quick question: I see you have to connect using a secret, which is stored in secrets manager. It appears AWS charge $0.05/10,000 API calls for that. So, $5 per million. Assuming each call to the RDS Data API has to retrieve that secret via such an API call, does that mean each million queries costs $5 (on top of the RDS charges for I/O, connection units, and whatnot)? That seems a hidden cost, if so. Which is going to add up.

    I saw AWS have a cheaper parameter store but presumably that can’t be used 🙁

    1. Hi Greg,

      Sorry for the late response. If you save the RDS client between Lambda calls (make sure the initialized client is in the global scope), then it will reuse the credentials on warm invocations. You will only pay that $0.05/10,000 cost on cold starts. Typically cold starts are about 0.2% of total invocations, so that cost shouldn’t be very high.

      – Jeremy

  16. I’ve set up Aurora Serverless with the Data API using CloudFormation, and by enabling the Data API manually in the AWS console. However, I keep getting this not so helpful response: “BadRequestException: Communications link failure\n\nThe last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.”

    1. Same thing here, when the DB wakes up after being paused (which usually takes around 25sec). The first call returns

      BadRequestException: Communications link failure
      The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.

      After that, all subsequent requests will succeed.

    2. I’m experiencing the same thing here. I’ve set the Aurora instance to pause when inactive after 5 minutes. I have a minimum ACU of 1 and max of 2.

      Every time I make a query when the database is paused I get this error, then subsequent calls work fine.

      BadRequestException: Communications link failure
      The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.

      Here are the logs of the server events

      Sun, 08 Mar 2020 17:10:49 GMT The DB cluster is being resumed.
      Sun, 08 Mar 2020 17:11:15 GMT The DB cluster is resumed.
      Sun, 08 Mar 2020 17:16:25 GMT The DB cluster is being paused.
      Sun, 08 Mar 2020 17:16:32 GMT The DB cluster is paused.

      Looks like it takes about 25 seconds (as Yuriy Husnay mentioned) before it has resumed fully. Are we meant to hold back for 25 seconds before making queries!?

      There must be something I’ve not set up correctly here. Any insights into this would be greatly appreciated.

    3. Sorry for the late reply. If you are using the database in a production environment, you need to uncheck “Pause compute capacity after consecutive minutes of inactivity” under the “Additional scaling configurations”.

  17. F***ing AWS and its limits.

    BadRequestException: Packet for query is too large (XXXXX > 16,384).

    The executeStatement request and response have a limit of 16 kilobytes. The documentation is lying!

    * The response size limit is 1 MB or 1,000 records. If the call returns more than 1 MB of response data or over 1,000 records, the call is terminated.
    * sql – Length Constraints: Maximum length of 65536.

    1. You need to set the HTTP options in the AWS SDK:

      const https = require('https')

      const sslAgent = new https.Agent({
        keepAlive: true,
        maxSockets: 50, // same as aws-sdk
        rejectUnauthorized: true
      })

      AWS.config.update({ httpOptions: { agent: sslAgent } })

  18. Very interesting, I had read your previous post on the subject and thanks for keeping this up to date, it’s very much useful to see how it evolves.

    Do you know if it’s possible to create an Aurora Serverless instance using the Serverless Framework? It’s one good use case to use Aurora from Lambda, and . much easier if there is no manual step. That could even become a Serverless Component.

    It’s still very Beta, but it looks promising indeed. It’s perfect for a dev/testing db because it costs nothing, but for production usage it’s not yet ready.

    1. Sorry for the late reply. Yes, you can create an Aurora Serverless cluster using the Resources section in your serverless.yml file. You would have to configure the CloudFormation, but you could use all the variable magic from the Serverless Framework to set up different stages and what not.

  19. Hey,

    This has been very helpful getting started. I have a question about the result set and one of your update comments.
    I need the column data and was wondering how you dealt with not having it returned.

    ” optionally returning the column information would be helpful too”

    Thank you

    1. Hi Steve,

      Sorry for the late reply. You can get the column data by setting includeResultMetadata to true. It will give you all the column info in a separate object.

      – Jeremy

  20. Thanks for the great article. I have managed to get the Data API working with PostgreSQL and Lambda. Now I am hitting some bumps regarding the max 1,000 records, so I would like to try connecting from Lambda using psycopg2. Do you have any documentation/examples of how you achieved the following with mysql?
    “MySQL connection (using the mysql package) in a VPC.” I would like to achieve the same, but for PostgreSQL.
    Thanks in advance.

    1. @Martin Harmenzon have you succeeded in using psycopg2?

      @Jeremy Daly have you tried the Data API on PostgreSQL lately?
      What about performance improvements? I’m evaluating between MySQL and PostgreSQL on Aurora Serverless.

  21. Thank you for this great article. I appreciate the performance tests you made.
    200ms for a SQL operation is still very long, especially if a lambda which is billed per 100ms is waiting for the response. Most of my lambdas which are making requests to DynamoDB take <100ms to finish. Using the Data API would triple the lambda cost.
    I am considering switching to a SQL database. VPC seems to be the way to go for me, also because lambda cold starts in a VPC have been improved to <1s.

    Also, RDS proxies have been introduced, which are still in preview and only available for "classic RDS", not Aurora Serverless. It promises to be an alternative to the Data API if you are not using Aurora Serverless. It also allows lambdas to access RDS without being in the same VPC. I haven't seen any performance metrics on it yet.

  22. AWSome post Jeremy!
    I’m a newbie building a hobby microproduct entirely serverless. nodejs, lambda, apigateway, Aurora(MySQL)…
    1) I didn’t know about the Data API – will try that right away
    2) I’m using “1 Capacity Unit” for Aurora and the time gap between “being resumed” and “resumed” is anywhere between 28 and 58 seconds. While I don’t care while I’m still playing with code, when I launch my MVP, my users won’t forgive me.
    Here’s where I need some ideas
    1) Is there a way to “programmatically know” that my Aurora database is paused? Even if I did, how do I handle the user experience?
    2) My application MVP won’t even have a login. If it had one, I could make a dummy request to RDS on launch of the login page and hope the user takes some time to enter credentials, captcha,… (just to buy time)

  23. Just a note, to manage errors properly when using Lambda, here:
    catch(e) {

    we have to use:

    catch(e) {
    throw e

  24. Great Article, Jeremy! I was playing around with the Data API and wasn’t able to understand the differences between a MYSQL Connection Performance vs Data API’s Performance. Anyways, Thanks for your professional insight on the Data API.

  25. Creating an Angular website for my father, I am using API Gateway to query my Aurora Serverless MySQL RDS through a Lambda function. Is this the right way to create this application? I am very new to programming and will consider your opinion to be very helpful.

  26. Hi Jeremy,
    Thanks for such a detailed & perfect article. Just one thing I need to confirm: is it possible to pass multiple SQL statements in a single handler function? If it is not, can you provide any idea of how this can be achieved?

    Like currently it is:

    module.exports.fetchStats = async (event, context, callback) => {
    const sql = ‘SELECT * FROM stats WHERE user_id = UNHEX(REPLACE(:id, “-“,””))’;
    try {
    const params = {
    resourceArn: ‘arn:aws:rds****************’,
    secretArn: ‘*********************************’,
    database: ‘dev_db1’,
    continueAfterTimeout: true,
    includeResultMetadata: true,
    parameters: [{ ‘name’: ‘id’, ‘value’: { ‘stringValue’: ${req_id} } }]

    But I need to perform like:

    module.exports.fetchStats = async (event, context, callback) => {
    const sql1 = ‘SELECT * FROM stats WHERE user_id = UNHEX(REPLACE(:id, “-“,””))’;
    const sql2 = ‘SELECT * FROM users’;
    try {
    const params = {
    resourceArn: ‘arn:aws:rds****************’,
    secretArn: ‘*********************************’,
    sql: sql1, sql2,
    database: ‘dev_db1’,
    continueAfterTimeout: true,
    includeResultMetadata: true,
    parameters: [{ ‘name’: ‘id’, ‘value’: { ‘stringValue’: ${req_id} } }]

  27. Does anyone know how to successfully test Aurora Serverless Postgres locally? It would be nice to have local integration tests run instead of having to deploy the code to AWS and test it, which is quite inefficient. I have tried local-data-api but had connectivity issues when using the AWS command line SDK, and there is very little information online to help troubleshoot. I generally program in Node and local-data-api is Python, so it’s not so easy to figure out why connections fail. I wish AWS would bring out local simulators to help build apps offline without the need to deploy every time.
