How To: Manage RDS Connections from AWS Lambda Serverless Functions

Someone asked a great question on my How To: Reuse Database Connections in AWS Lambda post about how to end the unused connections left over by expired Lambda functions:

I’m playing around with AWS lambda and connections to an RDS database and am finding that for the containers that are not reused the connection remains. I found before that sometimes the connections would just die eventually. I was wondering, is there some way to manage and/or end the connections without needing to wait for them to end on their own? The main issue I’m worried about is that these unused connections would remain for an excessive amount of time and prevent new connections that will actually be used from being made due to the limit on the number of connections.

🧟‍♂️ Zombie RDS connections left over after container expiration can become a problem when you start to reach a high number of concurrent Lambda executions. My guess is that this is why AWS is launching Aurora Serverless: to deal with relational databases at scale. At the time of this writing it is still in preview mode.

Update September 2, 2018: I wrote an NPM module that manages MySQL connections for you in serverless environments. Check it out here.

Update August 9, 2018: Aurora Serverless is now Generally Available!

Overall, I’ve found that Lambda is pretty good about closing database connections when the container expires, but even if it does it reliably, it still doesn’t solve the MAX CONNECTIONS problem. Here are several strategies that I’ve used to deal with this issue.

1. Implement a good caching strategy

As your application scales, use something like Redis to cache common data queries. Redis’s maxclients setting (65,000 on ElastiCache) is far greater than the connection limits for RDS. You should also design your application to ONLY create a connection to RDS when you need to. It’s possible that your thousands of concurrent requests might not actually need a database connection if they get data from a cache or are performing another action. Your connect() call should always be INSIDE your handler and only called when your flow requires a connection.
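A minimal sketch of this lazy-connect pattern (the `cache` and `db` clients here are hypothetical stand-ins for your Redis and MySQL clients, injected as arguments so the logic is easy to follow):

```javascript
// Connect to RDS only when the cache misses. `cache` and `db` are
// hypothetical client objects (e.g. a Redis client and a MySQL client)
// injected for illustration.
async function getProduct(id, cache, db) {
  const cached = await cache.get(`product:${id}`);
  if (cached) return JSON.parse(cached); // cache hit: no RDS connection needed

  await db.connect(); // connect INSIDE the handler, only on a miss
  const rows = await db.query('SELECT * FROM products WHERE id = ?', [id]);
  await cache.set(`product:${id}`, JSON.stringify(rows[0]));
  return rows[0];
}
```

On a cache hit the function returns without ever touching RDS, which is exactly what keeps thousands of concurrent executions from each demanding a connection.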

2. Lower your connection timeouts

RDS doesn’t give your root account the SUPER privilege, but you can create “RDS DB Parameter Groups” that allow you to set the wait_timeout and interactive_timeout to values more appropriate for serverless applications (the default is 8 hours). I’ve set these to 10 minutes in certain implementations and it seems to work well. My code is always designed to reconnect if the connection drops, so even if “warm” containers lose their connection, this isn’t an issue.
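Since a lowered wait_timeout means warm containers may find their connection gone, the reconnect logic matters. A minimal sketch of it, assuming a hypothetical `db` client with connect()/query() methods (mysql-style clients report a dropped connection with the PROTOCOL_CONNECTION_LOST error code):

```javascript
// Retry a query exactly once if the server has closed an idle connection
// (e.g. because a lowered wait_timeout expired). `db` is a hypothetical
// client object; real MySQL clients surface the dropped connection as
// an error with code PROTOCOL_CONNECTION_LOST.
async function queryWithReconnect(db, sql, params) {
  try {
    return await db.query(sql, params);
  } catch (err) {
    if (err.code !== 'PROTOCOL_CONNECTION_LOST') throw err; // unrelated error
    await db.connect(); // re-establish the dropped connection
    return db.query(sql, params); // retry exactly once
  }
}
```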

3. Limit concurrent executions

If your database only supports 1,000 concurrent connections, limit your Lambda function’s concurrent executions to something less than that. This isn’t necessarily ideal, since it will impact your users during high-traffic situations, but it will prevent the database from maxing out connections.

4. Limit connections per username

Another strategy is to set a resource limit on the number of connections a given database user can have. MAX_USER_CONNECTIONS can be limited to a number less than the max connections allowed by the RDS instance. Exceeding it will throw an error in your Lambda function, but you can gracefully handle that error with a Too many connections response or something else. This is handy, because you can then…
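A sketch of what that graceful handling might look like. The cap itself is set in SQL, e.g. `ALTER USER 'lambda_user'@'%' WITH MAX_USER_CONNECTIONS 50;` (the username and limit here are made up for illustration), and MySQL reports a breached cap with the ER_USER_LIMIT_REACHED or ER_TOO_MANY_USER_CONNECTIONS error codes:

```javascript
// Turn a per-user connection-limit error into a friendly 503 response
// instead of an unhandled crash. `db` is a hypothetical client object.
async function handler(event, db) {
  try {
    const rows = await db.query('SELECT * FROM orders WHERE user_id = ?', [event.userId]);
    return { statusCode: 200, body: JSON.stringify(rows) };
  } catch (err) {
    // MySQL raises these codes when MAX_USER_CONNECTIONS is exceeded
    if (err.code === 'ER_USER_LIMIT_REACHED' || err.code === 'ER_TOO_MANY_USER_CONNECTIONS') {
      return { statusCode: 503, body: 'Too many connections, please retry' };
    }
    throw err;
  }
}
```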

5. Clear connections with a proxy function

Even without SUPER privilege, RDS does allow SHOW PROCESSLIST, which lets you view all open connections. You could trigger a proxy Lambda function on a Too many connections error (or on a regular schedule) and have it inspect all the open connections. Then it can issue a CALL mysql.rds_kill(thread-ID) on sleeping connections with high time values.
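The selection logic for that proxy function can be kept separate from the SQL. A sketch, assuming the SHOW PROCESSLIST rows come back as objects with Id, Command, and Time fields (as the Node mysql client returns them):

```javascript
// From SHOW PROCESSLIST output, pick the thread IDs of connections that
// have been sleeping longer than `maxIdleSeconds`. The caller would then
// issue CALL mysql.rds_kill(threadId) for each one, e.g. (hypothetical
// `db` client):
//   const rows = await db.query('SHOW PROCESSLIST');
//   for (const id of threadsToKill(rows, 600)) {
//     await db.query('CALL mysql.rds_kill(?)', [id]);
//   }
function threadsToKill(processList, maxIdleSeconds) {
  return processList
    .filter((p) => p.Command === 'Sleep' && p.Time > maxIdleSeconds)
    .map((p) => p.Id);
}
```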

6. Increase max_connections setting

As I mentioned earlier, RDS lets you create “RDS DB Parameter Groups” to adjust your database’s default settings. This can be used to set the max_connections value up to 16,000 connections!  This obviously has its tradeoffs, especially around memory and CPU consumption. If you are performing long-running or CPU intensive queries, make sure you size your instances to appropriately handle the load. There is a reason why the defaults are the defaults. Also, you can’t set this value with Aurora Serverless, so something to think about there.

7. Use the serverless-mysql NPM module

I found myself continually using the same strategies to manage RDS connections, so I finally decided to package them into an NPM module. This module not only handles connection management, but also throws in a bunch of extras, like transaction support, to make working with MySQL in your apps much easier. You can install it by running npm i serverless-mysql. The documentation and code are available on GitHub.
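The core pattern the module implements can be sketched like this (a simplification, with a hypothetical `client` standing in for the module; see the GitHub README for the real API): the client lives outside the handler so warm containers reuse it, and end() runs after every invocation so the connection is released for management instead of being left open.

```javascript
// Declare the client once, OUTSIDE the handler, so warm containers reuse
// it across invocations; call end() at the end of each invocation so the
// connection is released for management rather than left dangling.
function makeHandler(client) {
  return async function handler(event) {
    const results = await client.query('SELECT * FROM widgets WHERE id = ?', [event.id]);
    await client.end(); // release the connection for reuse/management
    return results;
  };
}
```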

The bottom line

Serverless isn’t perfect…yet! 😉 However, appropriate strategies like the ones mentioned above can help work around some of the current limitations. And don’t forget to check out my new NPM module: serverless-mysql.

Do you have other strategies for dealing with RDS connections in your serverless functions? Let me know in the comments.



Did you like this post? 👍  Do you want more? 🙌  Follow me on Twitter or check out some of the projects I’m working on.

36 thoughts on “How To: Manage RDS Connections from AWS Lambda Serverless Functions”

  1. Hi Jeremy, Thanks a lot for sharing this amazing article, it ensured my observations and made me confident about my moves.

    I would like to discuss this point with you: “3. Limit concurrent executions”, what happens if the limit reached? would AWS queue the requests to run the lambda later or we just lost it?

    Best,

  2. How can your #1 suggestion be to use Redis when we are talking about serverless applications? Redis would require you to set up and maintain a server.

    1. Serverless is not about having no servers, it’s about handing off server management to the cloud provider. If you are using a relational database in a serverless application, you’ll need to either maintain that yourself, or use a managed service like RDS from AWS. When I implement caching, I typically use AWS ElastiCache, a managed service that maintains my Redis cluster.

  3. Hi Jeremy,

    Thank you for your post, your site is a great place to increase my knowledge about serverless.

    Do you have any post related as Kinesis ? did you use this service?

    Thank you!

    1. Hi Hugo,

      I’m glad you find the site useful. I don’t have any posts about Kinesis yet, but I do use the service for a number of applications. If I think of a good topic, I’ll be sure to write about it.

      Thanks,
      Jeremy

  4. Good work on that NPM module! A question about transactions though:
    How can I get the result (i.e. the id of the inserted row) of the first query, to use it in the following query?
    I used to do this with custom functions that returned promises:

    beginTransaction()
      .then(() => executeQuery('INSERT INTO table (foo) VALUES(?)', [1]))
      .then((r) => {
        console.log(r.insertId);
        return executeQuery('INSERT INTO table2 (bar) VALUES(?)', [r.insertId]);
      });

    but with your .transaction() it’s not really clear how to do this… pls help!

    1. Hi Dominggus,

      Thank you for pointing this out! I actually didn’t account for this. The queries are all queued, so right now your example wouldn’t work with Serverless MySQL’s transaction method. I will add an issue to GitHub and add this capability in. You can still access the underlying connection by calling the getClient() method, which would allow you to work around it for now.

      Github issue here: https://github.com/jeremydaly/serverless-mysql/issues/1

      Thanks again,
      Jeremy

  5. Hi Jeremy,
    Your post has been very helpful for my lambda implementation. I have a question: Can I use your mysql NPM module for Postgres? My Postgres database is running in an EC2 instance inside a VPC and my lambda runs queries against it. The project is expected to have a huge number of concurrent connections to Postgres via serverless lambda invocation, through an AWS API Gateway.
    Any help/pointer would be very much appreciated.
    Thanks,
    Sanjoy

  6. Hi Jeremy,

    If you are starting new application now, would you consider switching to NoSQL or would you still use RDS?

    I see here a trade-off between handling max_connections number vs learning new way how to work with NoSQL.
    Even your solution serverless-mysql is not really solving the problem; it is just lowering the possibility of a bottleneck because of max_number_of_connections (don’t get me wrong, I don’t think it’s a currently solvable problem since it is a problem in the design; your solution is the best we can get ATM). When it comes to NoSQL, DynamoDB sounds like it can handle everything; all we need to do is create a table per finite request that we will use to get data and do the partitioning right.

    Best,
    Ivan Petrovic

    1. Hi Ivan,

      It depends on the problem space. I use DynamoDB in most circumstances, but certain applications can still benefit from relational databases, so I wouldn’t rule them out entirely. DynamoDB is a very powerful tool, and if you understand your access patterns, then I think the vast majority of applications could use NoSQL as the primary datastore. If you have other requirements, such as full-text search, then data will need to be replicated to something like Elasticsearch. Serverless applications that have more complex access patterns or additional “relational” requirements, can also use the CQRS pattern. This way you’re optimizing for scalable command processing, and can then replicate data to provide additional querying interfaces.

      Hope that helps,
      Jeremy

  7. Have a PostgreSQL version of that npm package? Would love to see that.

    PS – The serverless Aurora has some harsh limitations: MySql only, and a variant that doesn’t have support for a very current ANSI sql — no subqueries, no WITH, etc. Plus, the cold start time is horrible for web UI’s that are intermittently used. Tried to use it, but had to revert back to standard RDS instances.

    1. Hi Phil,

      There is not a PostgreSQL version of the npm package yet. I’ve had some requests for it, but the new Data API that the Aurora team is working on might eventually make it obsolete. Also, PostgreSQL support is coming for Aurora Serverless, so you should be able to use that soon. In terms of cold starts with Aurora Serverless, the system is not meant to have “pause compute capacity” enabled for production instances. It works great for development environments, but you need to run at least 2 ACUs and disable that feature in a production environment.

      – Jeremy

  8. Hi Jeremy,
    Great Post and very helpful.
    My question is since aurora serverless was released, do we still need to manage connection ourselves (like you did with serverless-mysql) ?

  9. Hi Jeremy,
    Thanks for your reply. Does Aurora Serverless support continuous scaling, so that ideally the connection limit won’t be an issue?

    1. Yes, it will scale up to 256 ACUs if you want it to. That has a total of 6,000 connections, making it likely not an issue. However, it scales based on load, not available connections.

  10. Hi Jeremy,
    I ran into another issue when using serverless-mysql.
    My db parameters need to be read from the AWS SSM parameter store. The request is async.
    If I initiate mysql outside the main handler, how do I fetch these parameters with an async call?
    Thanks.

    1. Hi Dan,

      Unfortunately, you can’t perform an async call outside of your handler. However, you can define the MySQL configuration information INSIDE your handler by calling the config() method, and then connect. You’ll just have to create some logic in your handler to see if MySQL has been configured.
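      A sketch of that lazy-config pattern (with hypothetical `ssm` and `db` clients injected for illustration; real code would use the AWS SDK and the module’s config() method):

```javascript
// Fetch DB credentials from SSM on the first (cold) invocation only,
// then configure the client before connecting. `ssm` and `db` are
// hypothetical client objects injected for illustration.
let configured = false;
async function handler(event, ssm, db) {
  if (!configured) {
    const params = await ssm.getParameters(['db_host', 'db_user', 'db_pass']);
    db.config(params); // configure INSIDE the handler, after the async call
    configured = true;
  }
  await db.connect();
  return db.query('SELECT 1');
}
```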

      Hope that helps,
      Jeremy

  11. Hi Jeremy, I like a lot what you are doing here. I’ve been using Lambdas since 2015, I was an early adopter, and I’ve been discovering all the minor repairs.
    I had this problem with DB connections when I used a SQL RDS DB for a webservice.
    What I did (and it keeps working) is opening a new connection on each query I execute and closing it after it finishes. So if the same Lambda execution does multiple queries, it will instantiate a new connection for each query and close it.
    To achieve this, I assign each connection a new ID.
    I think it’s not a good approach because I keep opening and closing a lot of connections, but at first glance, when Lambdas started to reuse the same connection (cause I didn’t realize this could happen) and transactions started to fail, I came up with this solution.
    After reading this post, I may change my approach into a better one. Although if I could, I would really like to try Aurora Serverless through the API.

  12. Hi Jeremy, your posts are all very interesting and useful for learning AWS Aurora Serverless. Recently I have been using lambda with an Aurora Serverless RDS, and sometimes I observe my Aurora Serverless RDS experiencing timeouts (the default timeout is set to 10s). I have definitely confirmed the VPCs, subnets, inbound, outbound, and that all components have the same VPCs and subnets. Sometimes it goes well without a timeout, but once it hits a timeout, it keeps timing out for a while (for hours). I cannot find the solution.
    After I read your posts, I am suspicious about the ‘sleep’ connections (the default time is 8 hours) in my db when I query SHOW PROCESSLIST. Can these ‘sleep’ connections disrupt new connections in some case and finally cause timeouts? Could serverless-mysql solve this problem?

  13. Am I missing something stupid? I get an ECONNRESET when I have two instances running trying to access the same RDS MySQL DB. Isn’t that exactly what this module is intended to prevent?

  14. I’m dealing with these challenges but I’m on RDS for SQL Server. There seem to be fewer options for me (can’t use Aurora Serverless, RDS Proxy, etc.). Do you have any resources or tips for solving this issue with SQL Server?

    1. I don’t think the package is compatible with any ORMs. Also, I always suggest that people don’t use ORMs as you lose a ton of control and generally end up with less efficient queries.

  15. Now that RDS Proxy is generally available, do you think there is a benefit to using it over the serverless-mysql package, or vice versa? Would it make sense to use them in combination?

    Thanks, Dan

    1. I think RDS Proxy is a great solution, but it also adds cost. For most applications, I’ve found serverless-mysql to perform really well and handle spiky workloads without a problem. If you have consistently high traffic, RDS Proxy might be the better solution. You could of course use them together, as I think the transaction handling in serverless-mysql is much better than in the raw mysql and mysql2 libraries.

  16. It’s impressive how bad lambdas can actually be. Plenty of implementations will open a new boto3 ssm connection every time and will open a new SQL connection and close it every time, maybe even repeatedly in loops! AWS set it up so you get hefty bills if your implementation is naive. It uses wall-clock time based billing, so if they can get everyone stalled waiting for connections to all their boto3 clients, using no CPU, it greatly increases how much they can over-commit their lambda worker VM hosts, while they bill as though the CPUs were pegged the whole time.

    I get NINE second delays when connecting to RDS sometimes.

    I would characterize lambda as “unbelievably slow”, unless you do it just right, and spend lots of time tuning it with explicit caching. Sometimes I ponder why anyone uses it at all.

    1. I don’t think RDBMSs and serverless fit particularly well together, which is why we see things like RDS Proxy. 9 seconds seems quite high though, as I typically get new connections in about 150ms. With the right strategies for connection reuse, existing connections can see very low latencies (sub-15ms). I’m more concerned about the scaling (and of course the connection limits), but new solutions to address those are coming every day.
