How To: Manage Serverless Environment Variables Per Stage

I often find myself creating four separate stages for each ⚡ Serverless Framework project I work on: dev, staging, prod, and local. Obviously the first three are meant to be deployed to the cloud, but the last one, local, is meant to run and test interactions with local resources. It’s also great to have an offline version (like when you’re on a plane ✈ or have terrible wifi somewhere). Plus, development is much faster because you’re not waiting for round trips to the server. 😉

A really great feature of Serverless is the ability to configure ENVIRONMENT variables in the serverless.yml file. This lets us store important global information like database names, service endpoints and more. We can even reference passwords securely using AWS Systems Manager Parameter Store and decrypt encrypted secrets on deployment, keeping them safe from developers and source repositories alike. 😬 Just reference the variable with ${ssm:/myapp/my-secure-value~true} in your configuration file.
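In serverless.yml that looks roughly like the sketch below (the parameter path is the placeholder from above, and DB_PASSWORD is just an example variable name):

```yaml
provider:
  name: aws
  environment:
    # Decrypted at deploy time via AWS Systems Manager Parameter Store
    # (the ~true suffix requests decryption of a SecureString parameter)
    DB_PASSWORD: ${ssm:/myapp/my-secure-value~true}
```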

Using STAGES and ENVIRONMENT variables together can create a really powerful workflow for you and your development team.

I think sls invoke local -f myFunction -p /path/to/event.json is one of the most useful commands in my toolbox. Not only can you live test functions locally by simulating events, but you can completely manipulate the environment by passing in the -s flag with a stage name.

For example, if I were writing a script that interacts with a database (perhaps querying data for a report), I would most likely create a local database and point my MYSQL_HOST environment variable to localhost (along with some other configs). Now running sls invoke local -f myDBFunction -p /path/to/event.json -s local would run my query against my local version. However, if I change my -s flag to dev, then I want my code to access the “dev” version of my database (which is perhaps in the cloud). This is useful for testing query and compatibility changes.

This is also great for letting you change other resources based on STAGE, like SQS queues, S3 buckets, DynamoDB tables, etc.

How do we configure our serverless.yml to do that?

Another great feature of the Serverless framework is your ability to “self-reference” variables within the serverless.yml file. This gives us the ability to use static (or even recursively referenced) values to set other values. I’m sure you’ve used this while naming functions, e.g. name: ${opt:stage}-myFunction. You can also set a default value if the reference doesn’t exist, e.g. stage: ${opt:stage, 'dev'}, which is incredibly handy. 👍
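Both tricks fit together like this (a minimal sketch; the function name is just the example from above):

```yaml
provider:
  # fall back to 'dev' when no --stage/-s option is passed on the CLI
  stage: ${opt:stage, 'dev'}

functions:
  myFunction:
    handler: handler.myFunction
    # prefix the deployed function name with the stage option
    name: ${opt:stage}-myFunction
```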

In our case, we want to provide a list of possible options based on the STAGE provided. This can be accomplished in a number of ways. The documentation even gives you the example of including a separate file based on the STAGE name, but it is even easier than that. All you need to do is create an object under your custom: variables and provide a value for each stage:
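For example, using MYSQL_HOST from earlier (the cloud hostnames are placeholders):

```yaml
custom:
  # one value per stage: 'local' points at your machine,
  # the rest at cloud hosts
  mysqlHost:
    local: localhost
    dev: dev-db.example.com
    staging: staging-db.example.com
    prod: prod-db.example.com
```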

Now simply self-reference the correct object key in your environment: variables section:
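Like so:

```yaml
provider:
  environment:
    # resolves to the value keyed by the current stage,
    # e.g. 'localhost' when the stage is 'local'
    MYSQL_HOST: ${self:custom.mysqlHost.${self:provider.stage}}
```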

And that’s it! Now whenever you use the -s local flag your database host will be “localhost”. When you change the stage flag, so too will your host value.
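Inside the function itself, the code just reads whatever value Serverless injected for the current stage. A minimal Python sketch (the handler name is made up; the variable names match the MYSQL_* examples in this post):

```python
import os

def handler(event, context):
    # MYSQL_HOST / MYSQL_PORT are injected by Serverless from the
    # stage-keyed custom variables; the defaults are only for illustration
    host = os.environ.get("MYSQL_HOST", "localhost")
    port = int(os.environ.get("MYSQL_PORT", "3306"))
    return {"host": host, "port": port}
```

Your database client then connects to whichever host the stage selected, with no code changes between local and cloud runs.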

Below is a more complete example:
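Something along these lines (the service name and SSM parameter paths are placeholders):

```yaml
service: myapp

provider:
  name: aws
  runtime: python3.6
  stage: ${opt:stage, 'dev'}
  environment:
    MYSQL_HOST: ${self:custom.mysqlHost.${self:provider.stage}}
    MYSQL_USER: ${self:custom.mysqlUser.${self:provider.stage}}
    MYSQL_PASSWORD: ${self:custom.mysqlPassword.${self:provider.stage}}
    MYSQL_DATABASE: ${self:custom.mysqlDatabase.${self:provider.stage}}
    MYSQL_PORT: ${self:custom.mysqlPort.${self:provider.stage}}

custom:
  mysqlHost:
    local: localhost
    dev: ${ssm:/myapp/database/dev/mysql-host~true}
    prod: ${ssm:/myapp/database/prod/mysql-host~true}
  mysqlUser:
    local: root
    dev: ${ssm:/myapp/database/dev/mysql-username~true}
    prod: ${ssm:/myapp/database/prod/mysql-username~true}
  mysqlPassword:
    local: '' # no password locally
    dev: ${ssm:/myapp/database/dev/mysql-password~true}
    prod: ${ssm:/myapp/database/prod/mysql-password~true}
  mysqlDatabase:
    local: myapp
    dev: ${ssm:/myapp/database/dev/mysql-dbname~true}
    prod: ${ssm:/myapp/database/prod/mysql-dbname~true}
  mysqlPort:
    local: '3306'
    dev: '3306'
    prod: '3306'
```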

Where do we go from here?

This technique works for CI/CD systems as well. Even if your production environment is in a separate account, you can still provide access to shared secrets securely.

If you want to be able to access cloud services that live in a VPC, you can always create additional stages like dev_local. Then you could reach remote resources through a VPN, or use SSH tunnels with port forwarding, for example directing local MySQL traffic through the tunnel to your RDS instance inside the VPC.

If you want to save yourself from misspelling stage names, you can check out Serverless Stage Manager. This allows you to restrict the stage names used for full-stack and function deployments.

I hope you found this useful. Good luck and Go Serverless! 🤘🏻


Did you like this post? 👍  Do you want more? 🙌  Follow me on Twitter or check out some of the projects I’m working on. You can sign up for my WEEKLY newsletter too. You'll get links to my new posts (like this one), industry happenings, project updates and much more! 📪


7 thoughts on “How To: Manage Serverless Environment Variables Per Stage”

  1. Thanks for a great post. One thing though: you said that passwords, for instance, will stay secure. I tried to use SSM, and the value is hidden/encrypted in the SSM console, but when I use it with, for instance, serverless-offline, I can console.log the values and see the passwords. Am I doing something wrong?

    1. Hi Robert,

      You’re not doing anything wrong. The benefit of using built-in SSM support with Serverless is that your passwords are only available to properly credentialed IAM users. If the profile you are using has access to SSM, then you’ll be able to decrypt and view those passwords. However, this lets you avoid checking clear-text credentials into your code repository, preventing others from seeing them.

      In a production environment, I would suggest limiting SSM access to production credentials to a “production” IAM role. A CI/CD pipeline should be used to deploy code into this environment, so even you wouldn’t be able to access production passwords or systems from your local machine.

      Hope that helps,

    1. Hi Danish,

      It looks like you are missing the dollar signs ($) in front of your ENVIRONMENT variables. Try fixing that and see if you still have the issue.

      – Jeremy

    2. Thank you, Jeremy, for your time.
      The dollar ($) sign is there, but it still does not work offline.

      provider:
        name: aws
        runtime: python3.6
        stage: ${opt:stage, 'dev'}

        # Environment Variables
        environment:
          MYSQL_HOST: ${self:custom.mysqlHost.${self:provider.stage}}
          MYSQL_USER: ${self:custom.mysqlUser.${self:provider.stage}}
          MYSQL_PASSWORD: ${self:custom.mysqlPassword.${self:provider.stage}}
          MYSQL_DATABASE: ${self:custom.mysqlDatabase.${self:provider.stage}}
          MYSQL_PORT: ${self:custom.mysqlPort.${self:provider.stage}}

      plugins:
        - serverless-python-requirements
        - serverless-domain-manager
        - serverless-stage-manager

      custom:
        customDomain:
          basePath: 'user'
          stage: ${self:provider.stage}
        pythonRequirements:
          fileName: requirements.txt
          dockerizePip: true
        stages:
          - dev
          - staging
          - prod
        mysqlHost:
          local: localhost
          dev: ${ssm:/myApp/database/dev/mysql-host~true} # get from ssm
          # staging: ${ssm:/myapp/staging/mysql-host} # get from ssm
          # prod: ${ssm:/myApp/database/prod/mysql-host~true} # get from ssm
        mysqlUser:
          local: root
          dev: ${ssm:/myApp/database/dev/mysql-username~true} # get from ssm
          # staging: myapp_stag
          # prod: ${ssm:/myApp/database/prod/mysql-username~true} # get from ssm
        mysqlPassword:
          local: '' # No Password
          dev: ${ssm:/myApp/database/dev/mysql-password~true} # get from ssm
          # staging: ${ssm:/myapp/staging/mysql-password~true} # get from ssm (secure)
          # prod: ${ssm:/myApp/database/prod/mysql-password~true} # get from ssm
        mysqlDatabase:
          local: myApp
          dev: ${ssm:/myApp/database/dev/mysql-dbname~true} # get from ssm
          # staging: myapp_staging
          # prod: ${ssm:/myApp/database/prod/mysql-dbname~true} # get from ssm
        mysqlPort:
          local: '3306'
          dev: '3306'
          staging: '3306'
          prod: '3306'

  2. I do something similar, but I organize my custom environment config in a JSON file, which I DON’T check into version control. I do this for database credentials, API keys, etc.

    env.json:

    {
      "local": {
        "database": {
          "host": "…",
          "user": "…",
          "password": "…",
          "database": "…"
        }
      },
      "dev": {
        "database": {
          "host": "…",
          "user": "…",
          "password": "…",
          "database": "…"
        }
      },
      "production": {
        "database": {
          "host": "…",
          "user": "…",
          "password": "…",
          "database": "…"
        }
      }
    }

    And in serverless.yml:

    environment:
      DbHost: ${self:custom.env.database.host}
      DbUser: ${self:custom.env.database.user}
      DbPassword: ${self:custom.env.database.password}
      DbDatabase: ${self:custom.env.database.database}

    custom:
      env: ${file(env.json):${self:provider.stage}}

  3. I’ve taken a similar approach with SSM, but I skip the use of ENV variables completely. I use a fixed parameter naming convention based on the alias, and I wrote a simple library to help load them. Some example parameter names are:


    I extract the Lambda alias from the context (in this case prod or test) and load the correct config via the SSM API. I then control access to each via IAM.

    const env = context.invokedFunctionArn.split(":").pop(); // my library will validate this and set a default

    // parameters for the SSM getParameters call
    const params = {
      Names: [
        "/" + env + "/db/host"
      ],
      WithDecryption: true
    };
