How To: Manage Serverless Environment Variables Per Stage

I often find myself creating four separate stages for each ⚡ Serverless Framework project I work on: dev, staging, prod, and local. Obviously the first three are meant to be deployed to the cloud, but the last one, local, is meant to run and test interactions with local resources. It’s also great to have an offline version (like when you’re on a plane ✈ or have terrible wifi somewhere). Plus, development is much faster because you’re not waiting for round trips to the server. 😉

A really great feature of Serverless is the ability to configure ENVIRONMENT variables in the serverless.yml file. This lets us store important global information like database names, service endpoints and more. We can even reference passwords securely using AWS Systems Manager Parameter Store and decrypt encrypted secrets on deployment, keeping them safe from developers and source repositories alike. 😬 Just reference the variable with ${ssm:/myapp/my-secure-value~true} in your configuration file.
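As a quick sketch (the parameter path is the illustrative one from above, and the variable name is hypothetical), the reference lives right in your environment block:

```yaml
provider:
  environment:
    # decrypted at deploy time from the Parameter Store (~true enables decryption)
    DB_PASSWORD: ${ssm:/myapp/my-secure-value~true}
```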

Using STAGES and ENVIRONMENT variables together can create a really powerful workflow for you and your development team.

I think sls invoke local -f myFunction -p /path/to/event.json is one of the most useful commands in my toolbox. Not only can you live test functions locally by simulating events, but you can completely manipulate the environment by passing in the -s flag with a stage name.

For example, if I was writing a script that interacts with a database (perhaps querying data for a report), I would most likely create a local database and point my MYSQL_HOST environment var to localhost (along with some other configs). Now running sls invoke local -f myDBFunction -p /path/to/event.json -s local would run my query against my local version. However, if I change my -s flag to dev, then I want my code to access the “dev” version of my database (which is perhaps in the cloud). This is useful for testing query and compatibility changes.
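Side by side, the two invocations might look like this (function name and event path are the examples from above):

```shell
# run the function against the local database
sls invoke local -f myDBFunction -p /path/to/event.json -s local

# same code, but with the "dev" environment variables applied
sls invoke local -f myDBFunction -p /path/to/event.json -s dev
```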

This is also great for swapping other resources based on STAGE, like SQS queues, S3 buckets, DynamoDB tables, etc.

How do we configure our serverless.yml to do that?

Another great feature of the Serverless Framework is your ability to “self-reference” variables within the serverless.yml file. This gives us the ability to use static (or even recursively referenced) values to set other values. I’m sure you’ve used this while naming functions, e.g. name: ${opt:stage}-myFunction. You can also set a default value to use if the reference doesn’t exist, e.g. stage: ${opt:stage, 'dev'}, which is incredibly handy. 👍
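A minimal sketch of both tricks together (the function name is illustrative):

```yaml
provider:
  stage: ${opt:stage, 'dev'} # falls back to 'dev' when no -s flag is passed

functions:
  myFunction:
    name: ${self:provider.stage}-myFunction # self-referenced stage in the name
```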

In our case, we want to provide a list of possible options based on the STAGE provided. This can be accomplished in a number of ways. The documentation even gives you the example of including a separate file based on the STAGE name, but it is even easier than that. All you need to do is create an object under your custom: variables and provide a value for each stage:
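For example, a sketch using the MYSQL_HOST scenario from above (the hostnames are illustrative):

```yaml
custom:
  mysqlHost:
    local: localhost
    dev: dev-db.example.com # illustrative hostname
    staging: staging-db.example.com
    prod: ${ssm:/myapp/prod/mysql-host~true} # pulled securely from SSM
```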

Now simply self-reference the correct object key in your environment: variables section:
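For instance (assuming a custom.mysqlHost object with one key per stage, as sketched earlier):

```yaml
provider:
  stage: ${opt:stage, 'dev'}
  environment:
    MYSQL_HOST: ${self:custom.mysqlHost.${self:provider.stage}}
```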

And that’s it! Now whenever you use the -s local flag your database host will be “localhost”. When you change the stage flag, so too will your host value.

Below is a more complete example:
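A sketch of a fuller serverless.yml (hostnames, runtime, and handler names are illustrative):

```yaml
service: myapp

provider:
  name: aws
  runtime: nodejs12.x # illustrative
  stage: ${opt:stage, 'dev'}
  environment:
    MYSQL_HOST: ${self:custom.mysqlHost.${self:provider.stage}}
    MYSQL_DATABASE: ${self:custom.mysqlDatabase.${self:provider.stage}}

custom:
  mysqlHost:
    local: localhost
    dev: dev-db.example.com # illustrative hostname
    staging: staging-db.example.com
    prod: ${ssm:/myapp/prod/mysql-host~true} # pulled securely from SSM
  mysqlDatabase:
    local: myapp_local
    dev: myapp_dev
    staging: myapp_staging
    prod: myapp_prod

functions:
  myDBFunction:
    handler: handler.query # illustrative
```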

Where do we go from here?

This technique works for CI/CD systems as well. Even if your production environment lives in a separate account, access to shared secrets stays secure, since only the deployment role needs permission to read them.

If you want to be able to access cloud services that are in a VPC, you can always create additional stages like dev_local. Then you could access remote resources over a VPN or use SSH tunnels to reach resources inside the VPC. You might use port forwarding, for example, to direct local MySQL traffic through the tunnel to your RDS instance in the VPC.
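A port-forwarding sketch (the hostnames are hypothetical): forward local port 3306 through a bastion host to the RDS instance inside the VPC, then point your local stage’s MYSQL_HOST at localhost:

```shell
# hypothetical hostnames; keep the tunnel open while you invoke locally
ssh -N -L 3306:my-rds-instance.abc123xyz.us-east-1.rds.amazonaws.com:3306 ec2-user@bastion.example.com
```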

If you want to save yourself from misspelling stage names, you can check out Serverless Stage Manager. This allows you to restrict the stage names used for full-stack and function deployments.
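Its configuration is just a list of allowed stage names under custom:

```yaml
plugins:
  - serverless-stage-manager

custom:
  stages:
    - dev
    - staging
    - prod
```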

I hope you found this useful. Good luck and Go Serverless! 🤘🏻



Did you like this post? 👍  Do you want more? 🙌  Follow me on Twitter or check out some of the projects I’m working on.

11 thoughts on “How To: Manage Serverless Environment Variables Per Stage”

  1. Thanks for a great post. One thing though: you said that passwords, for instance, will stay secure. I tried to use SSM, and the problem is that the value is hidden/encrypted in SSM, but when I use it with, for instance, serverless-offline, I can console.log the values and see the passwords. Am I doing something wrong?

    1. Hi Robert,

      You’re not doing anything wrong. The benefit of using built-in SSM support with Serverless is that your passwords are only available to properly credentialed IAM users. If the profile you are using has access to SSM, then you’ll be able to decrypt and view those passwords. However, this lets you avoid checking clear-text credentials into your code repository, preventing others from seeing them.

      In a production environment, I would suggest limiting SSM access to production credentials to a “production” IAM role. A CI/CD pipeline should be used to deploy code into this environment, so even you wouldn’t be able to access production passwords or systems from your local machine.
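      As a rough sketch (the account, region, and parameter path are hypothetical), the production role’s policy might allow only:

      ```json
      {
        "Effect": "Allow",
        "Action": ["ssm:GetParameter", "ssm:GetParameters"],
        "Resource": "arn:aws:ssm:us-east-1:123456789012:parameter/myapp/prod/*"
      }
      ```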

      Hope that helps,
      Jeremy

    1. Hi Danish,

      It looks like you are missing the dollar signs ($) in front of your ENVIRONMENT variables. Try fixing that and see if you still have the issue.

      – Jeremy

    2. Thank you, Jeremy, for your time.
      The dollar sign ($) is there, but it still does not work offline.

      provider:
        name: aws
        runtime: python3.6
        stage: ${opt:stage, 'dev'}

        # Environment Variables
        environment:
          MYSQL_HOST: ${self:custom.mysqlHost.${self:provider.stage}}
          MYSQL_USER: ${self:custom.mysqlUser.${self:provider.stage}}
          MYSQL_PASSWORD: ${self:custom.mysqlPassword.${self:provider.stage}}
          MYSQL_DATABASE: ${self:custom.mysqlDatabase.${self:provider.stage}}
          MYSQL_PORT: ${self:custom.mysqlPort.${self:provider.stage}}

      plugins:
        - serverless-python-requirements
        - serverless-domain-manager
        - serverless-stage-manager

      custom:
        customDomain:
          basePath: 'user'
          stage: ${self:provider.stage}

        pythonRequirements:
          fileName: requirements.txt
          dockerizePip: true

        stages:
          - dev
          - staging
          - prod

        mysqlHost:
          local: localhost
          dev: ${ssm:/myApp/database/dev/mysql-host~true} # get from ssm
          # staging: ${ssm:/myapp/staging/mysql-host} # get from ssm
          # prod: ${ssm:/myApp/database/prod/mysql-host~true} # get from ssm
        mysqlUser:
          local: root
          dev: ${ssm:/myApp/database/dev/mysql-username~true} # get from ssm
          # staging: myapp_stag
          # prod: ${ssm:/myApp/database/prod/mysql-username~true} # get from ssm
        mysqlPassword:
          local: '' # No Password
          dev: ${ssm:/myApp/database/dev/mysql-password~true} # get from ssm
          # staging: ${ssm:/myapp/staging/mysql-password~true} # get from ssm (secure)
          # prod: ${ssm:/myApp/database/prod/mysql-password~true} # get from ssm
        mysqlDatabase:
          local: myApp
          dev: ${ssm:/myApp/database/dev/mysql-dbname~true} # get from ssm
          # staging: myapp_staging
          # prod: ${ssm:/myApp/database/prod/mysql-dbname~true} # get from ssm
        mysqlPort:
          local: '3306'
          dev: '3306'
          staging: '3306'
          prod: '3306'

  2. I do something similar, but I organize my custom environment config in a JSON file. I also DON’T check this file into version control. I do this for database, API keys, etc.

    env.json:

    {
      "local": {
        "database": {
          "host": "…",
          "user": "…",
          "password": "…",
          "database": "…"
        }
      },
      "dev": {
        "database": {
          "host": "…",
          "user": "…",
          "password": "…",
          "database": "…"
        }
      },
      "production": {
        "database": {
          "host": "…",
          "user": "…",
          "password": "…",
          "database": "…"
        }
      }
    }

    serverless.yml:

    provider:
      environment:
        DbHost: ${self:custom.env.database.host}
        DbUser: ${self:custom.env.database.user}
        DbPassword: ${self:custom.env.database.password}
        DbDatabase: ${self:custom.env.database.database}

    custom:
      env: ${file(env.json):${self:provider.stage}}

  3. I’ve taken a similar approach with SSM, but I skip the use of ENV variables completely. I use a fixed Parameter naming convention based on the alias. I then wrote a simple library to help load them. Some example parameter names are:

    /test/db/host
    /test/db/user
    /prod/db/user

    I extract the Lambda alias from the context (in this case prod or test), and load the correct config via SSM API. I then control access to each via IAM.

    const env = context.invokedFunctionArn.split(":").pop(); // my library will validate this and set a default

    ssm.getParameters({
      Names: [
        "/" + env + "/db/host"
      ],
      WithDecryption: true
    })…

  4. In another comment you said:

    “In a production environment, I would suggest limiting SSM access to production credentials to a “production” IAM role.”

    How is this production IAM role applied to all of the lambdas in production?

    1. This was in the context of a CI/CD pipeline. So you would need to have a deployment role that could grant SSM parameter permissions to the IAM roles created for your production Lambda functions. The production secrets would then only be accessible from Lambdas deployed through your pipeline.

  5. Thanks Jeremy! Great post, but I have a problem.

    I’m using an encrypted SSM password (an AWS-managed key) as a global environment variable in a Lambda, but when I try to decrypt it using the code below, I get this error:

    “The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.”

    My IAM role has full access to KMS.
    Can you tell me what I’m doing wrong here?

    private static async Task<string> DecodeEnvVar()
    {
        // Retrieve env var text

        // Convert text to bytes
        var encryptedBytes = Convert.FromBase64String("AQICAHjslFdkixyTeDxLUdRp/wWHXK2+46eTqhGoMwya7OJPvwHfV4+7OfopqMRNiZkjMU5kAAAAZjBkBgkqhkiG9w0BBwagVzBVAgEAMFAGCSqGSIb3DQEHATAS4wEQQMS/I0Y0sNALm8IofpAgEQgCOAnAEVX1Y1+JOaOHmISihYObdMwNMm3FR40ntEJeG1J46gGg==");
        // Construct client
        using (var kmsClient = new AmazonKeyManagementServiceClient())
        {
            // Write ciphertext to memory stream
            MemoryStream ciphertextBlob = new MemoryStream(encryptedBytes);

            DecryptRequest decryptRequest = new DecryptRequest()
            {
                CiphertextBlob = ciphertextBlob
            };
            DecryptResponse response = await kmsClient.DecryptAsync(decryptRequest);
            Console.WriteLine("This is the decrypted message: " + response.Plaintext);

            using (var plaintextStream = response.Plaintext)
            {
                // Get decrypted bytes
                var plaintextBytes = plaintextStream.ToArray();
                // Convert decrypted bytes to text
                return Encoding.UTF8.GetString(plaintextBytes);
            }
        }
    }
