Why are my environment variables replaced during Serverless Framework deployments?

A common question on the Serverless Framework forums goes something like "Why are my environment variables replaced during Serverless deployments?" or "How can I stop Serverless from replacing environment variables during deployment?". In this article I'm going to share some techniques to help mitigate the problem.

To understand what is happening, and how to mitigate it, you need to know that the Serverless Framework is an abstraction layer on top of CloudFormation. Serverless takes the functions section of your serverless.yml and expands it into a full CloudFormation template, creating additional resources as required and making sure they are all connected correctly. Building Serverless on top of CloudFormation removes the complexities of managing change sets, but it also means that Serverless inherits all of CloudFormation's limitations, and that is the root cause of this problem.
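As a sketch of that expansion (the service and handler names here are placeholders, not from any real project), a minimal serverless.yml looks like this:

```yaml
# Minimal serverless.yml sketch; all names are placeholders
service: my-service

provider:
  name: aws
  runtime: nodejs18.x
  stage: dev

functions:
  hello:
    handler: handler.hello
```

From these few lines Serverless generates a CloudFormation template containing an AWS::Lambda::Function resource, plus supporting resources such as an IAM role, a CloudWatch log group, and an S3 deployment bucket.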

The CloudFormation definition for a Lambda function is:

Type: "AWS::Lambda::Function"
Properties:
  Code: Code
  DeadLetterConfig: DeadLetterConfig
  Description: String
  Environment: Environment
  FunctionName: String
  Handler: String
  KmsKeyArn: String
  Layers:
    - String
  MemorySize: Integer
  ReservedConcurrentExecutions: Integer
  Role: String
  Runtime: String
  Timeout: Integer
  TracingConfig: TracingConfig
  VpcConfig: VPCConfig
  Tags: Resource Tag

The Environment property is a set of key/value pairs defining the environment variables you want set once CloudFormation has completed the update. CloudFormation doesn't provide a way to indicate which:

  • existing variables should be removed
  • existing variables should be replaced
  • new variables should be added

It simply replaces the existing set of environment variables with the new set.
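To make the replacement semantics concrete, here is a toy illustration in plain Python (not AWS code): the deployed variables behave like a dict that is assigned wholesale, not merged.

```python
# Environment variables before the deploy, including one added by hand
# in the AWS console after the last deployment.
current = {"DB_NAME": "my-db", "APIKEY": "added-in-console"}

# The Environment block in the new CloudFormation template.
template = {"DB_NAME": "my-db"}

# What CloudFormation does: replace the whole set with the template's set.
after_update = dict(template)

# What people often expect: a merge that preserves existing keys.
expected_merge = {**current, **template}

print(after_update)    # APIKEY is gone
print(expected_merge)  # APIKEY would have survived a merge
```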

While you can't stop the values from being overwritten, there are a number of strategies you can apply to reduce the problem.

I've previously written about using per stage environment variables with the Serverless Framework. There are two variations on this approach depending on whether you want a single file with one section for each stage or one file for each stage. Regardless of your preferred approach the idea is to load the environment variables in a way that they can have different values for each stage.
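As a sketch of the one-file-per-stage variation (the file naming scheme here is an assumption), you can interpolate the stage into the file name so that, for example, serverless deploy --stage staging loads env-staging.yml:

```yaml
# Sketch: load a separate environment file per stage
custom:
  stage: "${opt:stage, self:provider.stage}"

provider:
  environment: ${file(env-${self:custom.stage}.yml)}
```

The single-file variation, with one section per stage, is shown in the worked example later in this article.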

Since writing that article, AWS has introduced the SSM Parameter Store and Serverless now supports retrieving values from it during deployment. Using ${ssm:/path/to/param} has become my preferred approach to storing environment-specific values such as third-party API keys and secrets.
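For instance (the parameter path and variable name here are made-up examples), a value stored in the Parameter Store can be referenced directly in serverless.yml:

```yaml
# Sketch: pull a secret from the SSM Parameter Store at deploy time
provider:
  environment:
    STRIPE_API_KEY: ${ssm:/myapp/stripe-key}
```

Note that for SecureString parameters, older versions of the Serverless Framework required a decryption suffix (${ssm:/myapp/stripe-key~true}), while newer versions decrypt automatically; check the documentation for the version you are using.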

Another useful approach is referencing environment variables, such as ${env:APIKEY}, in your serverless.yml; the value is read from the environment of the machine running the deploy.
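A minimal sketch (APIKEY is a placeholder name):

```yaml
# Sketch: read a value from the deploying shell's environment
provider:
  environment:
    APIKEY: ${env:APIKEY}
```

You would then deploy with something like APIKEY=my-key serverless deploy --stage staging. This works particularly well with CI systems, where secrets are typically injected as environment variables.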

These approaches can be used in combination. For example, you could start with a file that contains one top-level key for each stage that you deploy to:

custom:
  stage: "${opt:stage, self:provider.stage}"

provider:
  environment: ${file(env.yml):${self:custom.stage}}

In that file you can set the value for each stage using the most appropriate method:

default_env: &default_env
  DB_NAME: "my-db"

dev:
  <<: *default_env
  DB_USERNAME: "my-username"
  DB_PASSWORD: "my-password"

staging:
  <<: *default_env
  DB_USERNAME: ${env:DB_USERNAME}
  DB_PASSWORD: ${env:DB_PASSWORD}

production:
  <<: *default_env
  DB_USERNAME: ${ssm:/path/to/DB_USERNAME}
  DB_PASSWORD: ${ssm:/path/to/DB_PASSWORD}

In my example, the development database username and password are hard-coded. For the staging stage, I use environment variables to set them, and for deployment to production I use secrets stored in the SSM Parameter Store.

While these techniques won't stop Serverless from overwriting your environment variables during deployment they may help you better manage them so that each stage gets the value it requires.