Storing Secrets with AWS ParameterStore

Secrets management is a constant topic of debate in tech and security circles, even more so for users of cloud providers. There are solutions like HashiCorp Vault, Sneaker, and Credstash (even a locked-down S3 bucket) that we have looked at using at Unbounce. Each solution has its own level of complexity to set up and maintain. All of these solutions suffer from the same problem, which I like to call "Step 0": how the heck do I manage the master key that unlocks everything? At some point in the encryption process, trust has to be established, and that is the point where encryption cannot be used (Step 0).

Since Unbounce is primarily in the Amazon Web Services (AWS) world, there is a newcomer to this game, a relatively unknown service that provides secrets management and a potential solution to the "Step 0" problem: ParameterStore.

The goal of this article is to show how ParameterStore works, and how to use it from your AWS servers in line with security best practices.

Enter SSM Parameter Store

ParameterStore lives within the Simple Systems Manager (SSM) service, which sits under the rather large umbrella of EC2. It holds "data" (for lack of a better word) as plaintext strings, encrypted strings, or lists of strings. An EC2 server can be given IAM permissions to retrieve one or more parameters from ParameterStore. If those parameters are encrypted, and the EC2 server also has IAM permissions to decrypt with the corresponding KMS key, the parameters are decrypted on the fly and returned to the server.

SSM is the service, and most of SSM requires an agent to initiate communication between the server and AWS. ParameterStore, however, does not use the agent; it has its own API and can be accessed like most other AWS services (i.e. via an HTTP endpoint).

The best part of ParameterStore is that Step 0, or trust, is already established by the IAM permissions given to servers. This immediately solves the problem of bootstrapping a server to use a secrets management solution. Also, because EC2 servers use instance profiles (which, in turn, use the metadata service), IAM credentials are rotated at regular, short intervals to reduce exposure and tampering.
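
As a quick illustration, you can watch this mechanism in action from any instance with a role attached by querying the metadata service (the role name is a placeholder); the AWS CLI and SDKs perform this lookup automatically, which is why no credentials need to be baked into the server:

ec2$ # list the role attached to this instance, then fetch its temporary credentials
ec2$ curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/
ec2$ curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>
ec2$ # the JSON response includes an AccessKeyId, SecretAccessKey, Token, and Expiration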

Tradeoffs

One downside of this approach to be aware of is the use of IAM instance profiles. Because instance profiles hang off of autoscaling groups, you cannot remove the permission to access a resource (e.g. ParameterStore) without breaking new instances: any instance created by an autoscaling scale-out action would lose the access it needs to configure itself at runtime.

The other downside is the UI for ParameterStore. Using the console you can only select the default KMS key, but using the CLI you can select a custom key. Also, CloudFormation support for ParameterStore does not exist at this time; my guess is that this is because CloudFormation would then be storing secrets, and CloudFormation is not an encrypted data storage solution. My advice is to use the CLI for setting up ParameterStore entries, and layer orchestration scripts on top to help with maintenance.

The last downside is that this is not a junk-drawer service. That is, ParameterStore is not meant to store arbitrary data types or large amounts of data. If you need that, encrypt the data, store it in S3, and store the decryption key in ParameterStore (or in KMS itself as a data key; that is exactly what data keys are for).
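
A rough sketch of that envelope pattern, assuming a bucket named example-secrets-bucket, a payload file, one of your own KMS key IDs, and the jq utility (all of these are placeholders):

macbook$ # generate a data key under an existing KMS key
macbook$ aws kms generate-data-key --key-id "<your-kms-key-id>" --key-spec AES_256 > data-key.json
macbook$ # encrypt the large payload locally with the plaintext key, then remove that key from disk
macbook$ jq -r '.Plaintext' data-key.json | base64 --decode > data.key
macbook$ openssl enc -aes-256-cbc -salt -in payload.tar.gz -out payload.tar.gz.enc -pass file:./data.key
macbook$ rm data.key
macbook$ # ship the ciphertext to S3 and keep the already-encrypted data key in ParameterStore
macbook$ aws s3 cp payload.tar.gz.enc s3://example-secrets-bucket/payload.tar.gz.enc
macbook$ aws ssm put-parameter --name "shared.payload_data_key" --type String --value "$(jq -r '.CiphertextBlob' data-key.json)"
macbook$ rm data-key.json
macbook$ # at runtime: fetch the parameter, run aws kms decrypt on it, and use the result to decrypt the payload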

The main advantage is that you don't have to maintain DynamoDB tables or master keys, or write any decryption code. Another advantage is that the data is available per-region (be aware of that for multi-region setups) and it is very fast to decrypt.

The last advantage I will detail here (there are many more) is that each secret can be scoped to an environment, project, or purpose. This prevents test systems from decrypting production secrets, and a compromised project from stealing secrets belonging to other projects. It also means that certain team members can modify secrets without being able to decrypt them, should the threat model require that type of access control.

Preparing for ParameterStore Usage

The first thing to do when preparing to use ParameterStore is to plan how your secrets are architected. This includes designing how your projects access secrets, how those secrets are deployed, and which stages of liveness they apply to.

For the purpose of this article, we make the following design decisions:

  1. Projects are assumed to be local to a company and have one or more stages of liveness (aka environments).
  2. Projects may need secrets that are available to everyone (e.g. a NewRelic key).
  3. Projects may need secrets that are available only to that project.
  4. Projects may need secrets that are available only to a specific environment.
  5. Resources will live in the same AWS region and account.

You will also need to reconfigure how your application loads secrets at runtime. That is beyond the scope of this article but, so long as the AWS SDK is being used, relatively simple to accomplish.

Applications will need to contact the ParameterStore service and decrypt the parameters at runtime. While an application could save the plaintext credentials to the filesystem, keeping them decrypted only in RAM (never touching disk) can help meet certain compliance controls (e.g. FedRAMP).
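
As a minimal sketch using only the CLI (the parameter name matches one created later in this article; an application built with the AWS SDK would make the equivalent GetParameters call in code), a value can be pulled straight into an environment variable so it never touches disk:

ec2$ export DATABASE_PASSWORD=$(aws ssm get-parameters \
> --names "projectA.production.database_password" \
> --with-decryption \
> --region us-east-1 \
> --query "Parameters[0].Value" \
> --output text)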

The last thing to prepare for is cost. This article walks through the creation of custom KMS keys and other resources that may cost money (either short-term or long-term). Please be mindful and do appropriate research for budgeting when applying the techniques in this article.

Building the Encryption Structure

Create some custom KMS keys:

  • shared – environment- and project-independent (i.e. open to everyone)
  • production – environment-specific, project-independent
  • projectA – environment-independent, project-specific
  • projectA/production – environment- and project-specific

The / delimiter is important and is used as an organization tool to separate different types of assets. It is your choice whether to use this convention or not.
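
For reference, creating one of these keys and its alias by hand looks something like the following (the description and alias shown are illustrative):

macbook$ key_id=$(aws kms create-key \
> --description "Manages secrets in the projectA/production namespace" \
> --query 'KeyMetadata.KeyId' \
> --output text)
macbook$ aws kms create-alias --alias-name "alias/projectA/production" --target-key-id "$key_id"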

The key policy should give administrator access to specific people. These are the people who manage the key and may encrypt the contents of parameters. It is best practice to keep this list to a minimum. Likewise, keep the list of people with decrypt privileges to a minimum as well. During off-boarding of personnel, best practice is to rotate any secret known by the person leaving the company/project, so the fewer people who know it, the smaller the rate of change.
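
As an illustration only (the principal ARNs are placeholders), statements along these lines could be appended to the KeyPolicy Statement list in the template below to separate the key administrators (who may also encrypt) from the few people allowed to decrypt:

- Sid: "AllowKeyAdministrationAndEncrypt"
  Effect: "Allow"
  Principal:
    AWS:
      - !Sub "arn:aws:iam::${AWS::AccountId}:user/key-admin"   # placeholder principal
  Action:
    - "kms:Describe*"
    - "kms:Enable*"
    - "kms:Disable*"
    - "kms:Put*"
    - "kms:Update*"
    - "kms:Revoke*"
    - "kms:ScheduleKeyDeletion"
    - "kms:CancelKeyDeletion"
    - "kms:Encrypt"
  Resource: "*"
- Sid: "AllowDecryptForOperators"
  Effect: "Allow"
  Principal:
    AWS:
      - !Sub "arn:aws:iam::${AWS::AccountId}:user/secrets-operator"   # placeholder principal
  Action:
    - "kms:Decrypt"
  Resource: "*"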

Instead of doing this manually, a CloudFormation stack can be used to create and manage the lifecycle of these keys.

---
AWSTemplateFormatVersion: "2010-09-09"
Description: "KMS key for secrets management (see Parameters for more info)"

Parameters:
  KeyNamespace:
    Type: "String"

Resources:
  KmsKeyAlias:
    Type: "AWS::KMS::Alias"
    Properties:
      AliasName: !Sub "alias/${KeyNamespace}"
      TargetKeyId: !Ref KmsKey
  KmsKey:
    Type: "AWS::KMS::Key"
    Properties:
      Description: !Sub "Manages secrets in the ${KeyNamespace} namespace"
      Enabled: true
      EnableKeyRotation: true
      KeyPolicy:
        Version: "2012-10-17"
        Id: "KeyPolicyForKMS"
        Statement:
          - Sid: "Enable IAM User Permissions"
            Effect: "Allow"
            Principal:
              AWS: !Sub "arn:aws:iam::${AWS::AccountId}:root"
            Action: "kms:*"
            Resource: "*"

Outputs:
  KmsKeyId:
    Description: "ID of the KMS key"
    Value: !Ref KmsKey
    Export:
      Name: !Sub "kms:key:${KeyNamespace}:id"
  KmsKeyArn:
    Description: "ARN of the KMS key"
    Value: !GetAtt KmsKey.Arn
    Export:
      Name: !Sub "kms:key:${KeyNamespace}:arn"

This can be repeated in multiple stacks, one for each type of key, by giving a different value to the KeyNamespace input parameter. The stack outputs will be used in other stacks wherever the ARN of the KMS key is required.
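
As a sketch (the stack names are illustrative, the template is assumed to be saved as kms-key.yml, and the namespace values match the export names listed next), all four stacks could be created from the same template with the CLI:

macbook$ for ns in shared production projectA projectA-production; do
>   aws cloudformation create-stack \
>     --stack-name "kms-${ns}" \
>     --template-body file://kms-key.yml \
>     --parameters ParameterKey=KeyNamespace,ParameterValue="${ns}" \
>     --region us-east-1
> done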

For the purposes of this article, we'll use the following placeholder values. The stack exports referenced later in this article are named:

  • kms:key:shared:arn
  • kms:key:production:arn
  • kms:key:projectA:arn
  • kms:key:projectA-production:arn

and have the following placeholder IDs for use in AWS CLI commands:

  • shared – 2d1d8811-c02d-44e5-85d5-4b0427a1cd35
  • production – 5c869290-cd15-460a-84b4-8782e30c2599
  • projectA – 143b635a-3859-4b89-b751-2d7e428193cb
  • projectA/production – e4ec31af-2276-43d1-8e90-e932d816005e

Creating the Secrets Payload

Create some new parameters via the CLI. Note that the values here are all placeholders and do not represent real credentials.

macbook$ # setup some env vars for easier reference later
macbook$ kms_shared="2d1d8811-c02d-44e5-85d5-4b0427a1cd35"
macbook$ kms_production="5c869290-cd15-460a-84b4-8782e30c2599"
macbook$ kms_projectA="143b635a-3859-4b89-b751-2d7e428193cb"
macbook$ kms_projectA_production="e4ec31af-2276-43d1-8e90-e932d816005e"
macbook$ # upload secrets to ParameterStore -- NB: !!fake values!!
macbook$ aws ssm put-parameter \
> --name "shared.newrelic_key" \
> --type SecureString \
> --description "NewRelic API key" \
> --value "2039402939302293029230923093" \
> --key-id "$kms_shared"
macbook$ aws ssm put-parameter \
> --name "projectA.shared.database_username" \
> --type SecureString \
> --description "Database connection username" \
> --value "db_user" \
> --key-id "$kms_projectA"
macbook$ aws ssm put-parameter \
> --name "projectA.production.database_password" \
> --type SecureString \
> --description "Database connection password" \
> --value "d)Dmfj@)(dJF!hp" \
> --key-id "$kms_projectA_production"

The . is used in the parameter name for organization. It also helps when building the Resources block of an IAM policy, because wildcards (e.g. shared.*) can be used instead of enumerating every parameter.

Note that no parameter is encrypted with the production KMS key, even though the EC2 instances will be given Decrypt permissions for it. Perhaps the project's code does not need any secrets from that key yet. This is fine: the server is future-proofed against any new shared production secrets without having to modify the IAM policy (a modification which may set off alerts for auditing or further scrutiny).

Creating the Infrastructure

Many articles show infrastructure being created manually. However, manually configuring infrastructure is prone to mistakes and drift. Instead, this article creates the infrastructure with CloudFormation to prevent those issues. It also means the infrastructure can be stored in version control and peer reviewed.

---
AWSTemplateFormatVersion: "2010-09-09"
Description: "Simple autoscaling EC2 service using secrets management"

Parameters:
  AmiId:
    Type: "AWS::EC2::Image::Id"
  EnvironmentName:
    Type: "String"
  InstanceType:
    Type: "String"
  KeyPairName:
    Type: "AWS::EC2::KeyPair::KeyName"
  ProjectName:
    Type: "String"
  SecurityGroupId:
    Type: "AWS::EC2::SecurityGroup::Id"
  SubnetId:
    Type: "AWS::EC2::Subnet::Id"

Resources:
  InstanceIamRole:
    Type: "AWS::IAM::Role"
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Action:
              - "sts:AssumeRole"
            Effect: "Allow"
            Principal:
              Service:
                - "ec2.amazonaws.com"
      Path: !Sub "/${ProjectName}/${EnvironmentName}/"
      Policies:
        - PolicyName: "secrets-management"
          PolicyDocument:
            Version: "2012-10-17"
            Id: "AllowAccessToParameters"
            Statement:
              - Sid: "AllowAccessToGetParameters"
                Effect: "Allow"
                Action: "ssm:GetParameters"
                Resource:
                  - !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/shared.*"
                  - !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${EnvironmentName}.*"
                  - !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${ProjectName}.shared.*"
                  - !Sub "arn:aws:ssm:${AWS::Region}:${AWS::AccountId}:parameter/${ProjectName}.${EnvironmentName}.*"
              - Sid: "AllowAccessToDecryptParameters"
                Effect: "Allow"
                Action: "kms:Decrypt"
                Resource:
                  - Fn::ImportValue:
                      !Sub "kms:key:shared:arn"
                  - Fn::ImportValue:
                      !Sub "kms:key:${EnvironmentName}:arn"
                  - Fn::ImportValue:
                      !Sub "kms:key:${ProjectName}:arn"
                  - Fn::ImportValue:
                      !Sub "kms:key:${ProjectName}-${EnvironmentName}:arn"
  InstanceProfile:
    Type: "AWS::IAM::InstanceProfile"
    Properties:
      Path: !Sub "/${ProjectName}/${EnvironmentName}/"
      Roles:
        - !Ref InstanceIamRole
  LaunchConfiguration:
    Type: "AWS::AutoScaling::LaunchConfiguration"
    Properties:
      AssociatePublicIpAddress: true
      BlockDeviceMappings:
        - DeviceName: "/dev/sda1"
          Ebs:
            DeleteOnTermination: true
            VolumeSize: 20    # GB
            VolumeType: gp2   # SSD
      IamInstanceProfile: !Ref InstanceProfile
      ImageId: !Ref AmiId
      InstanceType: !Ref InstanceType
      KeyName: !Ref KeyPairName
      SecurityGroups:
        - !Ref SecurityGroupId
      UserData:
        "Fn::Base64": !Sub |
          #!/bin/bash

          # assume AMI loaded with this script
          set-hostname "${ProjectName}-ec2"

          # assume AMI preloaded with cfn-tools
          cfn-signal -e $? --stack "${AWS::StackName}" --resource "AutoScalingGroup" --region "${AWS::Region}"
  AutoScalingGroup:
    Type: "AWS::AutoScaling::AutoScalingGroup"
    CreationPolicy:
      ResourceSignal:
        Count: 1
        Timeout: "PT15M"
    Properties:
      DesiredCapacity: 1
      LaunchConfigurationName: !Ref LaunchConfiguration
      MaxSize: 5
      MinSize: 0
      VPCZoneIdentifier:
        - !Ref SubnetId

This stack contains the minimum AWS resources necessary for this article. It is ephemeral, elastic, self-healing, and can access secrets in a least-privilege manner. The entire stack is self-contained and scoped to exactly the project and environment required for its operation. Note, too, that another stack launched with different input parameters will be assigned a different permission structure and will not be able to access secrets from other projects or environments. This is key to a successful access control structure.

Now launch the stack using the AWS CLI. In normal practice, I recommend using an orchestration tool (see my article on using Ansible for this purpose), but that is beyond the scope of this article so a manual solution is provided below.

$ aws cloudformation create-stack \
> --stack-name 'foo' \
> --template-body file://template.yml \
> --parameters ParameterKey=EnvironmentName,ParameterValue=production ParameterKey=ProjectName,ParameterValue=projectA ... \
> --capabilities CAPABILITY_IAM \
> --region us-east-1

Wait for the stack to complete and the autoscaling group to successfully launch the EC2 server. Using cfn-signal assures us that the instance is ready for use when the CloudFormation stack is in CREATE_COMPLETE status. It is an optional tool, but it greatly helps to validate EC2 instance health within an autoscaling group.
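
If you would rather block at the command line than watch the console, the CLI's built-in waiter does the same job:

$ aws cloudformation wait stack-create-complete --stack-name 'foo' --region us-east-1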

Log in to the server and retrieve the credentials stored in ParameterStore. We are assuming here that SSH has already been configured with the correct user to log in with.

$ ssh ec2-1-2-3-4.compute-1.amazonaws.com
... standard SSH messaging ...

projectA-ec2$ aws ssm get-parameters --names "shared.newrelic_key" "projectA.shared.database_username" "projectA.production.database_password" --region us-east-1
{
    "InvalidParameters": [],
    "Parameters": [
        {
            "Type": "SecureString",
            "Name": "projectA.production.database_password",
            "Value": "AQECAHgy4QYf4pdHC+u/4naTnL84uqU+GUKGaJxu6iNJejVYEwAAAG0wawYJKoZIhvcNAQcGoF4wXAIBADBXBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDDWKkf1rCMWDY64vFwIBEIAq3SKLG19B2N9sNVKHCJG7046PcJx4p4ZSSsKoGltaBiDNtyF9cyxaWjlV"
        },
        {
            "Type": "SecureString",
            "Name": "projectA.shared.database_username",
            "Value": "AQECAHgpfV7TSxkGxcRbwUZizH+Ip8D3RDXVW4IbTzMY/VpE0gAAAGUwYwYJKoZIhvcNAQcGoFYwVAIBADBPBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDDStd04EGoaSOvyytgIBEIAi15J0DrtBQ2hXghYRtIJprnI3WZiyiVAimKoyASXu4QjYlQ=="
        },
        {
            "Type": "SecureString",
            "Name": "shared.newrelic_key",
            "Value": "AQECAHglbhdVMBon892lWUjQB3GGs2l9QVmmWI+RpeuWf060PQAAAHoweAYJKoZIhvcNAQcGoGswaQIBADBkBgkqhkiG9w0BBwEwHgYJYIZIAWUDBAEuMBEEDEKqvoFoVBw/I7878wIBEIA3ZrhTLJk+HDVV2XOCcuM+sDbAL+1TLOjMGLvW6ugYCZiMQefhir3o0YbXnDLuYlACVneRthD03Q=="
        }
    ]
}

projectA-ec2$ aws ssm get-parameters --names "shared.newrelic_key" "projectA.shared.database_username" "projectA.production.database_password" --region us-east-1 --with-decryption
{
    "InvalidParameters": [],
    "Parameters": [
        {
            "Type": "SecureString",
            "Name": "projectA.production.database_password",
            "Value": "d)Dmfj@)(dJF!hp"
        },
        {
            "Type": "SecureString",
            "Name": "projectA.shared.database_username",
            "Value": "db_user"
        },
        {
            "Type": "SecureString",
            "Name": "shared.newrelic_key",
            "Value": "2039402939302293029230923093"
        }
    ]
}

Et voilà! No master keys were known or used by the end user, and the end user was not required to establish trust, yet the secrets were transmitted to the machine securely. If you don't specify the --with-decryption flag, the secrets are returned in their encrypted state. The second command shows how AWS auto-decrypts these values for you.

While the JSON output is not useful for configuring applications as-is, it shows what is possible when an application built with the AWS SDK handles the output of the SSM get-parameters call itself. During startup, the application could configure itself and never store the decrypted secrets on disk, holding them only in memory. Since this is dependent on the implementation of each project, the JSON output above will have to suffice.
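
As one possible sketch of that pattern (the script path, application path, and environment variable names are placeholders, and the AWS CLI plus jq are assumed to be installed on the instance), a small startup wrapper could export the decrypted values into the environment of the application process and then exec it, so nothing is ever written to disk:

projectA-ec2$ cat /usr/local/bin/start-projectA
#!/bin/bash
set -euo pipefail

# fetch and decrypt the parameters this instance is allowed to read
json=$(aws ssm get-parameters \
  --names "shared.newrelic_key" "projectA.shared.database_username" "projectA.production.database_password" \
  --with-decryption --region us-east-1)

# hold the secrets only in the process environment, never on disk
export NEWRELIC_KEY=$(echo "$json" | jq -r '.Parameters[] | select(.Name=="shared.newrelic_key") | .Value')
export DB_USERNAME=$(echo "$json" | jq -r '.Parameters[] | select(.Name=="projectA.shared.database_username") | .Value')
export DB_PASSWORD=$(echo "$json" | jq -r '.Parameters[] | select(.Name=="projectA.production.database_password") | .Value')

# hand over to the application, which inherits the environment
exec /usr/local/bin/projectA-app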

Conclusion

Admittedly, that was a lot of build-up for a rather anticlimactic ending, but that is the entire goal of adding security tooling such as ParameterStore to your AWS infrastructure. Secrets management should not be about configuring and maintaining clusters or handing off key management to applications; it must be simple and easily understood, so that anyone can spend an hour and grasp all of its capabilities. We use AWS to outsource the management of services that are beyond our core business offering, so why not hand secrets management to them as well?

What about the pricing? While the server pricing will depend on what type of instance is being run, and whether spot pricing is used, the KMS cost is easy to calculate. This article uses 4 keys, at $1 each per month. Add in annual key rotation and that price increases by $1/month per key each year (e.g. $4/mo the first year, $8/mo the second year), because the old key material is retained to decrypt old secrets. Your compliance needs or threat model may necessitate enabling key rotation. If you have more projects or additional environments, each of those will add to the overall cost. KMS can get expensive in a large company or one with many microservices, but it is relatively cheap considering the security benefits on offer.

One final note about the use of CloudFormation in this article: no internal references or account-specific credentials appear in any of these templates. This is the beauty of using CloudFormation to manage your infrastructure. You can use pseudo-parameters to avoid identifying details (e.g. account ID, region) without having to add redactions. Also, cleaning up is as simple as deleting the stacks, which takes about 3 minutes from start to finish.

If you have any questions about ParameterStore or any other AWS tooling for infrastructure or security, please feel free to contact me.