Modern infrastructure management relies on secrets to keep environments secure. As most organizations adopt infrastructure as code, managing those secrets becomes a real challenge for technical teams.
In this post, we will explore multiple ways of managing secrets in Terraform code.
Terraform secrets are sensitive data, such as API keys, passwords, and database connection strings, used to configure and manage your infrastructure. These secrets are crucial for accessing and authenticating to various services and systems within your infrastructure. Terraform offers many different methods for managing these secrets, such as using environment variables, leveraging secret management tools like HashiCorp Vault and AWS Secrets Manager, or encrypting sensitive data.
Where are the secrets used in Terraform?
Secrets protect sensitive information about the organization’s infrastructure and operations, including system passwords, encryption keys, API keys, service certificates, and other confidential data. Keeping such information secret prevents unauthorized access, data breaches, and other critical security incidents.
Secrets are used in various phases of Terraform provisioning for activities like:
- Securing access to services provided by cloud platforms such as AWS, Azure, and Google Cloud
- Securing access to active databases that contain sensitive data, such as customer information, financial records, etc.
- Setting up authentication through API keys, OAuth tokens, and SSL certificates to allow the user access to applications
- Setting up access to network components such as routers, switches, and firewalls
Terraform uses secrets to automate infrastructure provisioning activities like the ones listed above. These secrets also end up in the state files Terraform references throughout its workflow.
Secrets are used to set up authentication and authorization to cloud resources. If secrets (e.g., access keys) are compromised, the cloud resources are exposed to potential security breaches. Terraform developers must understand the nature of secrets to take adequate measures to protect them.
Access to every cloud provider platform varies. For example, both AWS and Google Cloud provide a range of cloud services and resources to build and deploy applications in the cloud. To interact with AWS or Google Cloud, we write Terraform code and provide credentials to authenticate against the cloud APIs. For AWS, these credentials take the form of access keys and secrets. For Google Cloud, they are included in service account key files.
Below is an example of configuring AWS credentials in Terraform code. We have used the variables aws_access_key and aws_secret_key to pass the AWS access key and secret values.
provider "aws" {
access_key = var.aws_access_key
secret_key = var.aws_secret_key
region = var.aws_region
}
Similarly, the provider block for Google Cloud looks like the one below. The variable gcp_credentials_file stores the value of the GCP service account key file path.
provider "google" {
credentials = file(var.gcp_credentials_file)
project = var.gcp_project_id
region = var.gcp_region
}
Terraform config files and state files become vulnerable if they contain access keys and secrets in plain text. Such situations should be avoided at all costs.
Here is an overview of why it’s recommended to avoid storing secrets in Terraform config and state files:
- Anyone with access to the version control system can access that secret.
- Every host associated with the version control system stores a copy of the secret. Anyone using the host server can easily find and misuse the secrets.
- Any software running on those hosts can read the secret in plain text.
- It is difficult to audit who has access to the secrets and who uses them.
In short, storing secrets in plain text in the config and state files raises significant potential security risks.
Here are alternative methods of storing and managing secrets in the config and state files:
- Use input variables to store secrets and then reference them in configuration files. The values are passed during runtime. However, this is not the best way to manage secrets.
- Use Terraform’s built-in capability to mask the values of any resource or variable. Marking input variables as “sensitive” redacts the secrets being output on any console.
- Use environment variables to store secret values. However, this means anyone who has access to the host will also have access to these environment variables.
- Use external data sources to fetch secrets from external sources at runtime. For example, integrating with Vault/Secret manager applications helps to securely fetch secrets during runtime.
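For example, the Vault integration mentioned in the last point might look like the sketch below. The Vault address, secret path, and key names are hypothetical placeholders; the vault_generic_secret data source reads the secret at plan/apply time instead of storing it in the code.

provider "vault" {
  # Placeholder address; VAULT_ADDR and VAULT_TOKEN are usually supplied
  # through environment variables instead of being hardcoded.
  address = "https://vault.example.com:8200"
}

data "vault_generic_secret" "db" {
  # Hypothetical path of a secret stored in Vault's KV engine
  path = "secret/myapp/db"
}

locals {
  # Values are fetched at runtime; note that they still end up in the
  # Terraform state, so the state backend must be protected as well.
  db_username = data.vault_generic_secret.db.data["username"]
  db_password = data.vault_generic_secret.db.data["password"]
}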
When you run Terraform through automation and CI/CD pipelines, you can also use a tool such as Spacelift to manage your state for you. Spacelift’s Terraform state management features are extremely important for maintaining secure and reliable infrastructure deployments, and Spacelift also lets you access that remote state and manipulate it as needed. You can read more about external state access in the documentation.
Let’s see some examples:
By default, the terraform.tfstate file is generated in plain text whenever a Terraform command runs. A remote backend lets teams collaborate on Terraform projects while storing the state file securely, so sensitive data such as access keys and secrets is not exposed to unauthorized parties along with the rest of the state information.
Some security features to look out for in a remote backend are:
- Encryption: A built-in encryption feature helps provide an additional layer of security.
- Version control: To enable tracking of changes to infrastructure and roll-back capability.
- Access control: To align with the principle of least privilege and control access to the information stored in state files at a very granular level.
- Backup and recovery: In case of infrastructure failure, it should be possible to quickly recover and reinstate the state files to minimize the impact on infrastructure provisioning.
Here are the general steps to configure a secure remote backend in Terraform:
- Step 1: Choose a secure backend provider.
- Step 2: Create a backend configuration with the details of the backend provider and storage location.
- Step 3: Initialize the backend. This also migrates any pre-existing state information to the remote backend.
- Step 4: Create and apply the Terraform code.
- Step 5: Store the Terraform state in the remote backend.
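As a minimal sketch of steps 1 through 3, assuming an S3 bucket and a DynamoDB table (both hypothetical names) already exist for state storage and locking, the backend configuration might look like this:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state"      # hypothetical bucket name
    key            = "prod/terraform.tfstate"  # path of the state object
    region         = "eu-central-1"
    encrypt        = true                      # server-side encryption at rest
    dynamodb_table = "terraform-locks"         # hypothetical table for state locking
  }
}

Running terraform init after adding this block initializes the backend and offers to migrate any existing local state into the bucket.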
This helps improve the security of the information managed in state files and ensures the integrity of the infrastructure.
In Terraform, we use environment variables to supply the values our configuration needs to provision the desired infrastructure components.
We specifically use environment variables with the prefix TF_VAR_<variable_name> to define them as Terraform variables. At runtime, the value of the input variable (variable_name) is read from the corresponding environment variable with the TF_VAR_ prefix.
Learn more in our Terraform environment variables tutorial.
Let’s use an example of creating an RDS database instance that needs to set username and password attributes. Additionally, Terraform needs a couple of variables to access the AWS platform: access key, secret key, and region.
We begin by creating variables for these secrets:
# Define variables for secrets
variable "username" {
  type = string
}

variable "password" {
  type = string
}

# Define variables for AWS access key, secret key, and region
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "aws_region" {}
Next, we set the corresponding environment variables in the format TF_VAR_<variable_name>, where variable_name is the name of the input variable we defined in the previous step.
You can do this by running the commands below in the terminal with appropriate values.
export TF_VAR_aws_access_key=<access_key_value>
export TF_VAR_aws_secret_key=<secret_key_value>
export TF_VAR_aws_region=<region>
export TF_VAR_username=<username_value>
export TF_VAR_password=<password_value>
Next, we define the aws_db_instance resource in the Terraform configuration that uses these secrets. Also, note that we have defined the provider block with the corresponding input variables:
provider "aws" {
access_key = var.aws_access_key
secret_key = var.aws_secret_key
region = var.aws_region
}
# Create an AWS DB instance resource that requires secrets
resource "aws_db_instance" "mydb" {
allocated_storage = 10
db_name = "mydb"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
username = var.username
password = var.password
parameter_group_name = "mydb.mysql5.7"
skip_final_snapshot = true
}
Here, we stored the AWS keys in environment variables and accessed them through the TF_VAR_ variables, and did the same for the database username and password. The Terraform code above picks up the secrets automatically, so none of them are stored in plain text in the code.
Terraform hosts are the virtual or physical machines from which the infrastructure code is executed. They are used to provision and manage resources on cloud providers and on-premises infrastructure, ensuring consistency and reproducibility of the infrastructure.
Securing the Terraform host is crucial because:
- As the Terraform CLI is installed in the host, it must be protected against unauthorized access.
- It stops unwanted Terraform code execution that can impact infrastructure security.
- It protects Terraform config and state files against data leakage.
- It helps meet compliance requirements such as HIPAA and PCI DSS.
- It protects against insider threats from people who have access to the infrastructure.
Setting up and securing the Terraform host manually is complex and time-consuming. Additional costs may be involved, such as investing in security software, hiring security professionals, and addressing newly identified vulnerabilities. Failure to secure the host can lead to data loss, security breaches, and compliance violations.
File encryption is an effective technique to store and manage access keys and secrets. Terraform users can use this technique to encrypt sensitive information stored in the config and state files. This technique relies on:
- Encrypting the secrets
- Storing the cipher text in a file
- Checking that file into the version control
The most common solution is to store the keys in a key service provided by a cloud provider such as Azure Key Vault, AWS KMS, or Google Cloud KMS.
These key services avoid deferring the problem to human memory: instead of having to memorize a password that protects the encryption key (or store that password in a password manager and memorize the manager’s password instead), we let the cloud provider store the key and control access to it.
Securing Terraform secrets with AWS KMS
AWS KMS is Amazon’s key management service that encrypts sensitive data stored in Terraform config and state files. To implement AWS KMS in the RDS database example discussed previously, create a file with credentials as content in key-value format.
For the sake of this example, we have given it the name creds.yml.
username: username
password: password
Create a KMS key (symmetric, by default) in AWS, and use the command below to encrypt the credentials from creds.yml into a file that can be stored and checked into VCS.
aws kms encrypt \
--key-id <alias>OR<arn> \
--region eu-central-1 \
--plaintext fileb://creds.yml \
--output text \
--query CiphertextBlob > creds.yml.encrypted
Once the creds.yml.encrypted file is checked into VCS and cloned on another system, the credentials are fetched using the aws_kms_secrets data source, as shown below. A local variable is then used to hold the decrypted value of the ciphertext stored in the creds.yml.encrypted file.
data "aws_kms_secrets" "creds" {
secret {
name = "dbexample"
payload = file("${path.module}/creds.yml.encrypted")
}
}
locals {
db_creds = yamldecode(data.aws_kms_secrets.creds.plaintext["dbexample"])
}
Once the credentials are decrypted, use them to set the database resource credentials, as shown in the configuration below.
resource "aws_db_instance" "mydb" {
allocated_storage = 10
db_name = "mydb"
engine = "mysql"
engine_version = "5.7"
instance_class = "db.t3.micro"
username = local.db_creds.username
password = local.db_creds.password
parameter_group_name = "mydb.mysql5.7"
skip_final_snapshot = true
}
Although this is a very secure way of managing sensitive information, the process of encrypting files has certain drawbacks:
- Every time the credentials are updated, we have to encrypt the file locally and check it into VCS again.
- We need to take extra precautions while re-encrypting the file.
- Using file encryption adds steps in setting up and managing the encryption keys and files.
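For example, updating the credentials requires a decrypt, edit, and re-encrypt round trip. Assuming the same key, region, and file names used above (and GNU coreutils base64 flags), it might look like this:

# Decode the base64 ciphertext and decrypt it with KMS (the key ID is embedded
# in the ciphertext, so it does not need to be passed again)
base64 --decode creds.yml.encrypted > creds.yml.encrypted.bin
aws kms decrypt \
  --region eu-central-1 \
  --ciphertext-blob fileb://creds.yml.encrypted.bin \
  --output text \
  --query Plaintext | base64 --decode > creds.yml

# Edit creds.yml, then re-encrypt it exactly as before and commit the result
aws kms encrypt \
  --key-id <alias>OR<arn> \
  --region eu-central-1 \
  --plaintext fileb://creds.yml \
  --output text \
  --query CiphertextBlob > creds.yml.encrypted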
Secret stores, such as AWS Secrets Manager, are dedicated solutions for storing and managing secrets and keys. They are designed specifically to store secrets securely and prevent unauthorized access.
Here are a few popular secret stores:
- AWS Secrets Manager
- HashiCorp Vault
- AWS Systems Manager Parameter Store
- Google Cloud Secret Manager
Using AWS Secrets Manager for Terraform secrets
Let’s see an example using AWS Secrets Manager:
Create a secret to store our database credentials in AWS Secrets Manager. We have named this secret dbcreds. It is of the “other” type of secret and stores two key-value pairs: db_username and db_password.
Note that we have used the same encryption key used in the previous example to encrypt these secrets in Secrets Manager.
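If you prefer the CLI to the console, a roughly equivalent secret can be created with the command below. The placeholder values mirror the ones used earlier; the --kms-key-id flag is optional, and omitting it uses the default aws/secretsmanager key.

aws secretsmanager create-secret \
  --name dbcreds \
  --kms-key-id <alias>OR<arn> \
  --secret-string '{"db_username":"<username_value>","db_password":"<password_value>"}'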
To let Terraform read secrets from AWS Secrets Manager, we have to create the data sources shown below. The aws_secretsmanager_secret data source fetches the secret’s metadata, but it cannot be used to read the secret values. For that, we create a second data source, aws_secretsmanager_secret_version, and parse the values with the jsondecode() function.
data "aws_secretsmanager_secret" "dbcreds" {
name = "dbcreds"
}
data "aws_secretsmanager_secret_version" "secret_credentials" {
secret_id = data.aws_secretsmanager_secret.dbcreds.id
}
Back in our database resource configuration, change the username and password values as shown below.
# Create an AWS DB instance resource that requires secrets
resource "aws_db_instance" "mydb" {
  allocated_storage    = 10
  db_name              = "mydb"
  engine               = "mysql"
  engine_version       = "5.7"
  instance_class       = "db.t3.micro"
  username             = jsondecode(data.aws_secretsmanager_secret_version.secret_credentials.secret_string)["db_username"]
  password             = jsondecode(data.aws_secretsmanager_secret_version.secret_credentials.secret_string)["db_password"]
  parameter_group_name = "mydb.mysql5.7"
  skip_final_snapshot  = true
}
One advantage of using the Secrets Manager data sources in Terraform is that the values they read are automatically marked as sensitive, so the CLI output does not expose the secrets. (Note that the raw values can still end up in the state file, which is another reason to use a secure remote backend.)
The plan output below confirms this.
...
+ multi_az = (known after apply)
+ name = (known after apply)
+ nchar_character_set_name = (known after apply)
+ network_type = (known after apply)
+ option_group_name = (known after apply)
+ parameter_group_name = "mydb.mysql5.7"
+ password = (sensitive value)
+ performance_insights_enabled = false
+ performance_insights_kms_key_id = (known after apply)
+ performance_insights_retention_period = (known after apply)
+ port = (known after apply)
+ publicly_accessible = false
+ replica_mode = (known after apply)
+ replicas = (known after apply)
+ resource_id = (known after apply)
+ skip_final_snapshot = true
+ snapshot_identifier = (known after apply)
+ status = (known after apply)
+ storage_throughput = (known after apply)
+ storage_type = (known after apply)
+ tags_all = (known after apply)
+ timezone = (known after apply)
+ username = (sensitive value)
+ vpc_security_group_ids = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
As seen in the previous example, Terraform automatically masks values that are inherently secret. Terraform “knows” which data source attributes hold secrets because the providers that define those resources and data sources mark them as sensitive.
However, for custom resources and attribute values, where input variables supply sensitive information at runtime, we need to tell Terraform that those variables are sensitive.
Let’s return to the very first example, where we used TF_VAR_ environment variables to set values for the AWS access and secret keys.
The variables are defined below.
#Define variables for AWS access key, secret key, and region
variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "aws_region" {}
The corresponding TF_VAR_ variables were set as environment variables. In this case, Terraform does not “know” that these variables hold secret values.
If we define output variables to print the values of these keys, Terraform will NOT mask them.
output "accesskey_value" {
value = var.aws_access_key
}
output "secret_value" {
value = var.aws_secret_key
}
The values also end up in the state file, which is one more reason to store state securely. To keep them out of the CLI output, set the sensitive = true argument on the variables and outputs to mask their values.
The updated variables and outputs are shown below.
variable "aws_access_key" {
sensitive = true
}
variable "aws_secret_key" {
sensitive = true
}
output "accesskey_value" {
value = var.aws_access_key
sensitive = true
}
output "secret_value" {
value = var.aws_secret_key
sensitive = true
}
With the variables and outputs marked as sensitive, the terraform plan output redacts their values, similar to the snippet below:
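Changes to Outputs:
  + accesskey_value = (sensitive value)
  + secret_value    = (sensitive value)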
Role-based access control can be easily implemented using Terraform. Depending on your cloud provider, you will create different kinds of resources; in AWS, roles, policies, role policy attachments, users, and groups are the resources you will use most.
You should create different roles for different environments and accounts and implement the principle of least privilege. Regularly audit the role permissions to ensure they meet these criteria and make the changes accordingly.
Let’s look at a sample AWS role and policy that will be used for deploying your Terraform code:
resource "aws_iam_role" "multi_service_role" {
name = "ec2-lambda-role"
assume_role_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = "sts:AssumeRole"
Effect = "Allow"
Principal = {
Service = [
"ec2.amazonaws.com",
"lambda.amazonaws.com"
]
}
}
]
})
}
resource "aws_iam_role_policy_attachment" "power_user" {
role = aws_iam_role.multi_service_role.name
policy_arn = "arn:aws:iam::aws:policy/PowerUserAccess"
}
In the example above, we created a role that can be assumed by the EC2 and Lambda services and attached the PowerUserAccess policy, which grants permissions to most AWS services. In practice, you should create policies that grant exactly the level of permissions your role requires; that is how you implement the least privilege principle.
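As an illustration, a scoped custom policy could replace the PowerUserAccess attachment. The action list below is purely hypothetical; in a real setup it would mirror exactly what your Terraform code provisions:

# Hypothetical custom policy limited to the actions this role actually needs
resource "aws_iam_policy" "scoped_access" {
  name = "ec2-lambda-scoped-access"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "ec2:Describe*",
          "lambda:GetFunction",
          "lambda:InvokeFunction"
        ]
        Resource = "*" # narrow this to specific ARNs wherever possible
      }
    ]
  })
}

resource "aws_iam_role_policy_attachment" "scoped" {
  role       = aws_iam_role.multi_service_role.name
  policy_arn = aws_iam_policy.scoped_access.arn
}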
The best way to manage Terraform secrets is to use a specialized secrets management solution — such as Vault, OpenBao, or AWS Secrets Manager — alongside an infrastructure orchestration platform like Spacelift. This prevents secrets from being exposed in plain text or version control while enforcing best practices for authentication and access control.
By integrating your secrets manager with your orchestration platform via OIDC or another mechanism that enables dynamic credential generation, you can ensure secrets are accessed securely. Additionally, enforcing SSO and MFA within your orchestration platform strengthens security by limiting unauthorized access.
To minimize risk, secrets should be automatically rotated at regular intervals, and all sensitive data should be encrypted both at rest and in transit.
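As a sketch of automatic rotation, assuming the dbcreds secret from the earlier example and an existing rotation Lambda function whose ARN is supplied through a hypothetical variable, the Terraform configuration might look like this:

variable "rotation_lambda_arn" {} # hypothetical; ARN of an existing rotation Lambda

resource "aws_secretsmanager_secret_rotation" "dbcreds" {
  secret_id           = data.aws_secretsmanager_secret.dbcreds.id
  rotation_lambda_arn = var.rotation_lambda_arn

  rotation_rules {
    # Rotate the secret automatically every 30 days
    automatically_after_days = 30
  }
}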
Spacelift is an API-first, security-first product, and it has its own Terraform provider for spinning up Spacelift resources.
Use Spacelift’s policy as a code engine, and implement policies that restrict certain resources or certain parameters of resources, require multiple approvals for runs, and control what happens when you open a pull request or when you merge it.
By using contexts, reusable logical containers for your environment variables, mounted files, and lifecycle hooks, you ensure that your workflow is predictable, keeping security issues to a minimum.
You can also leverage private workers to create isolated runners for sensitive environments and implement drift detection and remediation.
Check out this article to learn more about what makes Spacelift secure. You can try Spacelift for free by creating a trial account or booking a demo with one of our engineers.
Managing secrets in Terraform code is not complex when you use the right technique. In this blog post, we discussed the approaches below:
- Not storing secrets in plain text in the config and state files
- Using a secure remote backend
- Using environment variables
- Encrypting files with KMS, PGP, or SOPS
- Using secret stores like Azure Key Vault and AWS Secrets Manager
- Masking sensitive variables in CLI output
The techniques above help ensure sensitive data is stored securely and protected from unauthorized access. Several methods exist to store and manage sensitive data. However, choose a technique appropriate to the nature of the secrets and take adequate measures to protect them.
Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.
Terraform Management Made Easy
Spacelift effectively manages Terraform state and more complex workflows, supports policy as code, programmatic configuration, context sharing, drift detection, and resource visualization, and includes many more features.