Nowadays, the need for robust security measures cannot be overstated. As more and more organizations move their workloads to the cloud, the risk of data breaches and unauthorized access has grown exponentially. This is where Vault comes to the rescue.
In this post, we will explore how easy it is to configure and use Vault with Spacelift, and how you can take advantage of it inside your workflow.
Vault has become the de-facto standard for managing and securing data like API keys, tokens, and passwords. It is designed to address the challenges associated with securing secrets in a modern, dynamic environment.
In mid-sized and large organizations, secrets are usually spread across multiple platforms and tools, making it difficult to manage and, most importantly, secure them. Vault’s capabilities solve this problem of “secret sprawl.”
It provides a comprehensive solution for managing secrets, but it requires careful configuration and management in order to harness all the features it offers. It does come with risks, and misuse could lead to exposure of sensitive data.
Some of Vault’s key features are:
- Secure Secret Storage → Gives you the ability to store secrets securely. All secrets are encrypted before they are written to persistent storage, so even if an attacker gains access to the raw storage, they cannot read the secrets.
- Dynamic Secrets → Unlike a basic secret, where you put the data into the store yourself, dynamic secrets can be generated on demand for some systems. For example, if your application needs access to an AWS service, it can ask Vault for AWS credentials, and Vault will generate credentials granting access to that service, which it will revoke after the TTL expires.
- Revocation → You can revoke not only a single instance of a secret but an entire tree of secrets (e.g., all secrets of a particular type).
- Secret Renewal → Secrets in Vault have a lease associated with them, at the end of which, Vault will automatically revoke those secrets. These leases can be renewed via the API.
- Data Encryption → Data can be encrypted and decrypted by Vault without storing it.
- Integrations → Vault integrates with many services, including AWS, Azure, GCP, Spacelift, Kubernetes, and even databases.
Installing Vault on the Spacelift Runner
In this section, we will explore two ways of installing Vault on the Spacelift Runner. This is useful if you want to use Vault from the CLI in your Spacelift workflow. You can either install it:
- during a before_init hook
- by building a custom Docker image
To install Vault in a before_init hook, paste the following code:
# Download and unpack the Vault binary
wget -O vault.zip https://releases.hashicorp.com/vault/1.13.1/vault_1.13.1_linux_amd64.zip
unzip vault.zip
# Move it to a directory on the runner and add that directory to the PATH
mkdir -p /home/spacelift/bin
mv vault /home/spacelift/bin
export PATH=$PATH:/home/spacelift/bin
It should look similar to this:
To create a Dockerfile with Vault installed, use the following repository, or build your own image based on this Dockerfile:
FROM golang:1.20-alpine3.16 as builder
WORKDIR /app
RUN apk add --no-cache git make bash && \
    git clone https://github.com/hashicorp/vault.git && \
    cd vault && \
    go mod download

# Build the Vault binary
RUN cd /app/vault && \
    make bootstrap && \
    make dev
FROM public.ecr.aws/spacelift/runner-terraform:latest as spacelift
COPY --from=builder /app/vault/bin/vault /usr/local/bin/vault
If you want to use the Docker image, you have to build it and push it to a supported registry. You can also use the image I’ve already pushed to Docker Hub: flaviuscdinu93/spacelift_vault:amd.
After that, add it to your stack, as in the following screenshot:
Dynamic Credentials via OIDC
Spacelift allows you to set up dynamic credentials for Vault via OIDC. Configuring access involves establishing a role within Vault that designates which Spacelift runs are permitted to access specific Vault secrets. This procedure can be executed using either the Vault CLI or Terraform.
In this example, we will use the CLI. First, we need to authenticate to Vault. To do that from the CLI, run:
export VAULT_ADDR="your_vault_address"
vault login
Provide your token when asked for it. You will receive the following message if the authentication is successful:
“Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run “vault login”
again. Future Vault requests will automatically use this token.”
As I am using HCP Vault, I also have to export a Vault namespace, which in my case is admin (the default value is root):
export VAULT_NAMESPACE=admin
Now, we need to enable the JWT authentication method as shown below:
vault auth enable jwt
Next, you need to add the configuration for your Spacelift account as an identity provider. This will take two parameters:
- bound_issuer → the URL of your Spacelift account
- oidc_discovery_url → the URL of the OIDC discovery endpoint for your Spacelift account (it is usually the same as the URL of your Spacelift account)
After you get those values, run this command:
vault write auth/jwt/config \
bound_issuer="https://bound_issuer_url" \
oidc_discovery_url="https://oidc_discovery_url"
In the next step, you will create a policy that defines which Vault secrets your Spacelift runs can access. It will look something like this:
vault policy write demo-policy - <<EOF
path "secrets/*" {
  capabilities = ["read"]
}
EOF
You will need to change the path to suit your needs. In the above example, I am giving read access to all the secrets in the secrets path.
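Note that if the secrets mount is a KV version 2 engine (the default on HCP Vault), reads go through an extra data/ prefix in the API path, so the policy has to account for it. A sketch, assuming a KV v2 mount named secrets:

```hcl
# KV v2 prefixes secret reads with data/, so "secrets/*" alone
# would not match reads against a v2 mount
path "secrets/data/*" {
  capabilities = ["read"]
}
```

If you are unsure which version your mount uses, `vault secrets list -detailed` shows it.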
As with all policies, you need to have a role to which this is attached. To do that, run the following command:
vault write auth/jwt/role/demo-role - <<EOF
{
  "role_type": "jwt",
  "user_claim": "iss",
  "bound_audiences": "spacelift_account",
  "bound_claims": { "spaceId": "root" },
  "policies": ["demo-policy"],
  "ttl": "10m"
}
EOF
Above, you should only change the bound_audiences and bound_claims parameters:
- bound_audiences should be your Spacelift account URL without https://
- bound_claims can be a number of things, but I believe you are going to use the spaceId parameter the most. For other options, you can check out all supported claims in Spacelift here.
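You can also bind on more than one claim at once to tighten the role further. A hypothetical fragment, assuming Spacelift exposes a runType claim (check Spacelift’s documentation for the exact claim names and values available):

```json
{
  "bound_claims": {
    "spaceId": "root",
    "runType": "TRACKED"
  }
}
```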
Using Vault from the Terraform Provider
If the Vault instance is public, it is fairly easy to connect to it using the Terraform provider. If you have set up dynamic credentials following the example above, the only thing you need to do is set the TERRAFORM_VAULT_AUTH_JWT environment variable to the Spacelift OIDC token, either directly on the stack or in a context.
Vault offers other authentication methods, which you can find in Terraform’s documentation.
The provider configuration should be similar to this:
provider "vault" {
  address          = var.vault_address
  skip_child_token = true

  auth_login_jwt {
    role = "demo-role"
  }
}
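With authentication in place, reading a secret from Terraform is just a data source away. A minimal sketch, assuming a secret at secrets/vpc with a cidr_block key (adjust the path to match your mount and KV version):

```hcl
# Hypothetical read of the secret used later in this post
data "vault_generic_secret" "vpc" {
  path = "secrets/vpc"
}

resource "aws_vpc" "this" {
  cidr_block = data.vault_generic_secret.vpc.data["cidr_block"]
}
```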
To set up the TERRAFORM_VAULT_AUTH_JWT environment variable, simply leverage a before_init hook like this:
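A sketch of what that hook could contain (SPACELIFT_OIDC_TOKEN is injected by Spacelift into the run environment):

```shell
# Hand the Spacelift-provided OIDC token to the Vault Terraform provider
export TERRAFORM_VAULT_AUTH_JWT="$SPACELIFT_OIDC_TOKEN"
```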
Vault examples with Spacelift
In my Vault account, I have created a secret that holds a VPC cidr_block in the secrets/vpc path. I want to use this cidr_block in my Terraform configuration. One of the examples leverages the Vault provider, and the other uses the CLI to get the value and save it to an environment variable.
The example code can be found here. It simply creates a VPC and a subnet in AWS.
The tf_vault folder will leverage the vault provider to get the value, so in this case, the only thing we need to do in a before_init hook is to export the TERRAFORM_VAULT_AUTH_JWT environment variable to use the SPACELIFT_OIDC_TOKEN, as we’ve already set up dynamic credentials via OIDC.
For this to work for you, make sure you change the vault_address Terraform variable to point at your Vault instance and set up your AWS credentials. After that, you are good to go:
Note the values for the cidr_blocks are sensitive.
It’s a little trickier to use the second approach by leveraging the CLI. In the first approach, we didn’t even need to have Vault installed on the runner image, but now we do. We also need to export a couple of other variables to ensure that everything is working properly.
The code example can be found in the tf_env_vars folder. I will be doing everything in a before_init hook to make it easier to follow.
After the installation of Vault as shown above, you need to:
# Point the CLI at the Vault server and (for HCP Vault) the namespace
export VAULT_ADDR=your_vault_address
export VAULT_NAMESPACE=admin
# Exchange the Spacelift OIDC token for a Vault token using the JWT role
export VAULT_TOKEN=$(vault write -field=token auth/jwt/login role=demo-role jwt=$SPACELIFT_OIDC_TOKEN)
# Read the secret and expose it to Terraform as an input variable
export TF_VAR_vpc_cidr=$(vault kv get -field=cidr_block secrets/vpc)
We export the Vault server address and namespace, and then generate a token from the JWT login to leverage the dynamic credentials we set up earlier.
Finally, we get the cidr_block secret from Vault and export it as a Terraform input variable.
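The TF_VAR_ prefix is Terraform’s standard mechanism for feeding input variables through the environment: anything exported as TF_VAR_&lt;name&gt; becomes the value of variable &lt;name&gt;. A quick illustration with a placeholder value:

```shell
# Placeholder value for illustration; in the real hook it comes from Vault
export TF_VAR_vpc_cidr="10.0.0.0/16"
# Terraform would read this as var.vpc_cidr
echo "$TF_VAR_vpc_cidr"
```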
In the before_init hook, the whole process is similar to what you see above.
After running a plan, the output will be:
Because we’ve passed the vpc_cidr value through an environment variable, the values in the plan will no longer be marked as sensitive.
In this post, we’ve explored how to create an integration between Vault and Spacelift. By leveraging dynamic credentials via OIDC and using the Terraform provider directly, you won’t need to install Vault at all on your Spacelift runner.
On the other hand, if you just need to take some values from Vault and provide them as input variables, and you don’t want to use the Terraform provider, you will need to install Vault on the runner, but your Terraform code will be simpler.
Harnessing a secure secret storage tool like Vault and incorporating it in your Spacelift workflow streamlines secret handling, reinforces security, and makes it much easier to maintain compliant infrastructure. To find out more ways Spacelift makes managing IaC easier, schedule a demo with one of our engineers or start a free trial.