As your Terraform projects grow, so does the complexity of the code, and you may find yourself struggling with code duplication and needing more robust configurations. Using Terraform with Spacelift solves these issues and more, but if you are already using Terragrunt, we’ve also got you covered with our native integration.
Spacelift is designed to work seamlessly with your existing tool stack, so adding Terragrunt won’t require many changes to your workflow. You will still get all the benefits that Spacelift offers for other backends, and you can easily automate the deployment of complex Terraform deployments managed by Terragrunt, achieving a more elevated workflow.
With Spacelift’s native support, apart from running terragrunt init/plan/apply, you can also use the run-all option, which can be helpful in scenarios where organizations rely on this in their current process and are unable to do a full migration yet.
With full Terragrunt support and Spacelift’s stack dependencies, you get two distinct mechanisms for complex multi-stack deployments.
Let’s use an example to see how Terragrunt works with Spacelift. The code can be found on GitHub, and it contains the following Terragrunt configurations:
- config1: aws_vpc creation
- config2: aws_subnet creation
- config3: aws_security_group creation
The second and the third configurations will have a dependency on the first one.
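Assuming a standard Terragrunt layout (the exact structure in the linked repository may differ), each configuration lives in its own directory with a terragrunt.hcl file:

```
.
├── config1
│   └── terragrunt.hcl   # aws_vpc
├── config2
│   └── terragrunt.hcl   # aws_subnet, depends on config1
└── config3
    └── terragrunt.hcl   # aws_security_group, depends on config1
```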
To create a Terragrunt stack, use the same process as for creating any other stack, choosing the Terragrunt backend, as shown in the screenshot below:
Terragrunt backend options:
- Use Run All → Whether to use Terragrunt’s run-all functionality
- Smart Sanitization → Enable/disable Spacelift’s Smart Sanitization
- Terragrunt Version → Select the Terragrunt version
- Terraform Version → Select the Terraform version
A run can be triggered easily afterward, and because config2 and config3 depend on config1, they will be split into two groups: config1 will be created first, and config2 and config3 will be created in parallel after config1 is done.
To demonstrate this, a plan policy was added to the workflow to check if some specific tag keys are present for the resources. If those tag keys are missing, the plan will be automatically denied.
package spacelift

# This example plan policy enforces that specific tags are present on your resources.
#
# You can read more about plan policies here:
# https://docs.spacelift.io/concepts/policy/terraform-plan-policy

# List of required tag keys
required_tags := {"Name", "env", "owner"}

deny[sprintf("resource %q does not have all required tags (%s)", [resource.address, concat(", ", missing_tags)])] {
	# Iterate over all Terragrunt configurations
	terragrunt_configuration := input.terragrunt[_]

	# Iterate over all resource changes in the configuration
	resource := terragrunt_configuration.resource_changes[_]

	# Get the resource's tags after the change
	tags := resource.change.after.tags

	# Build the set of required tags that are missing from the resource
	missing_tags := {tag | required_tags[tag]; not tags[tag]}

	# Deny if any required tags are missing
	count(missing_tags) > 0
}

# Learn more about sampling policy evaluations here:
# https://docs.spacelift.io/concepts/policy#sampling-policy-inputs
sample = true
No tags have been defined for the resources initially, so the policy will automatically fail the run once attached to the stack:
I’ve added the tags for the resources to get past the plan policy, so let’s see how the Terragrunt plan looks:
  # aws_vpc.this will be created
  + resource "aws_vpc" "this" {
      + arn        = (known after apply)
      + cidr_block = "10.0.0.0/16"
      ...
      + tags       = {
          + "Name"  = "vpc1"
          + "env"   = "dev"
          + "owner" = "config1"
        }
      + tags_all   = {
          + "Name"  = "vpc1"
          + "env"   = "dev"
          + "owner" = "config1"
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  + vpc_id = (known after apply)

  # aws_subnet.this will be created
  + resource "aws_subnet" "this" {
      + arn      = (known after apply)
      ...
      + tags     = {
          + "Name"  = "subnet1"
          + "env"   = "dev"
          + "owner" = "config2"
        }
      + tags_all = {
          + "Name"  = "subnet1"
          + "env"   = "dev"
          + "owner" = "config2"
        }
      + vpc_id   = "fake-vpc-id"
    }

Plan: 1 to add, 0 to change, 0 to destroy.

  # aws_security_group.this will be created
  + resource "aws_security_group" "this" {
      + arn      = (known after apply)
      ...
      + tags     = {
          + "Name"  = "security_group1"
          + "env"   = "dev"
          + "owner" = "config3"
        }
      + tags_all = {
          + "Name"  = "security_group1"
          + "env"   = "dev"
          + "owner" = "config3"
        }
      + vpc_id   = "fake-vpc-id"
    }
After the run completes, all the resources can be seen in the above view with all their changed parameters.
Because this is the first run, you will see that both aws_subnet and aws_security_group have a value of fake-vpc-id for their vpc_id parameters.
Whenever you are working with Terragrunt, as a best practice, ensure that all outputs passed from one configuration to another have mock outputs defined. These mock values are used during the initial stack creation, or whenever you add new outputs, so that the plan can pass; once the real outputs exist, their actual values are used.
Below is an example of how the dependency on config1 was defined, with a mock value for an output called “vpc_id”, in the terragrunt.hcl file.
dependency "config1" {
  config_path = "../config1"

  mock_outputs = {
    vpc_id = "fake-vpc-id"
  }
}
This also means that you need to define an output called “vpc_id” in the first Terragrunt configuration.
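For reference, the corresponding output in config1 might look like the following (a minimal sketch; the resource name aws_vpc.this is assumed to match the plan output above):

```hcl
# config1 - outputs.tf
# Exposes the VPC ID so dependent configurations (config2, config3) can consume it
output "vpc_id" {
  value = aws_vpc.this.id
}
```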
Another consideration is that Terragrunt’s design makes it impossible to relate state files to each other in a consistent manner, so Spacelift cannot manage state files for Terragrunt stacks while guaranteeing that every state file is handled correctly. However, you can easily configure a backend block and store the state in your own backend. For example, storing the state files in an AWS S3 backend using a terragrunt.hcl file will look similar to this:
remote_state {
  backend = "s3"

  config = {
    bucket         = "bucket_name"
    key            = "config1/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "dynamodbtable"
  }
}
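To avoid repeating the backend block in every configuration, a common Terragrunt pattern (sketched here under the assumption of a root terragrunt.hcl that child configurations include via find_in_parent_folders()) derives the state key from each configuration’s path:

```hcl
# Root terragrunt.hcl - included by each child configuration with:
#   include {
#     path = find_in_parent_folders()
#   }
remote_state {
  backend = "s3"

  config = {
    bucket         = "bucket_name"
    # path_relative_to_include() gives each configuration (config1, config2, ...)
    # its own state file automatically
    key            = "${path_relative_to_include()}/terraform.tfstate"
    region         = "eu-west-1"
    encrypt        = true
    dynamodb_table = "dynamodbtable"
  }
}
```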
You can read more about how to configure an S3 remote backend in Terraform.
When using multiple projects in Terragrunt, you may have configurations that use the same names for resources and outputs, which makes it hard to identify which configuration defined them. This is not a problem with the Spacelift integration, because we prepend their addresses with the projects they originated from, so you can quickly identify where issues are occurring and control them better.
Spacelift’s native integration with Terragrunt gives you the flexibility to take advantage of Terragrunt’s features while elevating your workflow with all the capabilities of the platform.
By using the run-all option, you can deploy multiple configurations in a single stack, while Terragrunt manages your dependencies and executes non-dependent ones in parallel.
If you want to see this in action and discover how flexible our platform is, create a free account or book a demo with one of our engineers.
Terraform Management Made Easy
Spacelift effectively manages Terraform state and more complex workflows, supports policy as code, programmatic configuration, context sharing, drift detection, and resource visualization, and includes many more features.