Terraform provides a well-defined and concise way to deploy infrastructure resources and changes. However, the typical workflow involves manual steps and checks that aren't easily scalable and that depend on human intervention to complete successfully.
This article will look into different approaches to Terraform automation and the pros and cons of each one. We will discuss practical strategies to automate Terraform deployments and provision infrastructure in an automated fashion.
Terraform is one of the most prominent tools in the Infrastructure as Code space. Part of its success lies in the straightforward and easy-to-operate workflow it provides. If you aren’t familiar with Terraform, check the various Terraform articles on Spacelift’s blog to get an idea.
The core Terraform workflow consists of three concrete stages. First, we generate the infrastructure as code configuration files representing our environment’s desired state. Next, we check the output of the generated plan based on our manifests. After carefully reviewing the changes, we apply the plan to provision infrastructure resources.
Typically, this workflow involves some manual steps that are easily automatable. For example, using the Terraform CLI, we have to run `terraform plan` to check the effect of our newly prepared configuration files. Similarly, we have to execute `terraform apply` to propagate the changes to the live environment.
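The three stages above map to a handful of CLI commands. A minimal local sequence, assuming a working directory that already contains configuration files, looks like this:

```shell
# Initialize the working directory: download providers, configure the backend
terraform init

# Preview the changes the configuration would make, saving the plan for review
terraform plan -out=tfplan

# After reviewing the plan output, apply exactly what was planned
terraform apply tfplan
```

Saving the plan to a file and applying that file, rather than running a bare `terraform apply`, guarantees the changes applied are the ones that were reviewed.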
If you want to have all of the important Terraform commands in one place, take a look at our Terraform Cheat Sheet.
For individual contributors and small teams, operating Terraform with the typical workflow and manual plan and apply steps might work perfectly fine. As teams grow, though, and Terraform usage scales across the organization, we quickly run into bottlenecks and issues.
As Terraform’s usage across teams matures, adding some kind of Terraform deployment automation is beneficial. Let’s look into some of the approaches to running Terraform in automation.
1) Automating Terraform Orchestration with Custom Tooling
Some teams continue running Terraform locally, but they add custom tooling, pre-commit hooks, and wrappers (e.g., Terragrunt) to enhance the core Terraform workflow. There are different wrapper tools to choose from that provide extra functionalities, such as keeping your configuration DRY, managing remote state, and managing different environments. Other teams prefer writing their own custom wrapper scripts to prepare Terraform working directories according to some standards.
This semi-manual approach stays closer to the core Terraform workflow and allows direct access to output and to operational commands (e.g., `terraform import`). On the other hand, since it involves manual steps, it is error-prone and requires human intervention. Note also that this approach usually requires privileged access to the underlying infrastructure provider, as well as to the Terraform state file, which can pose a security risk.
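As a trivial illustration of such a wrapper, a script might standardize the working directory layout and environment selection before invoking Terraform. The directory structure and file names here are hypothetical, not prescribed by any tool:

```shell
#!/usr/bin/env sh
# Hypothetical wrapper: usage ./tf.sh <env> <plan|apply>
set -eu
ENV="$1"
ACTION="$2"

# Each environment lives in its own directory (assumed layout)
cd "environments/${ENV}"

terraform init -input=false
terraform "${ACTION}" -var-file="../../vars/${ENV}.tfvars"
```

Real-world wrappers such as Terragrunt follow the same idea at a larger scale: they generate backend configuration, resolve dependencies between modules, and keep per-environment configuration DRY.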
Going one step further, some teams develop their own custom platform on top of Terraform manifests that allows end users to provision infrastructure resources by tweaking configuration values in a UI.
This approach abstracts away unnecessary detail and Terraform-specific knowledge from end users and allows them to manage infrastructure on demand without being blocked by another team. On the flip side, this path requires substantial engineering effort to develop a useful platform and adds maintenance overhead for the platform team.
2) Building Infrastructure Provisioning Pipelines
The most common approach for running Terraform in automation is to build infrastructure provisioning pipelines with a CI/CD tool. With this method, teams can enforce best practices, add tests and checks, and integrate the Terraform workflow into any CI/CD tool they are familiar with.
Building infrastructure delivery pipelines in CI/CD tools brings several challenges, as the core Terraform workflow must be adjusted for non-interactive environments.
The first step for automating Terraform deployments is to embrace Infrastructure as Code and GitOps and store your manifests in the version control system of your choice. Having versioned repositories as the source of truth for automating infrastructure delivery is a core requirement.
Next, the typical Terraform workflow has to be adjusted to run in remote environments. Since runs might be triggered in ephemeral environments, we have to initialize the Terraform working directory, run any custom checks or tests, and produce a plan output for the changing resources.
A common tactic is integrating these steps as part of every proposed code change (e.g., Pull Requests). Once other team members review the proposed changes and find the produced plan acceptable, they can approve and merge the Pull Request. Merging new code to the branch that is considered the source of truth triggers a terraform apply to provision the latest changes.
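In a typical setup of this kind, the CI system runs a plan stage for every pull request and an apply stage after the merge. Sketched as two non-interactive shell steps (how the plan artifact is passed between stages depends on your CI tool and is assumed here):

```shell
# --- Plan stage: runs on every proposed change (e.g., each Pull Request) ---
terraform init -input=false
terraform plan -input=false -out=tfplan   # plan output is posted for reviewers

# --- Apply stage: runs after the change is merged to the main branch ---
terraform init -input=false
terraform apply -input=false tfplan       # applies the previously reviewed plan
```

The `-input=false` flag ensures Terraform fails fast instead of waiting for interactive input that will never arrive in a pipeline.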
Tips for building your own Terraform automation around delivery pipelines:
- Your code should be stored in a version control system.
- Leverage the `-input=false` flag to disable interactive prompts. Any necessary input should be provided via the command line, environment variables, or configuration files.
- Use a backend that supports remote Terraform state to allow runs on different machines, and state locking for safety against race conditions.
- Prepare an environment to run Terraform with any dependencies pre-installed. To avoid downloading the provider plugins on every `init`, use the `-plugin-dir` flag to provide the path of preconfigured plugins on the automation system.
- To change the default backend configuration, e.g., to deploy with different permissions or to different environments, utilize the `-backend-config=path` flag when initializing. If you only need to run checks on the Terraform files that don't require initializing the backend (e.g., `terraform validate`), consider using the `-backend=false` flag.
- Integrate Terraform formatting, validation, linting, policy checks, and custom testing into the CI/CD pipelines to ensure your code conforms to your organization's standards.
- CI/CD pipelines usually run on distributed systems. To ensure that we apply the correct plan, output the plan to a file and package the whole Terraform working directory after each plan. Store these artifacts somewhere they can be fetched by the apply step, to avoid accidentally applying changes different from the ones reviewed.
- Optionally, use the `-auto-approve` flag to apply the changes without human intervention.
- Use environment variables prefixed with `TF_VAR_` to pass any necessary values using the CI/CD tool's mechanisms.
- Set the environment variable `TF_IN_AUTOMATION` to indicate that Terraform is running in automation mode. This adjusts the output of some commands to avoid messages that are misleading in an automation environment.
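The tips above can be combined into a single non-interactive run. In this sketch, the backend configuration file path, the variable name, and the artifact handling are illustrative assumptions, not fixed conventions:

```shell
#!/usr/bin/env sh
set -eu

export TF_IN_AUTOMATION=true          # suppress interactive-oriented output
export TF_VAR_environment="staging"   # hypothetical input variable

# Initialize with an environment-specific backend configuration (assumed path)
terraform init -input=false -backend-config=backend/staging.hcl

# Static checks before planning
terraform fmt -check
terraform validate

# Produce and persist the plan so the apply step uses exactly what was reviewed
terraform plan -input=false -out=tfplan
tar -czf workdir.tar.gz .             # archive the working directory as an artifact

# Later, on the apply machine, after restoring the artifact:
terraform apply -input=false -auto-approve tfplan
```

Archiving the whole working directory, not just the plan file, matters because a saved plan can only be applied against the same configuration and provider versions it was created with.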
Integrating the Terraform workflow into CI/CD infrastructure provisioning pipelines is a great way to automate infrastructure delivery. Running Terraform in CI/CD automation eliminates the need for individuals to hold privileged access, enforces a consistent workflow and way of working, and minimizes human intervention.
On the other hand, we have to account for the complexities of running on distributed systems and invest substantial engineering time and effort into building a custom pipeline that satisfies our team's needs.
A more robust approach to automating your Terraform workflows end-to-end would be to use Spacelift, a collaborative infrastructure delivery tool. Spacelift provides a more mature way of automating the whole infrastructure provisioning lifecycle. Its flexible and robust workflow allows teams to get up to speed quickly and collaborate efficiently.
Spacelift connects directly to the version control system of your choice and provides a truly GitOps native approach. It can support setups with multiple repositories or massive monorepos and leverages the APIs of the VCS provider to give you visibility.
The Spacelift runners are fully customizable Docker containers. This allows teams to enhance the Terraform workflow with custom providers, linters, security tools, and any other custom tooling you might see fit.
Spacelift has built-in CI/CD functionality for developing custom modules, allowing teams to incorporate testing, checks, and linting early into the development phase of modules. Another benefit of using Spacelift is its flexible workflow management: it provides a policy-based process to handle dependencies between projects and deployments with Trigger Policies.
Sometimes, the actual state of live environments drifts from the desired state, a concept known as configuration drift. Spacelift can assist you in automatically detecting and, if desired, reconciling configuration drift.
Spacelift provides a plethora of Policies to allow teams to define and automate rules governing the infrastructure as code. By utilizing Open Policy Agent, users can create their own custom policies and ensure the compliance of Terraform resources.
Check out the Documentation and start automating your infrastructure delivery easily!
In this blog post, we discussed different approaches and strategies to automate Terraform deployments and provision infrastructure in an automated fashion. We looked into the typical Terraform workflow and saw how we can enhance it with orchestration tools. Finally, we saw how Spacelift greatly assists us in bringing our Terraform automation to the next level.
Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.
Thank you for reading, and I hope you enjoyed this article as much as I did.
Automate Terraform Deployments with Spacelift
Automate your infrastructure provisioning, build more complex workflows based on Terraform using policy as code, programmatic configuration, context sharing, drift detection, resource visualization, and many more.