Terraform source code should live in a repository. This is close to a universal truth for source code development in general. The most common kind of repository is a git repository, but other similar technologies exist.
There are two common strategies for organizing Terraform code into repositories:
- Use a one-to-one mapping between Terraform configuration (or root module) and git repository.
- Use a single repository for most (or all) Terraform configurations. This is known as a monorepo.
In this blog post, we will explain a monorepo and how it differs from using multiple repositories in Terraform. We will also discuss how to manage Terraform in a monorepo environment.
What is a monorepo?
Monorepo is short for monolithic repository. The idea with a monorepo is to store all the source code related to multiple projects, applications, and systems in the same git repository.
In the context of Terraform, this means keeping the source code for multiple Terraform root modules (Terraform configurations) in the same repository. You might also keep shared Terraform modules for common infrastructure components in the same repository.
A famous example of a monorepo is Google's: the company keeps much of its codebase in a single repository. At Google's scale, this approach presents performance challenges. In general, however, a monorepo is a valid and straightforward strategy for source code management and does not require any special tooling.
A monorepo does not have to contain all source code or all your Terraform configurations. In a sense, you have a monorepo once you have more than one Terraform configuration in the same repository. This means you could have multiple monorepos in your environment.
In the rest of this blog post, we will assume that a Terraform monorepo means you have all, or most, of your Terraform configurations stored together in the same repository.
What is the difference between mono and multi-repo Terraform?
You can manage Terraform in a monorepo with multiple Terraform configurations in the same repository, or use multiple repositories, commonly with one Terraform configuration per repository.
From a repository management perspective, you will have more to manage in a multi-repo environment. In a monorepo, it is easier to reuse code and build common tooling that benefits all Terraform configurations.
A striking difference between mono- and multi-repo Terraform is module versioning: when you keep Terraform modules for common infrastructure components in the monorepo, you cannot version them in the same way as when each module lives in its own repository.
How to manage Terraform in a monorepo environment
In this section, we will review the challenges of managing Terraform in monorepo environments. Some of these are similar to what you would encounter in Terraform multi-repo environments, but they require special attention due to the specific monorepo structure.
For the purpose of this discussion, we will assume that the Terraform monorepo environment is hosted as a GitHub repository. Most of the comments are also valid for other version-control systems, with minor differences in the technical details.
We will also assume that the Terraform workflows (plan and apply) run in this GitHub repository using GitHub Actions. Where appropriate, we will discuss how an infrastructure provisioning system (e.g., Spacelift) changes this.
Terraform code structure
A Terraform configuration is a collection of related .tf files in a single directory, also known as a root module. One root module equals one Terraform state file. From the root module, you can reference reusable Terraform code in the form of Terraform modules.
This means that to use a monorepo for multiple Terraform configurations, we need to structure the code into multiple directories.
For a given Terraform configuration (named app1) that we want to provision across three different application environments (dev, stage, and prod), one suggested directory structure is as follows:
.
└── app1
    ├── main.tf
    ├── outputs.tf
    ├── variables.tf
    ├── dev
    │   └── terraform.tfvars
    ├── prod
    │   └── terraform.tfvars
    └── stage
        └── terraform.tfvars
The Terraform code for this application is identical across the three different application environments. Differences between the three environments are configured using variables provided in the .tfvars files. To provision a given environment, you would issue a command similar to the following:
$ terraform apply -var-file=dev/terraform.tfvars
You don't strictly need to place .tfvars files in separate directories for each environment, but collecting environment-specific files in their own directories is a good practice that keeps the root directory (app1) clean.
Different Terraform configurations (e.g., different applications) should be separated into different directories. Extending the above example with another Terraform configuration named app2 gives us the following directory structure:
.
├── app1
│   ├── main.tf
│   ├── outputs.tf
│   ├── variables.tf
│   ├── dev
│   │   └── terraform.tfvars
│   ├── prod
│   │   └── terraform.tfvars
│   └── stage
│       └── terraform.tfvars
└── app2
    ├── compute.tf
    ├── main.tf
    ├── network.tf
    ├── dev
    │   └── terraform.tfvars
    └── prod
        └── terraform.tfvars
As you can see, the structures of app1 and app2 do not have to be similar. The two applications are different Terraform root modules.
Terraform modules for common infrastructure components should be placed in a separate directory at the root level or in a directory nested under the root.
An example of how Terraform modules fit together with the root modules discussed above is shown in the following directory structure (for brevity, the full depth of the tree is not shown):
.
├── configurations
│   ├── app1
│   └── app2
└── modules
    ├── compute
    └── virtual-network
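A configuration consumes a shared module by referencing it with a relative path. A minimal sketch of what this could look like (the module inputs shown here are hypothetical):

# configurations/app1/main.tf (illustrative)
module "network" {
  # Relative path from the configuration directory to the shared module
  source = "../../modules/virtual-network"

  name = "app1-network"
}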
Dependency and state management
You need to deal with four different types of dependencies in a Terraform monorepo environment. These are not exclusive to monorepo environments, but it is important to understand them in this context.
These four dependencies are:
- Terraform state
- Terraform modules
- Terraform providers
- Shared infrastructure
Each Terraform configuration requires its own state configuration placed in a backend block. An example of a backend block using AWS S3 as the state backend looks like this:
terraform {
  backend "s3" {
    bucket = "spacelift"
    key    = "app1/prod/terraform.tfstate"
    region = "eu-north-1"
  }
}
Instead of hardcoding each argument in the backend block, you can partially configure the backend in code and provide the missing values during terraform init. An example of how to achieve this is to configure a simplified backend block like this:
terraform {
  backend "s3" {
    bucket = "spacelift"
    region = "eu-north-1"
  }
}
Then, in the automation workflows, provide the missing value (i.e., key) using the -backend-config flag during terraform init. An example of doing this in GitHub Actions is shown below:
on: push

jobs:
  terraform:
    # … initial steps omitted
    steps:
      - run: |
          terraform init \
            -backend-config="key=app1/prod/terraform.tfstate"
          terraform plan
If we instead configure this as a reusable workflow with input variables for the application name and environment, we can avoid hardcoding these details in the workflow:
on:
  workflow_call:
    inputs:
      app:
        type: string
        required: true
      env:
        type: string
        required: true

jobs:
  terraform:
    # … initial steps omitted
    steps:
      - run: |
          terraform init \
            -backend-config="key=${{ inputs.app }}/${{ inputs.env }}/terraform.tfstate"
          terraform plan
You can now reuse the same GitHub Actions workflow for any Terraform configuration in your monorepo, making sure the correct state file is used in each case.
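For illustration, a thin caller workflow per configuration and environment could then look like the following sketch (the file path and trigger filter are assumptions):

# .github/workflows/app1-prod.yml (hypothetical caller workflow)
on:
  push:
    paths:
      - "configurations/app1/**"

jobs:
  terraform:
    uses: ./.github/workflows/terraform.yml # the reusable workflow above
    with:
      app: app1
      env: prod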
State management could easily be abstracted away through the use of an infrastructure automation platform (e.g., Spacelift or HCP Terraform). With these platforms, you could rely on the built-in state management and skip adding backend blocks to the code.
The second dependency to consider is Terraform modules.
A risk you have to consider when using shared Terraform modules in a monorepo is that when a module is updated, each Terraform configuration that uses this module is affected. This means any change to the module code must be thoroughly tested. The Terraform test framework can be used to set up tests for module inputs and outputs, as well as plan and apply tests, to make sure they work as intended.
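As a sketch of what such a test could look like, assuming a shared module with a name variable and a matching output, a minimal test file might be:

# modules/virtual-network/tests/defaults.tftest.hcl (hypothetical)
run "name_is_passed_through" {
  command = plan

  variables {
    name = "test-network"
  }

  assert {
    condition     = output.name == "test-network"
    error_message = "The name output should match the name input variable."
  }
}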
If you need to individually version shared Terraform modules, you will need to use separate git repositories for these or publish them to some other consumable location.
The third dependency we deal with is Terraform providers. Providers are not shared like Terraform modules. Each Terraform root module should configure the required providers in its Terraform block.
An example of a Terraform root module that uses the AWS provider is shown below:
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 6.3"
    }
  }
}
Remember that all provider version constraints (from the root module and any external module) will be considered when you run terraform init. If a version that fulfills all conditions is found, it will be used; otherwise, an error will occur. To handle these situations, the application team should either update their own code to support the required Terraform provider version or submit a change proposal for the Terraform module in question.
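For example, a shared module that pins an older major version cannot be combined with the root module constraint above, and terraform init will fail (the constraint values here are illustrative):

# Inside a shared module: this constraint conflicts with the root
# module's "~> 6.3" requirement, so terraform init reports an error.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0, < 6.0"
    }
  }
}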
You will also need to consider how each Terraform configuration authenticates to the targeted provider environments (e.g., AWS). To authenticate from Terraform on GitHub to AWS, you can use OIDC and set up individual access to AWS accounts from specific GitHub environments. Details on how to do this are outside the scope of this blog post.
The last dependency we might have to deal with is shared infrastructure. Ideally, you should minimize the amount of shared infrastructure, but in some cases it is warranted. One example is shared networking infrastructure, where firewalls and other central components are consumed by many applications.
There are two main approaches to sharing infrastructure between Terraform configurations:
- Using Terraform data sources
- Sharing state files
Using Terraform data sources is the preferred method, as this requires no access to other state files. However, it does mean you create implicit dependencies between Terraform configurations. If a resource you read using a data source changes or disappears, you would not know this until you run your next terraform apply.
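A minimal sketch of the data source approach, assuming a shared VPC that is discoverable through a Name tag (the tag value and CIDR are hypothetical):

# Look up the shared VPC by tag instead of reading another
# configuration's state file.
data "aws_vpc" "shared" {
  tags = {
    Name = "shared-networking"
  }
}

resource "aws_subnet" "app1" {
  vpc_id     = data.aws_vpc.shared.id
  cidr_block = "10.0.10.0/24"
}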
Sharing resources through state files is not ideal because you need to provide read access to your entire state file for the teams that need access to it. This is true even if they only need access to read a single output value from your state file.
If you decide to use this method anyway, you can achieve this using the terraform_remote_state data source:
data "terraform_remote_state" "networking" {
backend = "s3"
config = {
bucket = "spacelift"
key = "networking/prod/terraform.tfstate"
region = "eu-north-1"
}
}
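You can then read individual values from the shared state through the outputs attribute. This assumes the networking configuration actually defines an output with the name you reference (vpc_id here is hypothetical):

locals {
  # Hypothetical output exposed by the networking configuration
  shared_vpc_id = data.terraform_remote_state.networking.outputs.vpc_id
}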
With Spacelift, you can configure stack dependencies to pass data between different Terraform configurations. This is a better approach because you do not have to share the entire state file, and you can be notified when a value you depend on has changed.
Access management
Access management is important for any repository structure you use, but monorepos raise specific challenges. Multiple developers are expected to work in the same repository, so you need a way to ensure that not every developer can change every part of the code.
On GitHub, you can use teams, repository rulesets, environments, and more to achieve robust access management. Note that all developers working in the repository will have (at least) read access to the repo. If this is unacceptable for any reason, you will need to split your monorepo into multiple repositories.
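One building block for this is a CODEOWNERS file, which requires a review from the owning team before changes to a given path can be merged. A minimal sketch with hypothetical team names:

# .github/CODEOWNERS
/configurations/app1/ @example-org/app1-team
/configurations/app2/ @example-org/app2-team
/modules/             @example-org/platform-team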
With a third-party automation platform (e.g., Spacelift), you can achieve fine-grained access management that is hard to replicate on GitHub alone. However, you will still need to set up rules for how changes can be pushed to your GitHub monorepo.
Automation workflows
The primary automation tool on GitHub is GitHub Actions. You can build generic Terraform workflows that can be reused for all Terraform configurations. You should aim to have workflows that cover the most common use cases (e.g., running through a normal Terraform workflow, running Terraform tests, and more).
Each Terraform configuration will still need its own specific workflow file, which is the entrypoint for its automation. This workflow can reference other reusable workflows.
If you are using an infrastructure automation platform (e.g., Spacelift or HCP Terraform), the automation workflows are mostly handled for you. You can configure the platform to trigger a terraform plan and terraform apply when code is committed to a specific directory in your monorepo.
Handling application secrets
Application secrets must be handled securely, regardless of the type of repository setup you are using. However, in a monorepo environment, it is crucial to ensure that secrets are not available to applications other than those that need them.
On GitHub, you can use environment secrets to separate secrets from each other logically. You can enable a given team to use a given environment where their secrets are available. This team will execute their GitHub Actions workflow in the context of their environment(s) and thus get access to their secrets.
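A sketch of how a job can run in the context of an environment and read its secrets (the environment name and secret name are hypothetical):

jobs:
  terraform:
    runs-on: ubuntu-latest
    environment: app1-prod # environment holding this team's secrets
    steps:
      - run: terraform apply -auto-approve
        env:
          # Hypothetical secret scoped to the app1-prod environment
          TF_VAR_db_password: ${{ secrets.DB_PASSWORD }}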
You will also benefit from using a third-party infrastructure automation platform like Spacelift here. You can configure your stacks with the required secrets, which will not be accessible from other locations.
You can achieve the same result if you are using HCP Terraform, where you can configure variables and secrets for each workspace individually. You could also integrate your workspace with HashiCorp Vault to generate short-lived credentials during terraform plan and apply.
Benefits of managing Terraform in a monorepo
A key advantage of using Terraform in a monorepo environment is that all the source code is available to all developers working in the same repository. You can easily see how other teams have configured their infrastructure, which allows you to learn from others and start configuring your own infrastructure quickly.
Other advantages include:
- The auditing process is easier when all your infrastructure is configured in one location. You don’t need to hunt down all Terraform configurations with the risk of forgetting a few repos.
- Reusing common code and workflows is easier. Keeping Terraform modules in the same repository as the Terraform configurations that use them simplifies module discovery: there's no need to figure out which modules exist and where to download them from.
- Using one or a few repositories for Terraform means a lower administrative burden for your git administrators.
- Keeping all Terraform code collected means you get better visibility into what types of resources you are using, which modules are used the most, what Terraform providers are in use, and more. You can start scraping your repositories for this information and build dashboards to visualize your Terraform estate.
Drawbacks of managing Terraform in a monorepo
One of the biggest disadvantages of using Terraform in a monorepo environment is the complexity of managing repository permissions. When you use multiple repositories, you can easily assign permissions to the teams that own them and give each team full admin permissions for its specific repos.
You can still allow other teams to read all repositories and even contribute changes through structured pull-request workflows.
A few other disadvantages are:
- Managing module upgrades can be complex if you keep your Terraform modules in the same repositories as the Terraform configurations that use them. You can’t version the modules in the same way you can when they are managed in their own separate repository. Introducing a single bad module update in a monorepo can negatively impact multiple Terraform configurations across multiple application environments.
- Having many teams working in the same repository with easy access to the shared Terraform modules could result in numerous proposed changes to these modules. Each team will have its own needs, which might be incompatible with those of other teams. This problem also occurs when you use separate git repositories for each external module, but it is multiplied in a monorepo environment.
- In a large git repository with multiple teams working concurrently, you might hit concurrency limits in CI/CD pipelines. For instance, if you use GitHub-managed runners for GitHub Actions, the number of workflow runs that can execute at the same time is limited. You can work around this limit by using self-hosted runners.
Best practices for managing Terraform in a monorepo
Most best practices that are valid for Terraform in general remain relevant in a Terraform monorepo environment. In this section, we highlight a few best practices that are especially important in these types of environments.
1. Use Terraform modules for common infrastructure components
You can implement common infrastructure components in the form of Terraform modules and make these available conveniently in the same monorepo as the Terraform configurations.
The benefit of keeping all of your Terraform root modules and other infrastructure modules in the same repository is that you have visibility into what is available. You can also see how other teams are using the same modules to learn how they work. The platform engineering team also gets a great overview of where each Terraform module is used.
This ease of management and usage comes at a cost. You will not be able to version your Terraform modules in the same way you do when you use individual repositories for each module.
2. Isolate Terraform state files
If you are using a single Terraform backend for state storage, you need a good naming convention for your Terraform state files. The convention should prevent accidental name collisions and clearly indicate which Terraform configuration each state file belongs to.
For example, suppose your Terraform monorepo has the following directory structure:
.
└── configurations
    ├── app1
    │   ├── dev
    │   ├── prod
    │   └── stage
    ├── app2
    │   ├── dev
    │   ├── prod
    │   └── stage
    └── app3
        ├── dev
        └── prod
You could use a Terraform state backend structure that mirrors the monorepo structure.
The following output shows the content of an AWS S3 backend for the directory structure shown above (the output is truncated to only show the relevant parts):
$ aws s3 ls --recursive s3://spacelift-terraform-state-eu-north-1
2025-07-14 19:38:56 4886 configurations/app1/dev/terraform.tfstate
2025-07-14 19:39:15 17916 configurations/app1/prod/terraform.tfstate
2025-07-14 19:39:07 5972 configurations/app1/stage/terraform.tfstate
2025-07-14 19:39:30 3827 configurations/app2/dev/terraform.tfstate
2025-07-14 19:39:51 38270 configurations/app2/prod/terraform.tfstate
2025-07-14 19:39:38 34443 configurations/app2/stage/terraform.tfstate
2025-07-14 19:40:02 1748 configurations/app3/dev/terraform.tfstate
2025-07-14 19:40:10 5244 configurations/app3/prod/terraform.tfstate
If you need to isolate the state files even further, you could set up multiple state backends. This strengthens the separation of Terraform state files, but introduces the additional overhead of managing multiple state backend locations.
3. Implement robust access management
Access management in a monorepo environment for Terraform can be complicated because you need to assign permissions at the file or directory level in the repository. How you achieve this depends on the features of the version-control system you are using.
On GitHub, you can set up repository rulesets and branch protection rules to configure how changes can be introduced to specific branches. You can also connect workflows with specific environments and configure rules for these environments, e.g., who can approve a change that targets a specific environment.
Use the access management support your version-control system offers.
4. Implement standardized Terraform workflows
Most teams working in the Terraform monorepo environment will need the same automation workflows. It is a bad idea to make each team reinvent the Terraform automation wheel.
On GitHub, these workflows can be standardized as reusable GitHub Actions workflows that you reference from other workflows. This allows each team to build workflows for their environments with customizations specific to their needs while reusing the common workflow steps from the reusable workflows.
Typical processes you should standardize include installing the Terraform binary, formatting the Terraform source code, running through plan and apply operations, handling the state file, and more.
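As a sketch, the shared steps of such a standardized workflow could look like this (the pinned Terraform version is an assumption):

steps:
  - uses: actions/checkout@v4
  - uses: hashicorp/setup-terraform@v3 # installs the Terraform binary
    with:
      terraform_version: "1.9.0" # hypothetical pinned version
  - run: terraform fmt -check -recursive
  - run: terraform init -input=false
  - run: terraform validate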
5. Reuse common Terraform tooling
When you work in a Terraform monorepo environment, most teams will encounter the same challenges and have the same Terraform needs. Examples include generating documentation for Terraform modules or performing security scans of the Terraform source code.
You should standardize a set of tools that covers the needs of the development teams. Make these tools available as part of the standardized Terraform workflows (see the previous section).
On GitHub, you can set up self-hosted GitHub Actions runners with the common set of tooling preinstalled. This speeds up workflow runs, which saves time for your developers and money for your organization.
If you use a third-party infrastructure automation platform (e.g., Spacelift), some of these tooling needs are built into the platform. One example is policy as code (e.g., OPA on Spacelift) and integrations with third-party tools (e.g., Infracost integration as a run task on HCP Terraform).
Terraform is powerful, but to achieve an end-to-end secure GitOps approach, you need a product that can run your Terraform workflows. Spacelift takes managing Terraform to the next level by giving you access to a powerful CI/CD workflow and unlocking features such as:
- Policies (based on Open Policy Agent) – Control how many approvals you need for runs, what kinds of resources you can create, and what parameters those resources can have. You can also control the behavior when a pull request is opened or merged.
- Multi-IaC workflows – Combine Terraform with Kubernetes, Ansible, and other infrastructure-as-code (IaC) tools such as OpenTofu, Pulumi, and CloudFormation, create dependencies among them, and share outputs.
- Build self-service infrastructure – You can use Blueprints to build self-service infrastructure; simply complete a form to provision infrastructure based on Terraform and other supported tools.
- Integrations with any third-party tools – You can integrate with your favorite third-party tools and even build policies for them. For example, see how to integrate security tools in your workflows using Custom Inputs.
Spacelift enables you to create private workers inside your infrastructure, which helps you execute Spacelift-related workflows on your end. Read the documentation for more information on configuring private workers.
You can check it out for free by creating a trial account or booking a demo with one of our engineers.
Key points
Managing Terraform in a monorepo environment offers both benefits and challenges.
A monorepo for Terraform is a repository with more than one Terraform root module. It often contains tens or even hundreds of Terraform root modules, and multiple teams work concurrently in it.
Key design decisions for a monorepo for Terraform include:
- How should you structure the Terraform code? This involves having a strategy for placing Terraform root modules and shared infrastructure modules in the repo.
- How should you handle dependencies? Dependencies come in four variants: state, providers, modules, and shared infrastructure.
- How should you handle access management? This is especially important when multiple developers work in the same repository. You need to utilize the access management features of your version-control system and set up a golden way of working.
- How should you build reusable automation workflows that benefit all the teams working in the monorepo environment?
- How should you handle application secrets? You should avoid giving everyone with access to the repository read access to secrets.
A significant advantage of a monorepo is that your source code is available for every developer to see, which benefits everyone. You can learn from how other teams have solved problems similar to yours.
Best practices for Terraform in monorepo environments are similar to best practices for Terraform in general, but extra care should be taken when handling Terraform modules, state files, and access management. You should use the shared code environment and standardize on solutions for problems that each team faces (e.g., automation workflows and state management).
Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.
Frequently asked questions
How do I safely isolate Terraform state for multiple environments inside a single monorepo?
To safely isolate Terraform state for multiple environments in a monorepo, use separate directories and backend configs per environment. Each environment (e.g., dev, staging, prod) should have its own folder with a main.tf, unique backend block (e.g., with separate key paths), and isolated state location. You can also use terraform.workspace if needed.
What’s the best way to version and release individual Terraform modules when they all live in one repository?
The best approach is to version each Terraform module independently using Git tags with a directory-based naming convention. For example, tag versions like network/v1.2.0 or modules/network/v1.2.0 to clearly associate each tag with its module path. Then use git tag and git diff to track changes per module.
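A sketch of this tagging flow on the command line (the module path and version numbers are illustrative):

$ git tag modules/network/v1.2.0
$ git push origin modules/network/v1.2.0
$ git diff modules/network/v1.1.0 modules/network/v1.2.0 -- modules/network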
When should I migrate from a multi-repo setup to a Terraform monorepo (and vice-versa)?
Switch to a monorepo when teams share modules, follow unified workflows, or need centralized policies. It simplifies coordination and reduces duplication. Use multi-repos when teams need independence, deploy separately, or want isolated pipelines. It improves focus, reduces blast radius, and scales better with large teams.
Automate Terraform deployments with Spacelift
Automate your infrastructure provisioning, and build more complex workflows based on Terraform using policy as code, programmatic configuration, context sharing, drift detection, resource visualization, and much more.