Terraform Data Sources – How They Are Utilized

Terraform has redefined the way we manage and provision our cloud resources. With its declarative approach, Terraform enables us to define our desired infrastructure state and effortlessly bring it to life across various cloud providers. Data sources are a fundamental concept that enables Terraform to gather information from existing resources and incorporate it into the configuration. 

In this post, we delve into their significance, mechanics, and applications, uncovering how they enhance the precision and adaptability of infrastructure management.

We will cover:

  1. What is a data source in Terraform?
  2. How to use data sources in Terraform?
  3. Terraform data source examples
  4. Refreshing data sources

What is a data source in Terraform?

Data sources in Terraform are used to fetch information about resources that exist outside of Terraform – for example, a list of IP addresses a cloud provider exposes – and use it to configure your Terraform resources. Data sources serve as a bridge between the current infrastructure and the desired configuration, allowing for more dynamic and context-aware provisioning.
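A quick sketch of that IP address example, using the aws_ip_ranges data source (the region and service values are illustrative, following the provider documentation's example):

# A sketch: fetch the public IP ranges AWS publishes for EC2 in one region
data "aws_ip_ranges" "european_ec2" {
  regions  = ["eu-central-1"]
  services = ["ec2"]
}

The returned data.aws_ip_ranges.european_ec2.cidr_blocks list could then be referenced in, say, security group rules.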

Organizations – especially those that have begun adopting IaC – often provision their infrastructure via multiple processes and typically have a pre-existing set of cloud resources and architectures deployed. When new cloud components are developed using IaC, data sources help query various attributes of those pre-existing cloud resources. This is very useful for IaC adoption and dynamic infrastructure provisioning.

Even when all the infrastructure components are developed using IaC, they are often developed independently, based on logical separation.

For example, one IaC repository may contain resource declarations for databases and storage, while others contain declarations for compute resources. Thus, multiple infrastructure components are provisioned using separate Terraform projects.

In such cases as well, data sources play a crucial role in sharing the resource information, which is made available only after provisioning is complete.

For example, compute resources may depend on a database connection string so that the applications deployed on them can perform I/O operations. The database connection string, credentials, and other such resource attributes are made available using data sources.
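As a minimal sketch of this (the instance identifier and local name here are hypothetical), the connection details of an RDS instance provisioned by another project could be read like so:

# A sketch: read connection details of an RDS instance managed elsewhere
# ("shared-postgres" is a hypothetical identifier)
data "aws_db_instance" "shared_db" {
  db_instance_identifier = "shared-postgres"
}

# Compose a connection endpoint from the returned attributes
locals {
  db_endpoint = "${data.aws_db_instance.shared_db.address}:${data.aws_db_instance.shared_db.port}"
}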

Before we move on to the use case examples, let us look at the differences between data sources and a few related concepts to avoid confusion.

What is the difference between variables and data in Terraform?

Terraform variables and data sources serve distinct purposes within the infrastructure provisioning process. Variables are used to parameterize our Terraform configurations. They allow us to define reusable values that can be customized for different environments or deployments. Variables make our configurations more flexible and easier to manage by centralizing values that might change based on context.

Data sources, on the other hand, are used to retrieve information from external systems or existing resources and incorporate that information into our configuration. Data sources provide dynamic, context-aware attributes that can be used within our resource definitions. They make our configurations more intelligent and adaptable by leveraging information from the real world.
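A minimal sketch of the contrast (names are illustrative): a variable holds a value we supply, while a data source looks one up from the provider at plan time.

# A variable: a value we choose and pass in
variable "environment" {
  description = "Deployment environment, supplied by us"
  default     = "staging"
}

# A data source: a value queried from AWS
data "aws_region" "current" {}

# var.environment is our input; data.aws_region.current.name comes from the real world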

What is the difference between import and data source in Terraform?

The “import” command in Terraform allows us to bring existing resources under Terraform management. It helps us incorporate resources that were created outside of Terraform into our Terraform state so that we can manage them using Terraform moving forward.

A data source is used to query information from external systems or existing resources and incorporate that information into our Terraform configuration. It provides dynamic attributes that make our configurations context-aware and responsive to changes. Data sources only query for information; they do not bring resources under Terraform management.
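A rough sketch of the contrast (the bucket and resource names are assumptions): importing adopts an existing bucket into state, while a data source merely reads it.

# Import: adopt an existing bucket into Terraform state, then manage it here
# (run: terraform import aws_s3_bucket.managed my-existing-bucket)
resource "aws_s3_bucket" "managed" {
  bucket = "my-existing-bucket"
}

# Data source: only read the bucket's attributes; Terraform never manages it
data "aws_s3_bucket" "readonly" {
  bucket = "my-existing-bucket"
}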

What is the difference between resources and data sources in Terraform?

Resources in Terraform represent the infrastructure components we want to create, manage, or delete. They define the desired state of a particular resource type, such as virtual machines, databases, networks, and more. Resources are the building blocks of our infrastructure and directly interact with our cloud provider’s APIs to create or modify actual resources.

Data sources are used to query information from existing resources or external systems. They provide a way to fetch specific attributes or data that we need to incorporate into our Terraform configuration. Data sources do not create or manage resources; they retrieve information to inform the configuration of our resources.

Terraform data sources vs. locals

Local variables in Terraform are used to store and reuse values within our Terraform configuration. They provide a way to keep our configuration DRY (Don’t Repeat Yourself) by defining a value in one place and referencing it in multiple locations. They are defined within a module or a single file to store values such as strings, numbers, lists, or maps, thus making our configuration more readable and maintainable.

Data sources are used to query external systems or existing resources for information that we need to incorporate into our configuration. They provide dynamic attributes that enrich our configuration with real-world data, such as information about existing resources, cloud provider metadata, or external system details.
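A minimal sketch of the two side by side (names are illustrative):

locals {
  # A local: a value defined once and reused within this configuration
  common_tags = {
    Project = "demo"
  }
}

# A data source: a value fetched from the real world
data "aws_caller_identity" "current" {}

# local.common_tags is static; data.aws_caller_identity.current.account_id
# reflects whichever AWS account the configuration runs against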

How to use data sources in Terraform?

To query data from pre-existing, real-world resources in Terraform, we use a special data resource, as shown below.

data "provider_type" "name" {
  # Configuration options for the data source (filters)
}

Here, the provider_type represents the type of the object we want to query.

For example, when querying information about VPCs, this provider_type would be "aws_vpc", as seen in the documentation.

The name is chosen by us to differentiate between the various data sources being queried and to refer to them elsewhere in the configuration. Within this resource block, we provide configuration options that specify filter conditions to fetch the right data.

For example, we may have multiple VPCs provisioned in our AWS account but be interested in one specific VPC. Assuming we know the ID of that VPC, we can supply it via the "id" attribute. We can then use the data returned by this data source – information about the VPC – to configure other resources in our configuration.
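A minimal sketch of this pattern (the VPC ID and resource names are placeholders):

# Look up an existing VPC by its ID
data "aws_vpc" "selected" {
  id = "vpc-0123456789abcdef0" # placeholder ID
}

# Use the returned attributes to configure other resources,
# e.g., carve a subnet out of the queried VPC's CIDR range
resource "aws_subnet" "example" {
  vpc_id     = data.aws_vpc.selected.id
  cidr_block = cidrsubnet(data.aws_vpc.selected.cidr_block, 4, 1)
}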

The Terraform Registry documentation maintains the list of all data sources, along with the configuration options used to fetch the information for each. Let us explore various use cases for the data source resource.

Terraform data source examples

All the examples discussed in this post are available in this GitHub repository.

Example 1: Using data sources to access external resource attributes

Let us assume we need to fetch details like the ID, ARN, region, and domain name of a specific S3 bucket in our AWS account. To do this, we use the "aws_s3_bucket" data source and provide the name of the bucket in the "bucket" attribute, as shown below.

data "aws_s3_bucket" "existing_bucket" {
  bucket = "sumeet.life"
}

I have a bucket named "sumeet.life" in my AWS account. It also has static website hosting enabled to serve a basic static website.

When this Terraform configuration, with appropriate provider settings, is initialized and applied, Terraform reads the information from AWS and makes it available under "data.aws_s3_bucket.existing_bucket".

To access the desired information, let us create a few output variables like so:

output "bucket_id" {
  value = data.aws_s3_bucket.existing_bucket.id
}

output "bucket_arn" {
  value = data.aws_s3_bucket.existing_bucket.arn
}

output "bucket_region" {
  value = data.aws_s3_bucket.existing_bucket.region
}

output "bucket_domain_name" {
  value = data.aws_s3_bucket.existing_bucket.bucket_domain_name
}

The attributes listed above are available for all buckets, even if website hosting is not enabled.

The bucket_domain_name attribute is returned by AWS by simply joining two strings – the bucket name and "s3.amazonaws.com".

However, certain attributes are available only for S3 buckets with static website hosting enabled, like:

// Will be printed only for S3 buckets with Static website hosting enabled
output "bucket_endpoint" {
  value = data.aws_s3_bucket.existing_bucket.website_endpoint
}

If static website hosting is not enabled for a given S3 bucket, the endpoint attribute is simply not returned.

Running the terraform apply command displays all the output variables, as shown below.

.
.
.
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes


Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

bucket_arn = "arn:aws:s3:::sumeet.life"
bucket_domain_name = "sumeet.life.s3.amazonaws.com"
bucket_endpoint = "sumeet.life.s3-website.eu-central-1.amazonaws.com"
bucket_id = "sumeet.life"
bucket_region = "eu-central-1"

Example 1 source

Example 2: Using data from remote state files

This special data source is used to read information from remote state files. When infrastructure is managed using a Terraform configuration, Terraform stores information about the managed resources in a state file, often in a remote backend. The "terraform_remote_state" data source is used to query information from this state file.

Let us assume that the VPC and networking configuration is managed in a different Terraform project, and its state file is kept in an S3 bucket. We want to consume information about this VPC, its subnets, and perhaps security groups in our Terraform configuration. To do this, we first declare the data resource as below.

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "tfremotestate-vpc"
    key = "state" // Path to state file within this bucket
    region = "eu-central-1" // Change this to the appropriate region
  }
}

Here, we specify the type of remote backend to be queried, along with its configuration options. Note that only the information exposed through output variables can be accessed using this data source. The VPC project has to explicitly declare output variables so that the values are available to our project.

Note: the source code for VPC configuration is available in this GitHub repository.
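For reference, the VPC project would declare outputs along these lines (a sketch; the resource names are assumptions, but the output names match the ones consumed below):

# In the VPC project: expose subnet attributes to other projects
output "subnet_pub_id" {
  value = aws_subnet.public.id # hypothetical resource name
}

output "subnet_pri_id" {
  value = aws_subnet.private.id # hypothetical resource name
}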

To access information like subnet IDs and subnet names, we declare the output variables shown below. These variables access the outputs exposed by the VPC project from its state file, using the format below.

data.terraform_remote_state.network.outputs.<output_variable_name>

output "pub_subnet_id" {
  value = data.terraform_remote_state.network.outputs.subnet_pub_id
}

output "pri_subnet_id" {
  value = data.terraform_remote_state.network.outputs.subnet_pri_id
}

output "pub_subnet_name" {
  value = data.terraform_remote_state.network.outputs.subnet_pub_name
}

output "pri_subnet_name" {
  value = data.terraform_remote_state.network.outputs.subnet_pri_name
}

When this Terraform configuration is initialized and applied, the subnet information is printed as below.

.
.
.
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes


Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

pri_subnet_id = "subnet-03a33174741fd9cb4"
pri_subnet_name = "my-private-subnet"
pub_subnet_id = "subnet-0fad314b1f1e652bc"
pub_subnet_name = "my-public-subnet"

It is important to note that the usage of this data source is discouraged due to various security concerns, one of which we will see in the next example. The main reason is that the user credentials used to access these output variables inherently have access to the complete state file, which may contain sensitive information.

Instead, it is recommended to explicitly publish the data in a separate application (like Consul) or use data sources as demonstrated in Example 1.

Example 2 source

Read also how to build AWS VPC with Terraform.

Example 3: Using data sources to access sensitive data from remote state

As mentioned in the previous example, to further demonstrate why terraform_remote_state is not a good way to access data, in this example we access sensitive information exposed in Terraform state files.

Even when values in a Terraform configuration are marked as sensitive, they are not encrypted in the state file: these sensitive values are stored and transmitted in plain text. The sensitive attribute is only used to mask the values when they are printed in logs and CLI output.
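For example, the VPC project might mark its CIDR output as sensitive like so (a sketch based on the aws_vpc.sl_vpc resource shown below):

# sensitive = true only masks the value in CLI output;
# it is still stored in plain text in the state file
output "vpc_cidr" {
  value     = aws_vpc.sl_vpc.cidr_block
  sensitive = true
}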

Let us assume that the VPC information we are trying to access includes a CIDR range that is considered sensitive. When we print the state information of this VPC from the VPC project's Terraform CLI, the CIDR range is masked, as shown below.

terraform state show aws_vpc.sl_vpc
# aws_vpc.sl_vpc:
resource "aws_vpc" "sl_vpc" {
    arn                                  = "arn:aws:ec2:eu-central-1:532199187081:vpc/vpc-0cb1aa34b173b1bb6"
    assign_generated_ipv6_cidr_block     = false
    cidr_block                           = (sensitive value)
    default_network_acl_id               = "acl-0cec824a344faea90"
    default_route_table_id               = "rtb-05fc6e24d9eb665b4"
    default_security_group_id            = "sg-0cca1301b62307e97"
    dhcp_options_id                      = "dopt-9a479ff0"
.
.
.

However, when we access the same value from our project using the terraform_remote_state data source, it is printed in plain text.

output "vpc_cidr" {
  value = data.terraform_remote_state.network.outputs.vpc_cidr
}

Initialize and run terraform apply.

Apply complete! Resources: 0 added, 0 changed, 0 destroyed.

Outputs:

pri_subnet_id = "subnet-03a33174741fd9cb4"
pri_subnet_name = "my-private-subnet"
pub_subnet_id = "subnet-0fad314b1f1e652bc"
pub_subnet_name = "my-public-subnet"
vpc_cidr = "192.168.0.0/16"

Sensitive values are thus not completely secure and need a more holistic approach to managing the security around them.

Example 3 source

Example 4: Using data sources to access AWS secrets

Secrets Manager is a service provided by AWS to manage sensitive data. It stores sensitive values securely using encryption and makes them available to various services for automation purposes.

In Terraform configurations, these secrets are accessed using data sources.

In this example, we provision an RDS Postgres database instance and set its master username and password attributes by fetching those values from the Secrets Manager service. The data resources below fetch the secret values and make them available for the RDS configuration.

data "aws_secretsmanager_secret" "mydb_secret" {
  arn = "arn:aws:secretsmanager:eu-central-1:532199187081:secret:pg_db-dKuRrd"
}

data "aws_secretsmanager_secret_version" "mydb_secret_version" {
  secret_id = data.aws_secretsmanager_secret.mydb_secret.id
}

We then use these secret values to set the username and password in the aws_db_instance resource, as shown below.

Here, we use the jsondecode() Terraform function to decode the secret string, then access the username and password attributes to read the corresponding values.

resource "aws_db_instance" "my_db" {
  allocated_storage = 20
  storage_type = "gp2"
  engine = "postgres"
  engine_version = "13.3"
  instance_class = "db.t2.micro"
  username = jsondecode(data.aws_secretsmanager_secret_version.mydb_secret_version.secret_string).username
  password = jsondecode(data.aws_secretsmanager_secret_version.mydb_secret_version.secret_string).password
  skip_final_snapshot = true


  tags = {
    Name = "ExampleDB"
  }

  tags_all = {
    Environment = "Development"
  }
}

This is a secure way to manage secrets, as they are not exposed in plaintext anywhere in the provisioning process (note, though, that values read from data sources still end up in the Terraform state, so the state file itself must be protected). To further prove this point, let us also try to output these values using output variables, as below.

output "dbusername" {
  value = jsondecode(data.aws_secretsmanager_secret_version.mydb_secret_version.secret_string).username
  sensitive = true
}

output "dbpassword" {
  value = jsondecode(data.aws_secretsmanager_secret_version.mydb_secret_version.secret_string).password
  sensitive = true
}

Initialize and apply this Terraform configuration.

As we can see, although we are trying to print these secret values in the output variables, they are still masked.

.
.
.
aws_db_instance.my_db: Still creating... [3m50s elapsed]
aws_db_instance.my_db: Still creating... [4m0s elapsed]
aws_db_instance.my_db: Still creating... [4m10s elapsed]
aws_db_instance.my_db: Still creating... [4m20s elapsed]
aws_db_instance.my_db: Still creating... [4m30s elapsed]
aws_db_instance.my_db: Creation complete after 4m38s [id=db-3SOTY37ZFWDT4DXLBUZEK2SSGI]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Outputs:

dbpassword = <sensitive>
dbusername = <sensitive>

Example 4 source

Example 5: Creating dynamic configuration with data sources

Terraform data sources can retrieve critical details from various sources, such as cloud provider metadata, databases, or APIs, enabling the configuration in the current Terraform project to adapt to real-world changes.

By seamlessly integrating these data sources, Terraform deployments become more flexible and responsive. This approach streamlines the management of resources in a dynamic manner.

In our example, we know that the VPC and networking configurations are managed in a different Terraform project. Let us assume that our Terraform configuration is expected to provision and manage compute resources. We have been tasked to create an EC2 instance in each subnet of the VPC.

To do this, we first use a data source to get all the relevant subnets for the given VPC as shown below.

data "aws_subnets" "my_subnets" {
  filter {
    name = "vpc-id"
    values = ["vpc-0cb1aa34b173b1bb6"]
  }
}

Next, we use the aws_instance resource along with Terraform's for_each meta-argument to dynamically create EC2 instances in the appropriate subnets.

Based on the subnets retrieved by the data source above, a corresponding number of EC2 instances will be created, one placed in each subnet.

resource "aws_instance" "my_vm_2" {
  for_each = toset(data.aws_subnets.my_subnets.ids)
  ami = var.ami //Ubuntu AMI
  instance_type = var.instance_type

  subnet_id = each.key

  tags = {
    Name = var.name_tag,
  }
}

This demonstrates the dynamic nature of data sources: if more subnets are added to the VPC in the other project, a corresponding number of EC2 instances will be added to them in the next terraform apply run of our current project.

Initialize and apply this Terraform configuration, and verify the same by logging on to the AWS web console.

Check out also how to create an AWS EC2 instance using Terraform.

Optionally, we can also use the terraform_remote_state data source to query the state file of the VPC project.

The information about the subnets is exposed by a Terraform output variable that provides a list of subnet IDs (list(string)), as shown below.

output "subnets" {
  value = [aws_subnet.public_a.id, aws_subnet.private_a.id]
}

Modify the configuration for aws_instance to adopt this data source in the for_each meta-argument.

resource "aws_instance" "my_vm" {
  for_each = toset(data.terraform_remote_state.network.outputs.subnets)
  ami = var.ami //Ubuntu AMI
  instance_type = var.instance_type

  subnet_id = each.key

  tags = {
    Name = var.name_tag,
  }
}

This achieves the same result while keeping the code dynamic.

Example 5 source

Example 6: Managing resource dependencies with data sources

Data sources indirectly help manage resource dependencies. If the data being queried by a data source does not exist, then the resources that depend on it will not be created.

In the previous example, when we queried for subnets using the aws_subnets data source, the VPC ID provided in the filter belonged to a real VPC. The data source therefore returned all the subnets (two) belonging to this VPC, and two EC2 instances were created.

However, if we provide a VPC ID that does not exist, then these resources will simply not be created.

Modify the data source as shown below.

data "aws_subnets" "my_subnets" {
filter {
name = "vpc-id"
//values = ["vpc-0cb1aa34b173b1bb6"]
values = ["vpc-xxx"] // vpc id that does not exist
}
}

Run the terraform plan command against the rest of the Terraform configuration and observe the output.

The output should look like this:

terraform plan
data.aws_subnets.my_subnets: Reading...
data.aws_subnets.my_subnets: Read complete after 0s [id=eu-central-1]

No changes. Your infrastructure matches the configuration.

Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.

Data sources thus inherently manage dependencies, creating dynamic resources only when the data they depend on exists.

Example 6 source

Example 7: Validate inputs with data sources

This is an extension of the observation made in the previous example.

In the previous example, we used the for_each meta-argument, which works on the input received from the data source. If no data is returned (no valid filter matches), resource creation is simply skipped, without any error being thrown.

However, if we do not want this to be silently ignored and would instead like an error thrown during the plan stage, we can place a data source between the variable and the EC2 instance resource block. In the example below, we have defined a variable that provides an invalid AMI value to the aws_instance resource block.

variable "instance_ami" {
  description = "Ubuntu AMI in Frankfurt region."
  //default = "ami-065deacbcaac64cf2"
  default = "ami-xxx" // This does not exist
}


resource "aws_instance" "myec2" {
  ami = var.instance_ami
  instance_type = "t2.micro"
}

The Terraform plan output does not throw any error.

.
.
.
+ user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

However, it does throw an error when we proceed to apply this configuration since the given AMI does not exist.

In certain scenarios, it might be desirable for this configuration to throw an error in the planning phase itself for various reasons. This is achieved using data sources.

Adding the data source block below validates the existence of the given AMI value in the specified region in the provider.

data "aws_ami" "selected" {
most_recent = true


filter {
name = "image-id"
values = [var.instance_ami]
}
}

The terraform plan output now:

.
.
.
+ user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
│ Error: Your query returned no results. Please change your search criteria and try again.
│
│   with data.aws_ami.selected,
│   on main.tf line 9, in data "aws_ami" "selected":
│    9: data "aws_ami" "selected" {

Example 7 source

Example 8: Managing configuration drift

Configuration drift refers to the gradual misalignment between the intended state of an IT infrastructure and its actual state over time. It occurs in dynamically evolving environments, where manual changes, updates, or unauthorized modifications accumulate on various resources. 

Configuration drift can lead to inconsistencies, security vulnerabilities, and operational inefficiencies as the system deviates from its expected configuration.

This article about Terraform drift detection describes how to automatically detect configuration drifts and restore the infrastructure state to what was originally intended using Spacelift.

Data sources also help manage configuration drift in their own way. To understand how, we first have to understand what configuration drift means in this context. The two scenarios below are the relevant sources of configuration drift in this case.

Unplanned changes

If the infrastructure is changed for some reason (e.g., human error), the next terraform apply run restores it to the original state. This is general Terraform behavior and also applies to data sources.

Planned changes

If the infrastructure depends on the terraform_remote_state data source of another Terraform project, and the configuration in that source project is changed in a planned manner, then the current configuration should automatically adapt to those changes. This is an example of configuration drift caused by planned changes.

Consider the following configuration. Here, our configuration depends on:

  1. Terraform remote state to get the security group information
  2. AWS Subnets data source, which queries for subnets directly from AWS

It creates EC2 instances in all subnets returned from the data source query and associates all those instances with a security group returned from a remote state.

data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "tfremotestate-vpc"
    key = "state" // Path to state file within this bucket
    region = "eu-central-1" // Change this to the appropriate region
  }
}

data "aws_security_group" "my_sg" {
  id = data.terraform_remote_state.network.outputs.my_sg
}

data "aws_subnets" "my_subnets" {
  filter {
    name = "vpc-id"
    values = ["vpc-0cb1aa34b173b1bb6"]
  }
}

resource "aws_instance" "my_vm_2" {
  for_each = toset(data.aws_subnets.my_subnets.ids)
  ami = var.ami //Ubuntu AMI
  instance_type = var.instance_type

  subnet_id = each.key
  vpc_security_group_ids = [ data.aws_security_group.my_sg.id ]

  tags = {
    Name = var.name_tag,
  }

  depends_on = [ data.aws_subnets.my_subnets ]
}

To introduce unplanned changes in the infrastructure provisioned using the above configuration, let us log in to the AWS Management Console and manually change the security group of one of the instances from the one assigned by Terraform (sg-07ce474d21edd6e54) to a different one (sg-0791de242fd459699).

Now, if we run terraform plan, it highlights the change needed to revert this drift, as shown in the output below.

Run terraform apply to revert this configuration drift.

Terraform will perform the following actions:

  # aws_instance.my_vm_2["subnet-03a33174741fd9cb4"] will be updated in-place
  ~ resource "aws_instance" "my_vm_2" {
        id                                   = "i-0d2b9f95d2ef084ec"
        tags                                 = {
            "Name" = "My EC2 Instance"
        }
      ~ vpc_security_group_ids               = [
          - "sg-0791de242fd459699",
          + "sg-07ce474d21edd6e54",
        ]
        # (28 unchanged attributes hidden)

        # (8 unchanged blocks hidden)
    }

Plan: 0 to add, 1 to change, 0 to destroy.

As part of planned changes, let us assume that the team responsible for managing our VPC and networking configuration decides to add one more private subnet. Our requirement still stands: an EC2 instance needs to be provisioned in each subnet.

Since we are using the aws_subnets data source, which queries all the subnets that exist in a given VPC, running terraform plan automatically detects the creation of the additional subnet and produces a plan to add an EC2 instance, as shown below.

.
.
.
        }
      + tenancy                              = (known after apply)
      + user_data                            = (known after apply)
      + user_data_base64                     = (known after apply)
      + user_data_replace_on_change          = false
      + vpc_security_group_ids               = [
          + "sg-07ce474d21edd6e54",
        ]
    }

Plan: 1 to add, 0 to change, 0 to destroy.

This ensures that the requirement is implemented consistently, without drift.

Example 8 source

Refreshing data sources

By default, Terraform refreshes all data sources before creating a plan. You can also explicitly refresh all data sources by running terraform refresh (deprecated in recent Terraform versions in favor of terraform apply -refresh-only).

Occasionally you’ll have data sources that change very often and would like to keep your resources in sync with those changes. The easiest way to achieve this is to just run Terraform every few minutes or so, do a refresh, and apply all resulting changes.

You could also use something akin to Spacelift’s Drift Detection to automate this process and make sure it doesn’t interfere with your manual Terraform executions.

Key points

In this post, we have discussed data sources in Terraform in depth and explored various use cases for them. By integrating external information from various sources, such as cloud providers’ APIs, databases, and other systems, Terraform data sources enable configurations to be dynamically shaped according to real-time requirements. This capability keeps infrastructure in sync with the ever-changing landscape of modern IT environments, reducing manual intervention and minimizing configuration drift.

If you need help managing your Terraform infrastructure, building more complex workflows based on Terraform, or managing AWS credentials per run instead of using a static pair on your local machine, check out Spacelift. It supports policy as code, programmatic configuration, context sharing, drift detection, and many more great features right out of the box. One of the things we’re most proud of at Spacelift is the deep integration with everyone’s favorite version control system – GitHub. You can check it out for free by creating a trial account.

Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.
