Terraform has had ways to validate variables for a long time. Variable validation works well enough, but it falls short of truly validating your deployments for one simple reason: it only works on variables!
Since variables are limited to values you can define before deployment, this leaves little flexibility for checks like verifying that an AMI returned by a data source is the correct one. Anything you don’t know before the deployment has essentially no way to be validated without external tools…until now!
Enter “preconditions” and “postconditions” as of Terraform v1.2.0!
Learn how to upgrade Terraform to the latest version with our tfenv tutorial.
Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open source. OpenTofu is an open-source alternative to Terraform that expands on Terraform’s existing concepts and offerings, having been forked from Terraform version 1.5.6.
Terraform preconditions are custom conditions that are checked before the object they are associated with is evaluated. Precondition checks can be set on resources, data sources, and outputs, so custom error messages can be shown before the values are used in an apply. They are useful when you want to catch invalid data, such as a syntactically incorrect IP address, or when you want to ensure that a certificate is in the correct state.
Terraform postconditions are similar to preconditions, but they are checked after the object they are associated with is evaluated. They can also use the self object to refer to the instance’s own attributes. Postconditions work on resources and data sources, and ensure custom error messages can be shown after a resource is applied or a data source is read. A postcondition failure prevents changes to other resources that depend on the failing resource – if a VPC is created without DNS support enabled, a postcondition can ensure its subnets are not going to be created.
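That VPC scenario can be sketched roughly like this (a minimal, hypothetical example; the resource names and CIDR ranges are illustrative, not part of the tutorial’s code):

```hcl
resource "aws_vpc" "example" {
  cidr_block         = "10.0.0.0/16"
  enable_dns_support = true

  lifecycle {
    # Guarantee for dependent resources: fail the apply if DNS support is off
    postcondition {
      condition     = self.enable_dns_support
      error_message = "VPC must have DNS support enabled."
    }
  }
}

resource "aws_subnet" "example" {
  # This subnet depends on the VPC, so it will not be created
  # if the VPC's postcondition fails.
  vpc_id     = aws_vpc.example.id
  cidr_block = "10.0.1.0/24"
}
```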
Both preconditions and postconditions are defined using a precondition or postcondition block. The main differences between them are when they are evaluated and the objects in which they can exist. Both help with validating information from different Terraform components.
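In skeleton form, both blocks live inside a lifecycle block on the object being checked; the resource type and attribute names below are placeholders, not a real provider:

```hcl
resource "example_resource" "example" {
  # ... regular arguments ...

  lifecycle {
    # Checked before the resource is evaluated
    precondition {
      condition     = var.some_input != ""
      error_message = "some_input must not be empty."
    }

    # Checked after the resource is evaluated; may use self
    postcondition {
      condition     = self.some_attribute != null
      error_message = "some_attribute was not set."
    }
  }
}
```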
Now, I could just re-read the documentation to you here in this post, but I’d prefer to actually show you some examples of how this works! If you’d like to follow along, ensure your terminal has appropriate permissions to create a VPC in AWS and has Terraform installed.
First, let’s add some code to our main.tf file:
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-west-1"
}

data "aws_region" "current" {}

variable "cidr_block" {
  type    = string
  default = "10.0.0.0/16"
}

resource "aws_vpc" "main" {
  cidr_block = var.cidr_block

  tags = {
    Name = "main"
  }
}

output "owner_id" {
  value = join("", slice(split("", aws_vpc.main.owner_id), 8, 12))
}
This code will:
- Configure your AWS provider to use the us-west-1 region,
- Utilize a data source to access the name of the current region,
- Configure a cidr_block variable to a default value of 10.0.0.0/16,
- Deploy a standard VPC,
- Output the last four digits of your AWS Owner ID as obtained from the owner_id attribute of your VPC.
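As an aside, the split/slice/join chain in that output is just one way to grab the last four characters; Terraform’s built-in substr function can express the same thing more directly (an equivalent alternative, assuming the usual 12-digit AWS account ID):

```hcl
output "owner_id" {
  # substr(string, offset, length): characters at indices 8 through 11
  value = substr(aws_vpc.main.owner_id, 8, 4)
}
```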
Do not apply yet. We’re going to be doing some plans and adding more code before we apply!
Run a terraform init to initialize your Terraform directory. Once you’ve done that, go ahead and run a terraform plan to ensure everything works and you don’t have any typos that the init may have missed.
Let’s now add a postcondition to verify the region is either “us-west-1” or “us-west-2”. You could obviously use a validation block within the variable, but that can be a little tough to follow, especially when using modules. You may want one variable file and enforce the variables in your scripts. Using custom conditions is how we do that!
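For comparison, here is roughly what the variable-validation approach would look like if the region were passed in as a variable (a hypothetical region variable; our code reads the region from the provider configuration instead):

```hcl
variable "region" {
  type    = string
  default = "us-west-1"

  validation {
    condition     = contains(["us-west-1", "us-west-2"], var.region)
    error_message = "Region needs to be us-west-1 or us-west-2!"
  }
}
```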
Custom conditions are essentially four parts (if you count the lifecycle block):
- a lifecycle block,
- a precondition or postcondition block,
- a condition,
- an error_message.
Add the following inside your data "aws_region" "current" block:
lifecycle {
  postcondition {
    condition     = contains(["us-west-1", "us-west-2"], self.name)
    error_message = "Region needs to be us-west-1 or us-west-2!"
  }
}
As you can see, we’ve included those four parts in the code snippet. You can add multiple conditions to each lifecycle block, which we’ll do very soon.
For this current postcondition, you may notice that we used self.name to refer to the name of the region. With postconditions, you can utilize self to access the object’s own attributes.
However, you cannot do this with preconditions. A precondition cannot access self because the self attributes are not defined until after a terraform plan, and a precondition validates before the plan is finished. There is much more information on this within the docs.
Let’s verify that this new condition works. Go ahead and run a terraform plan first, and let’s take a look at the truncated output:
data.aws_region.current: Reading...
data.aws_region.current: Read complete after 0s [id=us-west-1]
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_vpc.main will be created
+ resource "aws_vpc" "main" {
...
}
Plan: 1 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ owner_id = (known after apply)
As you can see, everything works perfectly! Now, let’s change the region to “us-east-1”, and see what happens:
provider "aws" {
region = "us-east-1"
}
And run a terraform plan:
data.aws_region.current: Reading...
data.aws_region.current: Read complete after 0s [id=us-east-1]
╷
│ Error: Resource postcondition failed
│
│ on main.tf line 16, in data "aws_region" "current":
│ 16: condition = contains(["us-west-1", "us-west-2"], self.name)
│ ├────────────────
│ │ self.name is "us-east-1"
│
│ Region needs to be us-west-1 or us-west-2!
Aha! Our condition worked and prevented us from deploying to the incorrect region! Go ahead and change the region back to “us-west-1”.
For easier Terraform management, you can also check out Spacelift – a sophisticated and compliant infrastructure delivery platform. Spacelift can help you with building more complex workflows based on Terraform and has the flexibility to integrate with any third-party tool you want. You can test drive it for free by going here and creating a trial account.
Now, let’s address the VPC resource and see if we can prevent it from deploying with the wrong CIDR Block.
Add the following to your aws_vpc resource:
lifecycle {
  precondition {
    condition     = cidrnetmask(var.cidr_block) == "255.255.0.0"
    error_message = "Expecting a /16 for this VPC!"
  }
}
Run a terraform plan to ensure there are no typos. Everything should come out fine.
Now, as you noticed, we used a precondition here. Since we are referring to a variable that is already defined, this works perfectly fine. If you were to try to use self.cidr_block instead of var.cidr_block, it would not work.
Remember, self can’t be accessed in a precondition, so we access the variable instead of the VPC’s cidr_block attribute. Preconditions are generally recommended for “assumptions”: the individual resource requires the condition to hold in order for it to work. You would typically use a postcondition for a “guarantee”: other resources rely on that condition in order to work properly. You can find more about assumptions and guarantees in the documentation.
Now that we’ve configured this precondition, let’s really test it! Change the default of var.cidr_block to 10.0.0.0/24 like so:
variable "cidr_block" {
  type    = string
  default = "10.0.0.0/24"
}
After you have made the change, run a terraform plan:
data.aws_region.current: Reading...
data.aws_region.current: Read complete after 0s [id=us-west-1]
╷
│ Error: Resource precondition failed
│
│ on main.tf line 34, in resource "aws_vpc" "main":
│ 34: condition = cidrnetmask(var.cidr_block) == "255.255.0.0"
│ ├────────────────
│ │ var.cidr_block is "10.0.0.0/24"
│
│ Expecting a /16 for this VPC!
Bam! Our precondition failed just as expected! Go ahead and change that CIDR back to 10.0.0.0/16 and let’s continue.
The final item we want to validate goes hand-in-hand with our region validation. We want to validate now that the VPC is deployed to the right account. Obviously, deploying to the wrong account can be disastrous, and I’ve absolutely seen it happen!
As you can see in the boilerplate code you’re using, there is an output that displays the last four digits of your account number. Now, if you run a terraform plan, you can see that the owner_id attribute is listed as (known after apply). Because of this, a precondition is not going to work! To successfully validate the owner_id attribute, we actually have to apply, which means we’ll be using a postcondition. Let’s first run a terraform apply -auto-approve and take a look at the last four digits of our Owner ID:
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Outputs:
owner_id = "2295"
As you can see, the last four digits of my Owner ID are “2295”. Since the owner_id attribute is a string, make sure you include quotes any time you’re referencing it. Let’s create a postcondition to ensure the last four digits of the owner_id attribute are “2295”.
Add this (with the correct last four digits of your Owner ID) within the lifecycle block, directly under the closing brace of your precondition block:
postcondition {
  condition     = join("", slice(split("", self.owner_id), 8, 12)) == "2295"
  error_message = "You deployed to the wrong account!"
}
Now that you have done that, run another terraform apply -auto-approve to ensure it works.
Now, let’s change the last four digits to something that doesn’t match. Obviously, you could just deploy to another account, but this might be a little easier. I’ll just change mine to “2296”:
postcondition {
  condition     = join("", slice(split("", self.owner_id), 8, 12)) == "2296"
  error_message = "You deployed to the wrong account!"
}
Once you have done that, go ahead and run a terraform plan:
│ Error: Resource postcondition failed
│
│ on main.tf line 38, in resource "aws_vpc" "main":
│ 38: condition = join("", slice(split("", self.owner_id), 8, 12)) == "2296"
│ ├────────────────
│ │ self.owner_id is "*******2295"
│
│ You deployed to the wrong account!
Wait! Derek, you told us this wasn’t going to validate using a plan! Well, there’s a small catch here. Since the apply has already been run, the state file already has the proper owner_id evaluated, so the validation can actually run here. Go ahead and:
- Change the Owner ID back to its correct value,
- Run a terraform destroy,
- Change the Owner ID to its incorrect value,
- Re-run the terraform plan,
and let’s see what happens:
Plan: 1 to add, 0 to change, 0 to destroy.
There we go! Now it doesn’t work. Understanding these small nuances is very important to ensure you are validating properly!
Now, go ahead and run a terraform apply -auto-approve and let’s watch it break:
│ Error: Resource postcondition failed
│
│ on main.tf line 38, in resource "aws_vpc" "main":
│ 38: condition = join("", slice(split("", self.owner_id), 8, 12)) == "2296"
│ ├────────────────
│ │ self.owner_id is "034858642295"
│
│ You deployed to the wrong account!
There we go!
Alright, go ahead and fix your Owner ID, run a terraform destroy -auto-approve, and enjoy the fact that you now understand Terraform’s custom conditions!
Manage Terraform Better and Faster
If you are struggling with Terraform automation and management, check out Spacelift. It helps you manage Terraform state, build more complex workflows, and adds several must-have capabilities for end-to-end infrastructure management.