
HashiCorp Configuration Language (HCL): Overview & Tutorial


The official HashiCorp Configuration Language (HCL) repository on GitHub defines HCL as follows:

“HCL is a toolkit for creating structured configuration languages that are both human- and machine-friendly, for use with command-line tools.”

This is not how you commonly think about HCL, whose primary use case is for writing configuration files for Terraform and OpenTofu.

In this blog post, we will discuss HCL, its main components, how to write HCL configuration files, use cases, and best practices. The focus will be on using HCL for Terraform and OpenTofu.

What we’ll cover:

  1. What is the HashiCorp Configuration Language (HCL)?
  2. HCL language basic structure and syntax
  3. Use cases for HCL
  4. How to create a simple HCL configuration file
  5. HCL best practices
  6. What is the difference between HCL and YAML or JSON?

What is the HashiCorp Configuration Language?

HashiCorp Configuration Language (HCL) is a domain-specific language used to define infrastructure as code, primarily in tools like Terraform and OpenTofu. It is designed to be human-readable while enabling structured data generation. HCL supports complex data types, interpolation, and module composition, making it both flexible and easy to understand.

HCL is declarative, meaning that you use HCL to declare the desired state of the infrastructure you are configuring. You use references between objects to express how the different pieces are related.

The order of your HCL code does not matter. The following two locals blocks are equivalent from HCL’s perspective (read more about what blocks are in the next section):

# locals block 1
locals {
  name     = "Spacelift"
  greeting = "Hello, ${local.name}!"
}

# locals block 2
locals {
  greeting = "Hello, ${local.name}!"
  name     = "Spacelift"
}

This contrasts with imperative languages, where the order of the code is important. Take the following two code snippets in Python, where the first snippet works as intended, but the other snippet generates an error:

# code snippet 1
name = "Spacelift"
print(f"Hello, {name}!")

# code snippet 2
print(f"Hello, {name}!")
name = "Spacelift"

HCL is not a full-fledged programming language, so its applications are limited. However, Terraform has been around for more than ten years (and OpenTofu for almost two years), so its usefulness is evident.

Given a Terraform configuration with many dependencies between different objects, Terraform will calculate the order in which each object must be created. You might read about this as the construction of a DAG (Directed Acyclic Graph) representing your resources.

In this phase, Terraform will discover any dependency loops or other issues in your configuration. Discussing DAGs further is beyond the scope of this blog post.
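To illustrate how references drive this ordering, here is a minimal sketch using the local provider: the second resource interpolates an attribute of the first, so Terraform knows to create local_file.name before local_file.greeting, no matter which block appears first in the file:

```hcl
resource "local_file" "name" {
  filename = "name.txt"
  content  = "Spacelift"
}

# The reference to local_file.name.content creates an implicit dependency,
# so this file is always created after the one above
resource "local_file" "greeting" {
  filename = "greeting.txt"
  content  = "Hello, ${local_file.name.content}!"
}
```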

HCL versions

HCL has two main versions: HCL1 and HCL2. HCL1 was used in early Terraform releases, while HCL2, introduced in Terraform 0.12, adds better type handling, expression support, and features like for_each and dynamic blocks. HCL2 is more robust and expressive and is now the standard for Terraform.

HCL language basic structure and syntax

HCL is a simple language. The code for configuring an AWS EKS cluster (managed Kubernetes on AWS) is not complex in itself; the hard part is figuring out how to configure the cluster securely so that it fits into your environment.

There are three main concepts of HCL:

  • Values and expressions
  • Arguments
  • Blocks

Other significant concepts include:

  • Functions
  • Loops and conditionals

According to the documentation in the official HCL repository on GitHub, HCL has two main concepts: attributes and blocks. However, the official Terraform documentation cites arguments and blocks as the two main elements.

1. Values and expressions

There are three basic types of values in HCL:

  • Strings (e.g. "Hello, Spacelift!")
  • Numbers (a single type for both integers and floating point numbers, e.g., 42 and 3.14)
  • Booleans (true or false)

There are also composite types:

  • Maps and objects (key/value structures where the keys are strings)
  • Lists and tuples (ordered collections of values, where lists hold values of the same type and tuples can mix types)
  • Sets (an unordered collection of unique values of a given type)
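To make the composite types concrete, here is a sketch of how each can be written as a literal (the names are purely illustrative):

```hcl
locals {
  # map: string keys mapped to values
  regions = {
    primary   = "eu-west-1"
    secondary = "us-east-1"
  }

  # list: ordered values of the same type
  ports = [80, 443, 8080]

  # tuple: ordered values of mixed types
  mixed = ["Spacelift", 42, true]

  # set: unordered unique values, created from a list with toset()
  environments = toset(["dev", "staging", "prod"])
}
```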

Values can be simple (using one of the basic or composite types above) or expressions.

An expression evaluates to a basic or composite type. Expressions can include function calls (see the section on functions below), string interpolation, loops, references to other objects, and more.

A few examples of expressions:

  • Mathematical expressions: 41 + 1
  • String interpolation with a variable reference: "Hello, ${var.name}"
  • A function expression: join("-", ["one", "two", "three"])
  • A data source attribute reference: data.aws_availability_zones.all.names

2. Arguments

An argument in HCL is the assignment of a value (or expression) to a named entity.

The most common arguments are resource arguments, which assign a value to an attribute of a resource.

A simple example is when you create a local_file resource using the local provider:

resource "local_file" "settings" {
  filename = "settings.txt"
  content  = "Hello, Spacelift!"
}

Two arguments are configured:

  • filename = "settings.txt"
  • content = "Hello, Spacelift!"

Argument values can be basic types, composite types, or any type of expression. Here are two variants of the previous example using other types of expressions for the filename argument value:

# Using string interpolation and variables to build the filename value
resource "local_file" "settings" {
  filename = "${var.filename}.${var.file_ending}"
  content  = "Hello, Spacelift!"
}

# Copying the name of a different local_file resource
resource "local_file" "settings" {
  filename = local_file.backup.filename
  content  = "Hello, Spacelift!"
}

3. Blocks

A generic block in HCL has the form:

block_type "label_1" "label_2" ... "label_n" {
  # block body
}

Each block type may have a varying number of labels. For Terraform and OpenTofu, the number of labels is 0, 1, or 2. The combination of block type and all labels must be unique for a given Terraform configuration.

Blocks can contain arguments or other blocks.

The common block types in Terraform/OpenTofu are:

  • A variable block is used to configure an input for the Terraform configuration. The variable block has a single label for the symbolic name of the variable used to reference it in other parts of your configuration with the syntax var.<symbolic name>.
variable "name" {
  type = string
}
  • A locals block is used to configure local values. The locals block has no labels. Local values are often used to transform or combine other values and to be able to reuse a value in multiple places easily. You can use any number of locals blocks in your configuration. Each locals block can contain any number of local values.
locals {
  step1 = ["one", "two", "three"]

  # convert to ["ONE", "TWO", "THREE"]
  step2 = [for n in local.step1: upper(n)]

  # convert to [3, 3, 5]
  step3 = [for n in local.step2: length(n)]
}
  • A resource block is used to configure a resource (e.g., an Amazon S3 bucket). The resource block has two labels: the type of the resource and the symbolic name used to reference it in other parts of your configuration.
resource "aws_s3_bucket" "backup" {
  bucket_prefix = "my-bucket"
}
  • A data block represents a data source. This is a common way to reference existing infrastructure created in some other way (e.g., through ClickOps). A data source can also represent something else, e.g., availability zone names in an AWS region. The data block has two labels, one for the type of data source and one for a symbolic name used to reference it in other parts of your configuration.
data "aws_availability_zones" "available" {
  state = "available"
}
  • A module block creates an instance of a module, which is simply a way to package a Terraform configuration into a reusable format. The module block has a single label: the symbolic name used to reference the module in other parts of your configuration.
module "bucket" {
  source = "./modules/s3"
}
  • An output block is used to output values from your Terraform configuration or your module. These values can then be read and used in other parts of your configuration or outside Terraform. The output block has a single label representing the output name.
output "bucket_name" {
  value = aws_s3_bucket.backup.bucket
}
  • The terraform block can be used to configure the version of Terraform your configuration requires, the provider versions this configuration uses, and where you will store the state file. This block has no labels.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~>5.61.0"
    }
  }
}
  • A provider block configures specific settings for an instance of a Terraform provider. These settings could include what AWS region you want to target with the AWS provider. The block has a label representing the provider name.
provider "aws" {
  region = "eu-west-1"
}

Each of the blocks discussed above has many other configuration possibilities. 

4. Functions

Terraform comes with a large number of built-in functions to simplify writing dynamic Terraform configurations.

In this section, we look at some common functions and how they can be used. 

The terraform console command illustrates how the functions work. Commands run in the Terraform console are prefixed with a ">".

  • String functions are used to manipulate string values. An example is the join() function, which concatenates a list of strings into a single string:
> join(":", ["arn", "aws", "iam", "", "123456789012", "role/my-role"])
"arn:aws:iam::123456789012:role/my-role"

The similar split() function splits a string into component parts:

> split(":", "arn:aws:iam::123456789012:role/my-role")
tolist([
  "arn",
  "aws",
  "iam",
  "",
  "123456789012",
  "role/my-role",
])

The string functions category is one of the largest function categories. You can find the full list in the documentation.

Read more: Terraform Strings: Interpolation, Multiline & Built-in Functions

  • IP networking functions are used to calculate appropriate CIDR ranges for subnets and IP addresses for hosts in a subnet. The cidrsubnet() function calculates the CIDR range for a subnet:
> cidrsubnet("10.100.128.0/17", 6, 10)
"10.100.148.0/23"

The cidrhost() function calculates the IP address for a host from a given CIDR block:

> cidrhost("10.100.148.0/23", 257)
"10.100.149.1"
  • File system functions are used to open and read the contents of files from the local filesystem. The file() function opens a single file and returns the content as a string:
> file("hcl.json")
<<EOT
{
    "shortName": "hcl",
    "longName": "HashiCorp Configuration Language"
}
EOT

If you want to read a JSON file and process it using Terraform, you can combine the use of the file() function with the jsondecode() function:

> jsondecode(file("hcl.json"))
{
  "longName" = "HashiCorp Configuration Language"
  "shortName" = "hcl"
}

The output from jsondecode() is a valid HCL object.

If you are doing complex data manipulations in your Terraform code, check if one of the many available functions will help simplify your code.

5. Loops and conditionals

HCL is not a traditional programming language. However, its declarative nature means it shares similarities with functional programming languages.

For conditional expressions, you can use the ternary expression:

locals {
  instance_size = var.environment == "dev" ? "t3.small" : "t3.large"
}

There is no dedicated loop block for repeating something multiple times. However, you have several options for performing loops.

You can use list comprehension or map comprehension expressions to do inline loops. Here is an example of a list comprehension expression where each value of a list of numbers is multiplied by two:

locals {
  values  = [1, 2, 3, 4, 5]

  # double each value using a list comprehension expression
  doubles = [for v in local.values: v * 2]
}
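The section above mentions map comprehensions as well; here is a minimal sketch that builds a map from the same list (the key format is illustrative):

```hcl
locals {
  values = [1, 2, 3, 4, 5]

  # map comprehension: builds {"v1" = 2, "v2" = 4, ...}
  doubles_by_name = { for v in local.values : "v${v}" => v * 2 }
}
```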

Another type of loop occurs when you create multiple copies of a resource. Two options for doing this are count and for_each.

Here is an example using the count meta-argument:

resource "local_file" "settings" {
  count = 10

  content  = "This is file ${count.index}"
  filename = "settings-${count.index}.txt"
}

This resource block will produce ten local file resources using a single resource block.

The for_each meta-argument takes a map or a set of strings as input and creates a resource for each map key or each set item. Here is an example of a map input:

locals {
   availability_zones = {
     "eu-west-1a" = {
       ami           = "ami-0df368112825f8d8f" 
       instance_type = "t3.small"
     },
     "eu-west-1b" = {
       ami           = "ami-0ce8c2b29fcc8a346"
       instance_type = "t3.micro"
     },
     "eu-west-1c" = {
       ami           = "ami-09de149defa704528"
       instance_type = "t3.medium"
     }
   }
 }

 resource "aws_instance" "servers" {
   for_each = local.availability_zones
   
   ami               = each.value.ami
   instance_type     = each.value.instance_type
   availability_zone = each.key
 }
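When referencing resources created with for_each elsewhere in your configuration, you index them by map key. A minimal sketch, assuming the aws_instance.servers resource above:

```hcl
output "first_server_id" {
  # address a single instance created by for_each via its map key
  value = aws_instance.servers["eu-west-1a"].id
}
```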

The elements discussed in this section allow you to build anything permissible in Terraform and OpenTofu.

Use cases for HCL

HCL was originally built as a toolkit for creating other configuration languages, but it has become more or less synonymous with Terraform. We will now discuss common use cases for HCL.

Terraform/OpenTofu

In Terraform and OpenTofu, it is common to use HCL to configure the infrastructure you want to provision and combine the tool with providers to make it happen.

HCL has become increasingly popular on GitHub. According to the 2024 State of the Octoverse report, HCL is the fourth fastest-growing language on GitHub after Python, TypeScript, and Go. This is thanks to the widespread use of Terraform and OpenTofu.

HashiCorp Vault/Consul/Nomad/Boundary

HCL appears in most other HashiCorp products.

In Vault, you write policies using HCL. Policies in Vault are attached to a token, and they determine what the token is allowed to do. Here is a simple example of a policy:

path "secrets/data/prod/db" {
  capabilities = ["read", "list"]
}

In Nomad, the Kubernetes alternative from HashiCorp, you use HCL to configure job specifications (similar to pod manifests in Kubernetes).

For many of the HashiCorp tools (e.g., Consul, Vault, Nomad, and Boundary), you use HCL in the configuration files for the server instances for each product.

How to create a simple HCL configuration file

Terraform and OpenTofu are HCL’s primary use cases, so we will use this section to create a simple configuration file in HCL.

For simplicity, we will work in a single file named main.tf in this example, but you can split your configuration into multiple files for readability and structure if you want. Terraform/OpenTofu automatically reads all files in a directory and treats them as a single configuration.

Note that the file extension for Terraform configuration files is .tf and not .hcl, even though we write HCL code. For OpenTofu, you will also see the .tofu file extension.

The goal of this simple configuration file is to provision an Amazon S3 bucket for object storage on AWS.

In the main.tf file, add a terraform block where you configure what Terraform providers your configuration will use. In this case, we will use the AWS provider:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.60.0"
    }
  }
}

We will create the S3 bucket in a specific AWS region. We do not want to hardcode the value of the region name in our configuration, so we create a variable to hold the value of the region and allow consumers of our Terraform configuration to specify their desired region name:

variable "aws_region" {
  type        = string
  description = "AWS region name"
  default     = "eu-west-1"
}

We gave the variable a default value. If the consumer of this configuration does not provide their own value for the AWS region, then the default value will be used.

Next, we add a provider block to configure the AWS provider:

provider "aws" {
  region = var.aws_region
}

In the provider configuration, we reference the variable we created earlier using the var.aws_region syntax.

We should also add details about how we authenticate the AWS provider to allow it to provision resources in our AWS account. However, for this simple example, we rely on the implicit credentials provided by the AWS CLI in our environment.

Next, we add a resource block to configure the S3 bucket:

resource "aws_s3_bucket" "spacelift" {
  bucket_prefix = "spacelift"
}

The resource type for an Amazon S3 bucket is aws_s3_bucket. Each resource type has its own schema that determines what arguments and nested blocks its configuration supports. In this example, we configure the S3 bucket using a single argument, bucket_prefix = "spacelift".

An S3 bucket requires a globally unique name. This is why the bucket_prefix argument is useful. It allows you to specify a friendly name prefix for your bucket, and then the AWS provider will generate and attach a suffix to the name to make it globally unique.

As the last piece of our Terraform configuration, we will add an output block where we reference the name of the bucket:

output "s3_bucket_name" {
  value = aws_s3_bucket.spacelift.bucket
}

When we apply this Terraform configuration, a bucket will be created, and the output value we configured will be displayed in the terminal or on the platform we use to run Terraform (e.g., Spacelift or HCP Terraform).

Putting it all together, we have created the following HCL configuration file for Terraform (it is also compatible with OpenTofu without any changes):

# terraform configuration (could be configured in providers.tf)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.60.0"
    }
  }
}

# input variables (could be configured in variables.tf)
variable "aws_region" {
  type        = string
  description = "AWS region name"
  default     = "eu-west-1"
}

# provider configurations (could also be added to providers.tf)
provider "aws" {
  region = var.aws_region
}

# aws resource configurations (usually placed in main.tf)
resource "aws_s3_bucket" "spacelift" {
  bucket_prefix = "spacelift"
}

# output values (could be configured in outputs.tf)
output "s3_bucket_name" {
  value = aws_s3_bucket.spacelift.bucket
}

HCL best practices

HCL for Terraform and OpenTofu is a relatively forgiving language. It doesn’t care too much about how you format or structure your code or what you name your different objects. However, keep the following best practices in mind when working with HCL.

1. Organize the code of your root module

In the simple configuration example in an earlier section, we used a single Terraform file and placed all the different blocks inside it. This is fine for a small project, but it is appropriate to split your configuration into logical pieces for larger Terraform configurations.

The root module of a Terraform configuration is the root of the project where you run terraform plan and terraform apply commands.

A typical Terraform configuration might include the following files:

  • terraform.tf, where you configure the Terraform version (a single terraform block)
  • providers.tf, where you configure required providers (in another terraform block) and configure the providers themselves using provider blocks
  • variables.tf for all input variables
  • outputs.tf for all output values
  • locals.tf for local values used in multiple other places
  • main.tf or <category>.tf for resources and data sources, with the <category> part representing networking, compute, storage, etc.
  • Local modules should be placed in a modules/ directory in the root with subdirectories for each module (e.g., modules/networking, modules/storage, modules/cluster).

All the .tf files in the root directory of your project are automatically treated as one configuration when you run terraform plan and terraform apply.

It is not important how you name the files in Terraform, but following a convention is prudent when you collaborate with others. Note that this is only a suggestion of common filenames. You might see other variants in the wild.

2. Use modules

Identify common pieces of code that repeat throughout your infrastructure. These can be made into Terraform modules.

Modules can be either local (placed in a subdirectory of your root module) or external in a public or private Terraform module registry. Organizations should use a private registry and avoid using local modules because these do not benefit the organization as a whole.

The benefit of modules is that you can configure a piece of infrastructure once and repeat it many times. This ensures the resources are consistently configured according to your organization’s standards.

Creating modules for common resources and sharing them within your organization in a private registry means your developers do not have to build the same thing multiple times in isolation.
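As a sketch of what consuming a shared module looks like, here is the widely used public terraform-aws-modules VPC module; a module from your private registry would follow the same pattern with a different source address (the inputs shown are illustrative):

```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  # inputs defined by the module's variables
  name = "my-vpc"
  cidr = "10.0.0.0/16"
}
```

Pinning a version constraint, as shown, ensures that consumers do not pick up breaking module changes unintentionally.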

3. Use a naming convention

When we discuss naming conventions for HCL, we primarily refer to the labels for variables, data sources, resources, and outputs.

These labels support lowercase and uppercase letters, numbers, dashes, and underscores. It is a convention to avoid using dashes in these labels, primarily to avoid strange-looking references that mix dashes and underscores like this:

aws_s3_bucket.my-european-bucket_1.arn

Stick to using lowercase letters, numbers, and underscores. Start names with a lowercase letter.

Here are examples of names following this practice:

variable "cloud_region" {
}

output "aws_ami_id" {
}

resource "aws_instance" "backend_message_processing" {
}

A good practice is to avoid repeating obvious details in the name of an object. For instance, naming a variable “instance_type_variable” is unnecessary because it is clearly a variable. Another redundant name example is this:

resource "aws_s3_bucket" "aws_s3_bucket_resource" {
  # … details omitted
}

4. Favor simple code over clever code

If you write HCL as if it were a normal programming language, you might end up with unreadable code.

A best practice for HCL is to avoid writing clever code just for the sake of being concise. Even if it takes more lines of code, readable code is preferable.

Let’s say you want to create two AWS subnets for your application. You might prepare for this application to grow, so you write a dynamic configuration like this:

data "aws_availability_zones" "all" {}

resource "aws_vpc" "default" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "all" {
  count = 2

  vpc_id            = aws_vpc.default.id
  availability_zone = data.aws_availability_zones.all.names[count.index]
  cidr_block        = cidrsubnet(aws_vpc.default.cidr_block, 8, count.index)
}

The count meta-argument for the aws_subnet resource is set to 2. The availability zone names and CIDR blocks are computed using count.index.

This looks innocent enough for now, but as time passes, it is no longer clear why there is a count = 2 for the subnet resource and what can be changed in this configuration without breaking things. It’s also unlikely that you will update this specific piece of code with something other than count = 2.

A more readable solution is to be more explicit:

resource "aws_vpc" "default" {
   cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "first" {
   vpc_id            = aws_vpc.default.id
   availability_zone = "eu-west-1a"
   cidr_block        = "10.0.0.0/24"
}

resource "aws_subnet" "second" {
   vpc_id            = aws_vpc.default.id
   availability_zone = "eu-west-1b"
   cidr_block        = "10.0.1.0/24"
}

This code is a bit longer, and it uses hardcoded values. However, it is clear which resources will be provisioned.

5. Use local values

Local values can be used to encapsulate a value that will be referenced multiple times in a Terraform configuration. This is like using variables in other programming languages.

Local values that are used in multiple parts of your Terraform configurations can be placed in a locals.tf file. Other local values should be defined near where they are used.

For instance, imagine our Terraform configuration has a variable for a list of string names as input. We want to process these names and then create local files using them. This can be done using local values, and the locals block should be placed before the local_file resource:

locals {
  names           = var.names
  uppercase_names = [for name in local.names: upper(name)]
}

resource "local_file" "names" {
  count    = length(local.uppercase_names)
  filename = "name-${count.index}.txt"
  content  = local.uppercase_names[count.index]
}

Remember, you can use any number of locals blocks, and each locals block can contain any number of local values. The only limit is that you can’t have local values with the same name, even if they are configured in different blocks.

6. Set default values for variables

You should provide sensible default values for most of your variables. This makes your Terraform configurations and modules easier to use.

What is a sensible default value?

This depends on what the variable is used for. For example, for an AWS EC2 instance, you might define a variable for the instance size. The default value should be a cheap instance type that is still suitable for most of the workloads you run, while allowing special workloads to configure other instance types.
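A sketch of such a variable, using a cheap default and an optional validation block to constrain the allowed values (the instance types here are illustrative):

```hcl
variable "instance_type" {
  type        = string
  description = "EC2 instance type for the application servers"
  default     = "t3.micro"

  validation {
    condition     = contains(["t3.micro", "t3.small", "t3.large"], var.instance_type)
    error_message = "instance_type must be one of: t3.micro, t3.small, t3.large."
  }
}
```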

 

This best practice is especially important for modules with many input variables. If your module takes 100 input variables and has almost no default values, it will be difficult to use. Ideally, you want people to be able to use your module without required inputs. The resulting infrastructure should be good for about 80% of potential use cases.

7. Provide useful output values

Your Terraform configurations and modules should output all relevant information a consumer would be interested in reading. If the consumer has to either parse the state file or go to the target platform (e.g., AWS) to find the relevant values, this shows that the value should have been an output.

As with variables, this is especially important for modules that others will consume. Consider your module’s different use cases and the types of output values these use cases require.

Outputs can be simple attributes from your resources, or they can be composite values and helpful messages for the consumer. For example, if your module or Terraform configuration sets up an AWS EC2 instance and an SSH key, you could output a convenience value like this:

output "ssh_command" {
  value = "ssh -i ${local_file.private_key.filename} ubuntu@${aws_instance.web.public_ip}"
}

8. Handle sensitive values securely

Handling sensitive values in your Terraform configurations has long been an issue.

Here are some strategies for handling secrets:

  • Store the state file in a secure remote state storage solution (e.g., S3).
  • Encrypt your state (e.g., using OpenTofu plan and state encryption).
  • Use ephemeral values and write-only attributes for resources (available in Terraform 1.11+).
  • Use a secrets management solution (e.g., HashiCorp Vault) to generate short-lived credentials that are not persisted in the state.
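In addition to these strategies, marking variables and outputs as sensitive keeps their values out of plan output, although this alone does not protect the state file (the hostname below is illustrative):

```hcl
variable "db_password" {
  type      = string
  sensitive = true
}

# sensitive outputs are redacted in the CLI but still stored in state
output "connection_string" {
  value     = "postgres://admin:${var.db_password}@db.example.com:5432/app"
  sensitive = true
}
```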

What is the difference between HCL and YAML or JSON?

HCL uses a custom syntax optimized for readability and nested structure, whereas YAML and JSON are general-purpose data serialization formats widely used for configuration. HCL supports expressions, variables, and complex data types more cleanly than YAML or JSON, especially in infrastructure-as-code tools like Terraform.

Unlike YAML, which relies heavily on indentation, or JSON, which requires strict punctuation, HCL avoids common parsing issues and supports native interpolation and logic. JSON is more verbose and machine-friendly, while HCL balances human readability with structural clarity.

How can Spacelift help you with OpenTofu and Terraform projects?

Spacelift is an infrastructure orchestration platform that supports both OpenTofu and Terraform, as well as other tools such as Pulumi, CloudFormation, Terragrunt, Ansible, and Kubernetes. Spacelift offers a variety of features that map easily to your OpenTofu and Terraform workflow.

Spacelift stacks enable you to plug in the VCS repository containing your Terraform and OpenTofu configuration files and do a GitOps workflow for them. 


At the stack level, you can add a variety of other components that will influence this GitOps workflow, such as:

  • Policies – Control the kind of resources engineers can create, their parameters, the number of approvals you need for runs, where to send notifications, and more.
  • Stack dependencies – Build dependencies between your configurations, and even share outputs between them. There are no constraints on creating dependencies between multiple tools or the number of dependencies you can have.
  • Cloud integrations – Dynamic credentials for major cloud providers (AWS, Microsoft Azure, Google Cloud).
  • Contexts – Shareable containers for your environment variables, mounted files, and lifecycle hooks.
  • Drift detection – Easily detect infrastructure drift and optionally remediate it.
  • Resources view – Enhanced observability of all resources deployed with your Spacelift account.

Spacelift can greatly enhance workflows for both OpenTofu and Terraform. To learn more about it, create an account today or book a demo with one of our engineers.

Spacelift was also the most cost-effective and flexible option, supporting multiple frameworks including OpenTofu and Pulumi — not just Terraform. This flexibility mitigated the risk of vendor lock-in and sprawl as Orbica’s tech stack expanded.

Spacelift customer case study


Key points

The HashiCorp Configuration Language (HCL) is technically a “toolkit for creating structured configuration languages,” but it has become synonymous with Terraform and OpenTofu configuration languages.

The HCL language for Terraform/OpenTofu is built from:

  • Values (strings, numbers, booleans, objects/maps, lists/tuples, and sets) and expressions that evaluate to a value (e.g., 41 + 1 or "Hello, ${var.name}!")
  • Arguments that assign a value or expression to a named entity (e.g., attribute1 = "value1")
  • Blocks (objects that make up Terraform configurations) — e.g., terraform, provider, resource, data, and module
  • Functions, which are helpful for manipulating strings, working with CIDR IP addresses, interacting with the local file system, and more
  • Conditionals for simple if/else logic
  • Loops through list comprehensions and map comprehensions, as well as the meta-arguments count and for_each for resource blocks

Terraform and OpenTofu are the primary use cases for HCL, but it is also used for different purposes in other HashiCorp products.

Best practices when working with HCL include writing readable and organized code for maintainability and using variables and outputs smartly.

Automate Terraform deployments with Spacelift

Automate your infrastructure provisioning and build more complex workflows based on Terraform using policy as code, programmatic configuration, context sharing, drift detection, resource visualization, and many more features.

Learn more