The Terraform local provider lets you work with files on the same machine where Terraform runs. You can generate files during an apply, read existing files into your configuration and handle sensitive content in a safer way. Terraform then tracks these files in state just like other resources, which keeps them predictable and reproducible.
In this article, we’ll explain how to use this provider through a few practical examples.
What is the Terraform local provider?
The Terraform local provider is a built-in provider that lets you work with data and files on your local machine. It is useful when you want Terraform to generate text files, render templates, or pass information between modules without depending on cloud resources.
The provider does not manage cloud infrastructure. Instead, it creates local artifacts, such as files (creating parent directories as needed), that support your overall workflow.
provider "local" {}
resource "local_file" "example" {
  filename = "hello.txt"
  content  = "Hello world from Terraform"
}

The filename argument specifies the path on your local system where Terraform should create the file. It can be a relative or an absolute path and must include the file name. The content argument defines the text that Terraform writes into the file.
Each time Terraform runs, it detects changes to the content and updates the file accordingly. When used together, these arguments allow Terraform to reliably track changes and keep the file aligned with your desired configuration.
The standard local_file resource and data source do not treat file contents as sensitive, so the content can appear in plan output and logs. If you need to create or read files containing sensitive data, use the local_sensitive_file resource or data source, which ensure the content is handled as sensitive throughout the plan and apply lifecycle.
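As a minimal sketch, reading an existing secret file with the sensitive variant might look like this (the secret.txt path is hypothetical):

data "local_sensitive_file" "secret" {
  # Hypothetical path; the content attribute is marked sensitive,
  # so it is redacted in plan and apply output.
  filename = "${path.module}/secret.txt"
}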
For static files that already exist in your repository, Terraform functions such as file() or templatefile() are often simpler. As a result, the local provider is most useful when the file path or contents depend on other Terraform values.
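For instance, rendering a static template that already lives in the repo needs no provider resource at all; a minimal sketch using templatefile (the template path and variable values are hypothetical):

locals {
  # Render a template checked into the repository with plain function calls.
  rendered_config = templatefile("${path.module}/app.conf.tftpl", {
    environment = "dev"
    region      = "us-east-1"
  })
}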
Example 1: Creating a file with the local provider
In this example, we’ll create a configuration file dynamically during terraform apply.
The file content changes whenever your variables change, ensuring the generated config is always aligned with the Terraform state.
terraform {
  required_providers {
    local = {
      source  = "hashicorp/local"
      version = "~> 2.0"
    }
  }
}
resource "local_file" "app_config" {
  filename = "${path.module}/app.conf"
  content  = "environment = ${var.env}\nversion = ${var.app_version}\nregion = ${var.region}"
}

Terraform writes the file app.conf with values pulled from your variables. If you update the environment or application version, Terraform recreates or updates the file, which makes it a predictable part of your deployment workflow.
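The example assumes the module defines env, app_version, and region as input variables (version itself is a reserved name in Terraform, so it cannot be used as a variable name). A minimal, illustrative variables file might look like this:

variable "env" {
  type    = string
  default = "dev"
}

variable "app_version" {
  type    = string
  default = "1.0.0"
}

variable "region" {
  type    = string
  default = "us-east-1"
}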
Example 2: Reading a file with the local provider
In many setups, teams already maintain policy documents as JSON files in the repo. Instead of copying that JSON into Terraform, you can read it directly using the local provider.
data "local_file" "policy_json" {
  filename = "${path.module}/policy.json"
}
resource "aws_iam_policy" "example" {
  name   = "example_policy"
  policy = data.local_file.policy_json.content
}

Here, Terraform loads policy.json before creating the IAM policy. When someone edits the JSON file, the next plan automatically shows any changes to the resulting IAM policy. This keeps your policy document in one place and avoids drift between the file on disk and the resource in Terraform.
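If you need individual fields from the document rather than the raw string, you can parse the content with jsondecode. A small sketch, assuming the file contains a standard IAM policy document with a top-level Statement list:

locals {
  # Parse the raw JSON string from the data source into a Terraform object.
  policy_doc = jsondecode(data.local_file.policy_json.content)
}

output "policy_statement_count" {
  # Assumes the document has a "Statement" list, as IAM policy documents do.
  value = length(local.policy_doc.Statement)
}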
Example 3: Writing secrets to a local file as sensitive content
Sometimes an external tool needs a credential file that is not directly supported by Terraform providers. You can still generate the file with Terraform but keep the content hidden from normal output by using local_sensitive_file.
variable "api_key" {
  type      = string
  sensitive = true
}

variable "token" {
  type      = string
  sensitive = true
}
resource "local_sensitive_file" "creds" {
  filename = "${path.module}/creds.txt"
  content  = <<EOF
api_key=${var.api_key}
token=${var.token}
EOF
}

Terraform creates creds.txt with your secret values yet treats the content as sensitive. That means the file body is not printed in plans or logs. However, the secret values are still stored in the Terraform state file, and marking them as sensitive only hides them from normal CLI and UI output.
The file is still managed by Terraform, which lets you recreate it reliably on new machines without exposing the raw secrets in standard output. On a different machine that does not have creds.txt yet, Terraform will see the file as missing and plan to create it again, which can add some harmless noise to your plans.
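Because the secrets end up in plain text on disk, it can also help to restrict the file’s permissions. The resource supports a file_permission argument; a variant of the example above might look like this (the default permission is broader):

resource "local_sensitive_file" "creds" {
  filename        = "${path.module}/creds.txt"
  content         = "api_key=${var.api_key}\ntoken=${var.token}"
  # Only the owner can read or write the generated file.
  file_permission = "0600"
}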
Example 4: Using a local shell script as a startup script
It is common to have a longer shell script that should run when a virtual machine starts. Instead of stuffing the script directly into the Terraform resource, you can keep it as a .sh file in your module and either load it with the local provider or use a built-in function, depending on your needs.
For a static script committed in your repo, a simple option is to skip the local provider and use Terraform’s file() function:
resource "google_compute_instance" "vm" {
  name         = "example-vm"
  machine_type = "e2-medium"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-12"
    }
  }

  network_interface {
    network = "default"
  }

  metadata = {
    startup-script = file("${path.module}/startup.sh")
  }
}

If the script content is generated by Terraform or another resource, then you can use the local provider. For example, you could generate a key and write it to a file that a script will later read:
resource "tls_private_key" "vpn" {
  algorithm = "RSA"
  rsa_bits  = 2048
}

resource "local_file" "vpn_private_key" {
  filename = "${path.module}/id_rsa"
  content  = tls_private_key.vpn.private_key_pem
}

Here, the local provider turns Terraform-generated data into an actual file on disk, which is a clear use case for local resources.
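Because the PEM is itself a secret, the local_sensitive_file resource from Example 3 is a natural fit here as well. A variant of the file resource above, plus an output of the matching public key (names are illustrative):

resource "local_sensitive_file" "vpn_private_key" {
  filename        = "${path.module}/id_rsa"
  content         = tls_private_key.vpn.private_key_pem
  # Private keys should only be readable by the owner.
  file_permission = "0600"
}

output "vpn_public_key_openssh" {
  # The matching public key is not secret and can be added to VM metadata.
  value = tls_private_key.vpn.public_key_openssh
}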
Key points
The Terraform local provider is a helper that connects your Terraform code with local files. It can write new files, handle sensitive content, and read existing scripts or JSON policies into resources. Used well, it removes ad hoc shell scripts, keeps configuration in sync with Terraform state, and gives you a cleaner and more reliable way to manage file-based setup steps.
Because these files only exist on the machine that runs terraform apply, it is better to use the local provider sparingly and prefer remote resources when you need the same configuration to work cleanly across many machines or CI runners.
Terraform is really powerful, but to achieve an end-to-end secure GitOps approach, you need to use a product that can run your Terraform workflows. Spacelift takes managing Terraform to the next level by giving you access to a powerful CI/CD workflow and unlocking features such as:
- Policies (based on Open Policy Agent)
- Multi-IaC workflows
- Self-service infrastructure
- Integrations with any third-party tools
If you want to learn more about Spacelift, create a free account today or book a demo with one of our engineers.
Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.