The Terraform remote-exec provisioner can be a useful tool for quick automation and lightweight configuration tasks. It lets you execute commands directly on your newly created infrastructure, bridging the gap between provisioning and configuration.
In this article, we’ll look at how this provisioner works and walk through practical usage examples.
What is the Terraform remote-exec provisioner?
The Terraform remote-exec provisioner is a built-in feature that allows you to run commands on a remote resource after it has been created. It’s commonly used to perform quick post-deployment configuration tasks, such as installing software, updating packages, or initializing an application service.
Unlike tools that manage ongoing configuration, remote-exec focuses on one-time setup: it connects to the new instance via SSH (for Linux) or WinRM (for Windows) and executes either inline shell commands or predefined scripts.
remote-exec is great for simple bootstrapping tasks that must happen immediately after resource creation. However, it’s not ideal for long-term configuration management because provisioners aren’t idempotent. For complex setups, tools like Ansible, Chef, Puppet, or cloud-init are often better fits.
Read more: What are Terraform Provisioners?
How does the Terraform remote-exec provisioner work?
When you use a remote-exec provisioner inside a resource block (for example, an aws_instance), Terraform waits until that resource is fully created and reachable, then establishes a connection to it.
Once connected, it executes the commands you specify, either as an inline list of shell commands or by calling external scripts. If a creation-time provisioner fails, Terraform marks the resource as tainted, and a subsequent apply will replace it. You can optionally set on_failure = continue to proceed despite errors.
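For example, a best-effort step can be allowed to fail without tainting the resource (a minimal sketch; the resource name and command are placeholders):

```hcl
resource "aws_instance" "example" {
  # (ami, instance_type, etc. omitted)

  provisioner "remote-exec" {
    inline = ["echo 'best-effort step'"]

    # Continue the apply even if this provisioner fails,
    # instead of tainting the instance for replacement.
    on_failure = continue
  }
}
```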
Key components of Terraform remote-exec provisioner include:
- connection block: Defines how Terraform connects to the remote host: the protocol (ssh or winrm), user, host address, authentication method (private key, password, etc.), and optional timeout. You can place connection at the resource level to apply to all provisioners, or inside a specific provisioner to override it.
- inline: A list of shell commands to run directly.
- scripts: A list of local script files to upload and execute on the remote host.
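As a quick illustration of connection placement, a block defined at the resource level is inherited by every provisioner in that resource (a minimal sketch; names and paths are placeholders):

```hcl
resource "aws_instance" "example" {
  # (ami, instance_type, etc. omitted)

  # A resource-level connection applies to every provisioner below.
  connection {
    type        = "ssh"
    host        = self.public_ip
    user        = "ubuntu"
    private_key = file("~/.ssh/id_rsa")
  }

  provisioner "remote-exec" {
    inline = ["echo 'uses the shared connection'"]
  }
}
```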
Example 1: Installing NGINX on a VM using inline commands
A very common “hello world” for remote-exec is to connect to a fresh Linux VM over SSH and run a couple of shell commands to install and start a web server.
In the configuration below, the provisioner connects as the ubuntu user using your local private key, waits for SSH to come up, runs apt-get update, installs NGINX, writes a tiny index page, and ensures the service is enabled and started.
The connection block lives inside the provisioner, so Terraform knows exactly how to reach this instance.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

resource "aws_key_pair" "demo" {
  key_name   = "demo-key"
  public_key = file("~/.ssh/id_rsa.pub")
}

resource "aws_security_group" "ssh_http" {
  name        = "allow-ssh-http"
  description = "Allow SSH and HTTP"
  vpc_id      = data.aws_vpc.default.id

  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

data "aws_vpc" "default" {
  default = true
}

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_instance" "web" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = "t3.micro"
  key_name               = aws_key_pair.demo.key_name
  vpc_security_group_ids = [aws_security_group.ssh_http.id]

  provisioner "remote-exec" {
    inline = [
      "sudo apt-get update -y",
      "sudo apt-get install -y nginx",
      "echo 'hello from terraform remote-exec' | sudo tee /var/www/html/index.html",
      "sudo systemctl enable nginx",
      "sudo systemctl restart nginx"
    ]

    connection {
      type        = "ssh"
      host        = self.public_ip
      user        = "ubuntu"
      private_key = file("~/.ssh/id_rsa")
      timeout     = "5m"
    }
  }

  tags = { Name = "remote-exec-nginx" }
}

output "web_public_ip" {
  value = aws_instance.web.public_ip
}

Note: In real projects, restrict inbound SSH to a trusted CIDR or use a private path through a bastion. Avoid storing private keys in state by using an agent or a secured variable.
The important idea is that remote-exec happens after the instance is created and reachable. If the commands succeed, Terraform records the provisioner as complete.
If SSH never becomes available or a command exits non-zero, the apply fails, so you can correct the issue.
For repeatability, keep the inline steps short and idempotent. For example, writing the same index file multiple times is harmless, while reinstalling large packages repeatedly can be slow and fragile. Each inline command runs on its own, and a non-zero exit stops the provisioner. If you move the logic into a single script, add set -e at the top so the script fails fast too.
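A minimal sketch of such a script (paths are illustrative): every step is safe to repeat, and set -e stops the script on the first failing command, mirroring how inline commands behave:

```shell
#!/usr/bin/env bash
# Fail fast: any non-zero exit stops the script, just as a
# failing inline command stops the provisioner.
set -euo pipefail

SITE_DIR="${SITE_DIR:-/tmp/demo-site}"

# Both steps are idempotent: 'mkdir -p' succeeds if the directory
# already exists, and rewriting the same file content is harmless.
mkdir -p "$SITE_DIR"
echo 'hello from terraform remote-exec' > "$SITE_DIR/index.html"

echo "bootstrap done"
```

Running this twice produces the same end state, which is exactly the property you want before wiring a script into a provisioner.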
In production, you would typically prefer cloud-init or an image bake for most configuration, and reserve remote-exec for small, last-mile adjustments. Use a remote state backend that provides encryption at rest and locking.
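For comparison, the NGINX setup from Example 1 can be done without any provisioner by handing cloud-init a small user data document (a sketch; the resource wiring is abbreviated):

```hcl
resource "aws_instance" "web_cloudinit" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  # cloud-init runs this on first boot; Terraform needs no SSH
  # access, and no private key is involved in provisioning.
  user_data = <<-EOT
    #cloud-config
    packages:
      - nginx
    runcmd:
      - systemctl enable --now nginx
  EOT
}
```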
Example 2: Running an app bootstrap script
In this example, imagine you already have a running VM or a managed instance group and you simply need to push a one-time bootstrap that pulls your app, writes a .env, seeds a database, and starts a systemd unit.
Rather than tying the provisioner to a specific compute resource, you can use a terraform_data resource with a remote-exec provisioner. The resource does not create infrastructure; it gives you a lifecycle hook that can depend on anything and reruns when its triggers change.
Here, we checksum the local script and render an .env file from variables, upload both, and run the script on the host. remote-exec does not support passing environment variables directly, so we write them to a file instead.
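The .env.tmpl file itself is not shown in the article. Because var.app_env is passed straight to templatefile, its keys become the template's variables, so a minimal template might look like this (the key names are assumptions about what var.app_env contains):

```
# .env.tmpl -- each ${...} must match a key in var.app_env
DB_HOST=${DB_HOST}
DB_USER=${DB_USER}
APP_PORT=${APP_PORT}
```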
variable "app_host" { type = string }        # the VM IP reachable via SSH
variable "ssh_private_key" { type = string } # path to the private key file
variable "app_git" { type = string }         # ex: "https://github.com/example/myapp.git"
variable "app_env" { type = map(string) }

locals {
  bootstrap_sha = filesha256("${path.module}/bootstrap.sh")
  rendered_env  = templatefile("${path.module}/.env.tmpl", var.app_env)
}

resource "terraform_data" "bootstrap_app" {
  triggers_replace = {
    host            = var.app_host
    script_sha256   = local.bootstrap_sha
    env_fingerprint = sha256(local.rendered_env)
  }

  provisioner "file" {
    source      = "${path.module}/bootstrap.sh"
    destination = "/tmp/bootstrap.sh"

    connection {
      type        = "ssh"
      host        = var.app_host
      user        = "ubuntu"
      private_key = file(var.ssh_private_key)
    }
  }

  provisioner "file" {
    content     = local.rendered_env
    destination = "/tmp/app.env"

    connection {
      type        = "ssh"
      host        = var.app_host
      user        = "ubuntu"
      private_key = file(var.ssh_private_key)
    }
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/bootstrap.sh",
      "sudo APP_ENV_FILE=/tmp/app.env APP_GIT='${var.app_git}' /tmp/bootstrap.sh"
    ]

    connection {
      type        = "ssh"
      host        = var.app_host
      user        = "ubuntu"
      private_key = file(var.ssh_private_key)
    }
  }
}

And here is an example bootstrap.sh the provisioner uploads and executes.
It is written to be idempotent, so repeated runs won’t harm a previously configured machine. It pulls or updates the app repository, writes a .env from the provided environment, reloads systemd if the unit has changed, and ensures the service is active. The script reads variables from the file that was uploaded.
#!/usr/bin/env bash
set -euo pipefail

APP_DIR="/opt/myapp"
SERVICE_NAME="myapp.service"
ENV_FILE="${APP_ENV_FILE:-/opt/myapp/.env}"

# Install git only if it is missing (idempotent).
if ! command -v git >/dev/null; then
  sudo apt-get update -y
  sudo apt-get install -y git
fi

sudo mkdir -p "$APP_DIR"
sudo chown "$USER":"$USER" "$APP_DIR"

# Clone on the first run; hard-reset to origin/main on later runs.
if [ -d "$APP_DIR/.git" ]; then
  git -C "$APP_DIR" fetch --all
  git -C "$APP_DIR" reset --hard origin/main
else
  git clone "${APP_GIT}" "$APP_DIR"
fi

# Move the uploaded environment file into place, if present.
sudo mkdir -p "$(dirname "$ENV_FILE")"
if [ -f "/tmp/app.env" ]; then
  sudo mv /tmp/app.env "$ENV_FILE"
fi

if ! command -v systemctl >/dev/null; then
  echo "systemd not found"
  exit 1
fi

# Write the unit file only once; later runs skip straight to restart.
SERVICE_PATH="/etc/systemd/system/${SERVICE_NAME}"
if [ ! -f "$SERVICE_PATH" ]; then
  sudo tee "$SERVICE_PATH" >/dev/null <<EOF
[Unit]
Description=My App
After=network.target

[Service]
EnvironmentFile=$ENV_FILE
WorkingDirectory=$APP_DIR
ExecStart=/usr/bin/env bash -lc 'python3 app.py'
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  sudo systemctl daemon-reload
  sudo systemctl enable "$SERVICE_NAME"
fi

sudo systemctl restart "$SERVICE_NAME"
echo "Bootstrap completed."

This pattern shows a few useful ideas that make remote-exec practical in real projects.
- The file provisioner gets your script onto the machine in a predictable location, so you can run it without relying on copy-pasted inline shell.
- Instead of an environment argument, use a rendered file or pass arguments directly.
- The triggers_replace map ensures Terraform reruns the resource whenever the target host, the script content, or the effective environment changes.
- The filesha256 function is what ties re-execution to actual edits of bootstrap.sh.
Because the script is written to be idempotent, reruns converge the machine back to the desired state rather than breaking it. If you later replace the host with a new VM or change the app environment, Terraform will detect it and rerun the bootstrap automatically.
Security concerns for Terraform remote-exec provisioner
The Terraform remote-exec provisioner can be convenient for automating configuration steps on freshly created servers, but it also introduces several security concerns that should be carefully considered before use in production.
1. Exposure of sensitive credentials
Terraform needs credentials to connect to your remote host — usually via SSH keys or WinRM passwords. These are often stored in the Terraform configuration, environment variables, or even in the Terraform state file.
Because the state file stores all variable values (including sensitive ones) in plain text, private keys, passwords, or tokens can easily be exposed if the state file is shared, checked into version control, or stored in an insecure backend.
Use a remote state backend with encryption and locking to protect state.
2. Insecure command execution
When you use inline or scripts with remote-exec, Terraform transmits commands over the connection and executes them on the remote host. If your commands include sensitive data (like passwords or API keys), they may appear in:
- Terraform logs
- Shell history on the remote system
- Cloud provider console logs (if commands are logged)
Even something simple like writing a password into an environment file (echo "DB_PASS=secret" > .env) can lead to accidental leaks. Avoid embedding secrets in commands and prefer files managed by your secret store.
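One mitigation is to create the file with restricted permissions before any secret is written to it, and to read secrets from a file or secret store rather than embedding them in the command (a sketch; paths and the fallback value are illustrative):

```shell
#!/usr/bin/env bash
set -euo pipefail

ENV_FILE="${ENV_FILE:-/tmp/app.env}"

# Create the file as owner read/write only *before* any secret is
# written, so it is never world-readable, even briefly.
install -m 600 /dev/null "$ENV_FILE"

# Read the secret from a file (or a secret-store CLI) rather than
# hardcoding it; external command arguments show up in 'ps' and
# can land in shell history. The fallback value is illustrative.
DB_PASS="$(cat /tmp/db_pass 2>/dev/null || echo placeholder)"
printf 'DB_PASS=%s\n' "$DB_PASS" >> "$ENV_FILE"
```

Because printf is a shell builtin, the secret does not appear in a separate process's argument list, and the file is owner-only from the moment it exists.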
3. Lack of idempotency and predictability
remote-exec runs only during terraform apply when the resource is first created (or when re-provisioned). This means:
- If your commands change later, Terraform will not automatically rerun them unless the resource itself is replaced.
- Manual state drift can occur if people SSH into the machine and make changes.
These behaviors can create unpredictable, inconsistent environments and complicate compliance or auditing. Prefer a configuration tool for ongoing changes.
4. Network and firewall risks
remote-exec requires that Terraform can connect directly to the remote system over SSH or WinRM. To enable that:
- The instance must have a public IP or a route from the Terraform runner to the private network.
- SSH ports (22) or WinRM ports (5985/5986) must be open.
This increases the attack surface — especially when using temporary bastion hosts or opening broad CIDR ranges (0.0.0.0/0). Tighten CIDR ranges and use a bastion when possible.
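For example, the wide-open SSH rule from the demo security group can be narrowed to a trusted range (the CIDR below is a documentation placeholder):

```hcl
ingress {
  description = "SSH from the office/VPN range only"
  from_port   = 22
  to_port     = 22
  protocol    = "tcp"
  cidr_blocks = ["203.0.113.0/24"] # placeholder: your trusted CIDR
}
```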
5. Untrusted or dynamic scripts
Using remote-exec to download and run scripts from external sources (e.g., GitHub URLs or curl-piped bash commands) creates a significant attack vector. If the source changes or is compromised, Terraform will run arbitrary code as root.
6. Limited auditing and access control
Provisioners execute commands directly, which bypasses Terraform’s usual declarative state model. There’s no clear record of what was run beyond logs and shell history. That makes compliance and auditing difficult, particularly for regulated environments.
Key points
Terraform remote-exec can jump-start servers by running commands right after creation. Use it for quick bootstraps and one-off fixes, but keep it small.
While it can be a powerful tool for lightweight setup tasks, it’s important to keep your infrastructure automation modular. Use remote-exec primarily for quick bootstrapping rather than for complete system configuration workflows. Prefer images, cloud-init, or a configuration management tool before adding a provisioner.
Terraform is really powerful, but to achieve an end-to-end secure GitOps approach, you need to use a product that can run your Terraform workflows. Spacelift takes managing Terraform to the next level by giving you access to a powerful CI/CD workflow and unlocking features such as:
- Policies (based on Open Policy Agent)
- Multi-IaC workflows
- Self-service infrastructure
- Integrations with any third-party tools
If you want to learn more about Spacelift, create a free account today or book a demo with one of our engineers.
Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.
Terraform management made easy
Spacelift effectively manages Terraform state, handles more complex workflows, and supports policy as code, programmatic configuration, context sharing, drift detection, resource visualization, and many more features.
Frequently asked questions
What is the difference between Ansible and Terraform remote-exec?
Terraform’s remote-exec is a lightweight provisioner for initial setup or quick fixes, while Ansible is a robust system for ongoing, secure, and repeatable configuration management. In modern infrastructure workflows, Terraform and Ansible are complementary. Terraform defines what infrastructure exists, and Ansible defines how it’s configured and maintained.
