Managing cloud infrastructure manually can quickly become complex and error-prone, especially as your environments grow. In this guide, we’ll focus on the Terraform EC2 module, a widely used community module that streamlines deploying Amazon EC2 instances.
The terraform-aws-modules/ec2-instance module is a reusable, community-maintained Terraform module for deploying EC2 instances on AWS with minimal configuration. It supports a wide range of features, including launching multiple instances, attaching EBS volumes, assigning IAM roles, and configuring networking.
This module abstracts away much of the boilerplate required to provision EC2 resources directly using aws_instance, allowing users to launch instances with a simple set of input variables.
It also supports advanced options like user data scripts, CloudWatch monitoring, and key pair management.
This first example sets up one Amazon Linux 2023 instance in a specific subnet. It pulls the latest AMI ID from SSM Parameter Store, selects a tiny instance type, and applies a simple name tag.
When you run terraform apply, Terraform reads the SSM parameter to get the current AMI, creates the instance, waits until it is ready, and then records the resulting attributes in state.
Note: Using an SSM parameter ensures that you always get the latest Amazon Linux image, but this can trigger instance replacement when the AMI is updated, so it’s worth pinning a version in production.
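One way to pin is to pass a fixed AMI ID instead of resolving it from SSM. Here is a minimal sketch; the variable name and the default value are placeholders introduced for illustration, not part of the example below:
# Sketch: pin the AMI explicitly to avoid surprise replacements.
variable "pinned_ami_id" {
  type    = string
  default = "ami-xxxxxxxxxxxxxxxxx" # placeholder: replace with an AMI you have validated
}

# In the module block, reference the pinned value instead of the SSM parameter:
#   ami = var.pinned_ami_id
With that caveat in mind, here is the full first example.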
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

provider "aws" {
  region = var.region
}

# Latest Amazon Linux 2023 AMI via SSM
data "aws_ssm_parameter" "al2023" {
  name = "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"
}

module "ec2_basic" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "~> 5.0"

  name          = "demo-ec2-basic"
  ami           = data.aws_ssm_parameter.al2023.value
  instance_type = "t3.micro"
  subnet_id     = var.subnet_id
  key_name      = var.key_name

  tags = {
    Project = "demo"
  }
}

variable "region" { type = string }
variable "subnet_id" { type = string }
variable "key_name" { type = string }

output "basic_instance_id" {
  value = module.ec2_basic.id
}

When this finishes, you will have exactly one EC2 instance in the subnet you specified.
You can retrieve its ID from the output, and you will see the instance named demo-ec2-basic in the AWS console.
Note: Examples 2 and 3 are incomplete code blocks, as they omit the required terraform block. While the snippets illustrate the intended configuration, they won’t run properly as standalone files unless the terraform block is included.
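If you want to run them on their own, prepend the same terraform block used in the first example:
terraform {
  required_version = ">= 1.5.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}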
Now you want the instance to be reachable on HTTP and to configure itself on first boot. This example adds a small security group that allows inbound 80 from anywhere, supplies a user data script that installs and starts Nginx, and increases the root disk to 16 GiB on gp3 with encryption.
Terraform first creates the security group, then creates the EC2 instance and attaches it to that group. During boot, cloud-init executes your user data script to install and start Nginx.
Make sure the AMI you’re using supports dnf (Amazon Linux 2023). If you use Amazon Linux 2, replace dnf with yum. For better reusability, consider parameterizing security group rules and storage configuration.
provider "aws" {
region = var.region
}
data "aws_ssm_parameter" "al2023" {
name = "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"
}
resource "aws_security_group" "web" {
name = "sg-web"
description = "Allow HTTP"
vpc_id = var.vpc_id
ingress {
description = "HTTP"
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
tags = {
Name = "sg-web"
}
}
locals {
user_data = <<-EOT
#!/bin/bash
set -euxo pipefail
dnf -y update
dnf -y install nginx
systemctl enable nginx
systemctl start nginx
echo "hello from terraform" > /usr/share/nginx/html/index.html
EOT
}
module "ec2_web" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "~> 5.0"
name = "demo-ec2-web"
ami = data.aws_ssm_parameter.al2023.value
instance_type = "t3.micro"
subnet_id = var.subnet_id
vpc_security_group_ids = [aws_security_group.web.id]
user_data = local.user_data
key_name = var.key_name
root_block_device = [{
volume_size = 16
volume_type = "gp3"
encrypted = true
}]
tags = {
Role = "web"
Project = "demo"
}
}
variable "region" { type = string }
variable "vpc_id" { type = string }
variable "subnet_id" { type = string }
variable "key_name" { type = string }
output "web_public_ip" {
value = module.ec2_web.public_ip
}

After apply, open a browser to the public IP from the output and you’ll see Nginx running with your custom message. Allowing HTTP (port 80) from all IPs is fine for testing, but in production you should restrict access or front the instance with a load balancer.
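As a rough sketch of how you might parameterize and tighten the HTTP rule, you could drive the ingress CIDRs from a variable; allowed_http_cidrs below is a hypothetical name introduced for this sketch:
# Sketch: restrict inbound HTTP to known CIDR ranges instead of 0.0.0.0/0.
variable "allowed_http_cidrs" {
  type    = list(string)
  default = ["10.0.0.0/8"] # example: internal ranges only
}

# In the security group, reference the variable instead of the open range:
#   ingress {
#     description = "HTTP"
#     from_port   = 80
#     to_port     = 80
#     protocol    = "tcp"
#     cidr_blocks = var.allowed_http_cidrs
#   }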
You may want multiple instances sharing common tags but differing by subnet or instance type. This pattern uses for_each to define multiple EC2 instances in one module block. Terraform creates one per map entry and merges shared and role-specific tags.
Using for_each provides predictable scaling — you can add or remove instances simply by editing the map without affecting others.
provider "aws" {
region = var.region
}
data "aws_ssm_parameter" "al2023" {
name = "/aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64"
}
variable "region" { type = string }
variable "fleet" {
type = map(object({
subnet_id : string
instance_type : string
role : string
}))
default = {
app = {
subnet_id = "subnet_id = "subnet-REPLACE-ME-1"
instance_type = "t3.micro"
role = "app"
}
db = {
subnet_id = "subnet_id = "subnet-REPLACE-ME-2"
instance_type = "t3.small"
role = "db"
}
}
}
variable "common_tags" {
type = map(string)
default = {
Project = "demo"
Owner = "platform"
Env = "dev"
}
}
module "servers" {
source = "terraform-aws-modules/ec2-instance/aws"
version = "~> 5.0"
for_each = var.fleet
name = "demo-${each.key}"
ami = data.aws_ssm_parameter.al2023.value
instance_type = each.value.instance_type
subnet_id = each.value.subnet_id
tags = merge(var.common_tags, {
Role = each.value.role
})
}
output "fleet_ids" {
value = { for k, m in module.servers : k => m.id }
}
output "fleet_private_ips" {
value = { for k, m in module.servers : k => m.private_ip }
}

After applying, you’ll get two instances: one app and one db. Removing an entry from the map deletes only that instance. Terraform leaves the others untouched.
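For example, the fleet can be managed from terraform.tfvars without touching the module block; the worker entry below is a hypothetical addition and its subnet ID is a placeholder:
# terraform.tfvars (sketch): extend the map to add a third instance.
fleet = {
  app = {
    subnet_id     = "subnet-REPLACE-ME-1"
    instance_type = "t3.micro"
    role          = "app"
  }
  db = {
    subnet_id     = "subnet-REPLACE-ME-2"
    instance_type = "t3.small"
    role          = "db"
  }
  worker = {
    subnet_id     = "subnet-REPLACE-ME-3"
    instance_type = "t3.micro"
    role          = "worker"
  }
}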
The EC2 module also supports attaching additional volumes, assigning multiple network interfaces, or customizing IAM roles, so consider including those if your fleet needs them.
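As a rough sketch of what that could look like, assuming the module’s documented inputs for extra EBS volumes and IAM instance profiles (verify the exact names in the registry docs for your pinned version):
# Sketch: add a data volume and a managed IAM role to each fleet member.
# Input names assume the terraform-aws-modules/ec2-instance v5 interface.
module "servers" {
  # ...same arguments as the fleet example above...

  ebs_block_device = [{
    device_name = "/dev/sdf"
    volume_size = 50
    volume_type = "gp3"
    encrypted   = true
  }]

  create_iam_instance_profile = true
  iam_role_policies = {
    ssm = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  }
}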
The Terraform EC2 module automates EC2 instance creation using parameterized, reusable code. It improves consistency, reduces manual work, and helps manage scalable AWS environments efficiently.
For production environments, always pin module versions, restrict inbound network rules, and manage secrets securely using AWS Systems Manager Parameter Store or HashiCorp Vault.
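Pinning a module version means constraining to an exact release rather than a range; the release number below is illustrative, so substitute the one you have tested:
module "ec2_web" {
  source  = "terraform-aws-modules/ec2-instance/aws"
  version = "5.6.1" # exact pin instead of "~> 5.0"

  # ...
}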
Terraform is really powerful, but to achieve an end-to-end secure GitOps approach, you need to use a product that can run your Terraform workflows. Spacelift takes managing Terraform to the next level by giving you access to a powerful CI/CD workflow and unlocking features such as:
- Policies (based on Open Policy Agent)
- Multi-IaC workflows
- Self-service infrastructure
- Integrations with any third-party tool
If you want to learn more about Spacelift, create a free account today or book a demo with one of our engineers.
Note: Newer versions of Terraform are released under the BUSL license, but everything up to and including version 1.5.x remains open source. OpenTofu is an open-source fork of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, having been forked from Terraform version 1.5.6.
Automate Terraform deployments with Spacelift
Automate your infrastructure provisioning, and build more complex workflows based on Terraform using policy as code, programmatic configuration, context sharing, drift detection, resource visualization, and many more.
