
How to Use AWS for Infrastructure as Code (IaC)


In 2004, Amazon Web Services (AWS) launched its first publicly available cloud service, the Simple Queue Service (SQS). Today, AWS has the largest market share for public cloud services, ahead of Microsoft Azure and Google Cloud Platform (GCP). Given its position as the oldest and current leading public cloud provider, what does the infrastructure-as-code (IaC) landscape look like in the context of AWS?

CloudFormation is AWS’s native service for IaC. It was released in 2011, and much has happened since then, both with CloudFormation and the surrounding IaC landscape.

In this blog post, we will discuss five popular options for infrastructure as code on AWS:

  • AWS CloudFormation
  • AWS Cloud Development Kit (CDK)
  • Terraform
  • OpenTofu
  • Pulumi

If you want to follow along with the examples on your own system, you must have AWS credentials available in your terminal. The simplest way is to download and configure the AWS CLI. The details are outside the scope of this blog post, but you can read up on how to do this in the AWS documentation.
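For reference, running aws configure and entering your credentials is usually all you need; the region and output format shown here are just example values:

$ aws configure
AWS Access Key ID [None]: <your access key id>
AWS Secret Access Key [None]: <your secret access key>
Default region name [None]: eu-west-1
Default output format [None]: json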

All the tools we will discuss can automatically pick up the AWS credentials from the AWS CLI configuration file.

All the sample commands in this post have been tested on a MacBook in a Z Shell (zsh) environment.

What is Infrastructure as Code?

Infrastructure as Code (or IaC) is the practice of defining cloud resources and related infrastructure through a configuration language or a full-fledged programming language.

IaC can be classified into two broad categories:

  • In declarative IaC, you define the desired state of your infrastructure, and it is up to the tools you use to create it to match that state. You define the destination, and the tool takes care of the journey. 

Examples include Terraform, OpenTofu, and AWS CloudFormation. The distinctive feature of declarative IaC is that the order of code statements does not matter. What is important is how the infrastructure is configured and how resources are related to each other.

  • In imperative IaC, you define the steps to take your infrastructure to the desired state. 

Examples include AWS CDK and Pulumi, as well as custom Bash or PowerShell scripts. The distinctive feature of imperative IaC is that the order of code statements does matter. Switching the order of two commands or code statements produces a different outcome or even an error.

The tools we will explore in this blog post can broadly be classified into one of these two categories. However, both AWS CDK and Pulumi have elements of both declarative and imperative IaC.

The same is true for Terraform and OpenTofu, which can both include elements of imperative as well as declarative IaC. CDK is special in that it is imperative in nature but produces CloudFormation templates, which are inherently declarative.


Treating your infrastructure as code means storing the code in a source code repository, making changes to the code through peer review processes, and building CI/CD pipelines to move a change from commit to production.

Demo infrastructure

To compare the different tools, we must have a sample cloud architecture to work with. A good example contains a few different types of resources with dependencies between them.

With that in mind, the architecture we will be working with consists of a virtual private cloud (VPC) with a single subnet, a security group, an internet gateway, a route table, and an EC2 instance with an SSH public and private key pair.

The goal is to set up an EC2 instance that we can reach via SSH from our local system. It is not a production-ready system, but it is an example with just enough complexity to get a feel for the different options for infrastructure as code.

An overview of the architecture is given in the following figure:

If you are not familiar with the AWS ecosystem of cloud resources that are part of this architecture, a short description of the important pieces follows below:

  1. A Virtual Private Cloud (VPC) is a virtual network. Think of this as the digital equivalent of your private home or office network.
  2. A VPC can be split into multiple smaller networks called subnetworks or subnets. Splitting a VPC into smaller parts is a best practice from a security and management standpoint.
  3. An internet gateway allows network resources in a VPC to talk to the internet.
  4. How traffic is routed inside your subnets is configured in a route table. This is a table of IP address ranges and their destinations. In our case, we will route all traffic through the internet gateway.
  5. A security group is a firewall you can attach to network resources to configure what traffic is allowed in or out.
  6. You specify what is allowed or denied through security group rules. In our case, we will have one rule to allow incoming traffic on port 22 (SSH) from the public internet.
  7. An Elastic Compute Cloud (EC2) instance is a virtual machine. In this architecture we will create an EC2 instance running Amazon Linux 2023.
  8. To access the EC2 instance via SSH, we will also need to set up an SSH public and private key pair and add the public key to the EC2 instance.

Resources in AWS are created in a geographical region. Each region is split into multiple availability zones. AWS has many regions around the globe. In this blog post, we will be using the region in Ireland, known as eu-west-1 in AWS terminology.

That was a high-level overview of what we will be creating. This architecture will allow us to get an understanding of the process of using each of the five tools for infrastructure as code on AWS.

Infrastructure as Code on AWS with AWS CloudFormation

AWS CloudFormation was launched in 2011 and was a game-changer for creating cloud infrastructure on AWS. Instead of clicking in the AWS console to create resources manually or writing complicated shell scripts, you could suddenly declaratively describe the infrastructure you wanted in a configuration language and have CloudFormation bring it to life.

CloudFormation uses a declarative approach to infrastructure as code, with templates written in either JSON or YAML.

A deployed instance of a CloudFormation template is known as a CloudFormation stack. The stack is an important concept because it represents the current state of the world as far as CloudFormation is concerned. 

The CloudFormation template you use to create a stack is your desired state. To make changes to your infrastructure, you update your template and apply the new template to the stack. This, in turn, makes CloudFormation update the current state of the world to match your desired state.

CloudFormation is executed on the server side. There is no option to execute it locally, which is a major difference from how Terraform works.

Step 1. Install and configure the AWS CLI

To get started with CloudFormation, you must install and configure the AWS CLI on your local system. The details of this are beyond the scope of this blog post, but you can find instructions for how to do this in the AWS documentation.

Once you have the AWS CLI installed, verify that it is working:

$ aws --version
aws-cli/2.17.36 Python/3.11.9 Darwin/23.6.0 source/arm64

We need to decide whether to write the CloudFormation templates using JSON or YAML. In almost all situations, it is easier to work with YAML files, so we will use them. However, in some automation scenarios, where you are programmatically working with your CloudFormation templates, it can make sense to use JSON.

Create a new working directory and cd into it:

$ mkdir cloudformation-demo && cd cloudformation-demo

Create a new file named awstemplate.yaml in your favorite text editor to get started. Add the following boilerplate YAML code to indicate that we are creating a CloudFormation template:

AWSTemplateFormatVersion: "2010-09-09"
Description: AWS CloudFormation Demo Infrastructure

Parameters:
  # ...

Mappings:
  # ...

Resources:
  # ...

Outputs:
  # ...

This is currently not a valid CloudFormation template. We have added some metadata along with headers for parameters, mappings, resources, and outputs. We will fill in the details in the following subsections.

Step 2. Create the network architecture

We begin by creating the network architecture, which consists of a VPC, a subnet, an internet gateway, a security group, and a route table. 

In the Parameters section, add a parameter named VpcCidrBlock for the VPC CIDR block (the range of IP addresses that this network contains):

Parameters:
  VpcCidrBlock:
    Description: The CIDR block for the VPC
    Type: String
    Default: 10.0.0.0/16

Next, under the Resources keyword in the root of the document, add the VPC and subnet resources:

Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties: 
      CidrBlock: !Ref VpcCidrBlock

  Subnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      MapPublicIpOnLaunch: true
      CidrBlock: !Select [ 0, !Cidr [ !Ref VpcCidrBlock, 1, 8 ]]
      AvailabilityZone: !Select [ 0, Fn::GetAZs: !Ref "AWS::Region" ]

Each resource has a logical name in the template. In this case, the resources are named simply VPC and Subnet. A resource has a Type and a number of Properties, which vary depending on the type of resource you are creating.

If this is the first CloudFormation template you have seen, you probably have many questions. One thing that quickly gets confusing in CloudFormation is the use of intrinsic functions (e.g., !Select, !Ref, Fn::GetAZs).
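For orientation, !Ref is the short form of the Ref function, which returns the value of a parameter or an identifier of a resource, and !Select picks an element from a list. Short and full forms are interchangeable, as in this standalone snippet (not part of our template):

# Short form
VpcId: !Ref VPC

# Full form
VpcId:
  Ref: VPC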

So far, we have an empty virtual network, so we should keep adding the network components required to reach the EC2 virtual machine via SSH. The internet gateway requires two resources.

Add the following resources below the other resources in the template:

InternetGateway:
  Type: AWS::EC2::InternetGateway

AttachGateway:
  Type: AWS::EC2::VPCGatewayAttachment
  Properties:
    VpcId: !Ref VPC
    InternetGatewayId: !Ref InternetGateway

The internet gateway itself does not require any configuration. However, we must add a specific resource that represents the attachment of the internet gateway to the VPC. Without this connection we would have an orphaned internet gateway that we can’t use for anything.

We control the routing in our VPC through a route table. A route table can have one or many routes. As with the internet gateway, we must use additional resources to specify in which subnet the route table should take effect. Add the following resources below the other resources in the template:

RouteTable:
  Type: AWS::EC2::RouteTable
  Properties:
    VpcId: !Ref VPC

DefaultRoute:
  Type: AWS::EC2::Route
  DependsOn: AttachGateway
  Properties:
    RouteTableId: !Ref RouteTable
    DestinationCidrBlock: 0.0.0.0/0
    GatewayId: !Ref InternetGateway

SubnetRouteTableAssociation:
  Type: AWS::EC2::SubnetRouteTableAssociation
  Properties:
    SubnetId: !Ref Subnet
    RouteTableId: !Ref RouteTable

The important part is that we have created a route that sends all traffic (DestinationCidrBlock: 0.0.0.0/0) to the internet gateway.

The final network resource we must add is the security group. The security group specifies what traffic is allowed in (ingress traffic) and what traffic is allowed out (egress traffic) from an EC2 instance or other network-connected resource.

Add the security group resource under the rest of the resources in the template:

SecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    VpcId: !Ref VPC
    GroupDescription: EC2 instance
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 0.0.0.0/0

The security group is configured with one ingress rule allowing traffic on port 22 (SSH) from anywhere in the world (0.0.0.0/0).

Note that, in general, you should avoid opening port 22 to the world, but since this is a demo, we will allow it. If you want to, you can change the value of the CidrIp property to <your public IP>/32 to pin it to your exact IP address and nothing else.
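One quick way to look up your current public IP is AWS’s own check-ip endpoint (the address shown here is just a placeholder):

$ curl -s https://checkip.amazonaws.com
203.0.113.42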

Step 3. Create the EC2 instance

To create the EC2 instance, we must have an Amazon Machine Image (AMI) to base it on.

Apart from this, we will need an SSH key. There is no way to create a new SSH keypair with CloudFormation and obtain the private key so that you can connect to the EC2 instance from your local system. This means we must first create a local SSH keypair and provide CloudFormation with the public key during deployment.

Create a new keypair in your working directory:

$ ssh-keygen -t rsa -b 4096 -f ./ec2-key

The details of the ssh-keygen command are outside the scope of this blog post. It is sufficient to know that this command creates a public and private keypair (named ec2-key.pub and ec2-key, respectively) that we can use to connect to the virtual machine via SSH.

Next, add a new parameter in the Parameters section of your CloudFormation template:

PublicKeyMaterial:
  Description: Public key to add to the EC2 instance
  Type: String

As mentioned, an EC2 instance is created from an AMI. Unfortunately, we must know the ID of the AMI we intend to use. The ID is different depending on what AWS region you are creating the EC2 instance in. You can add support for multiple regions to your CloudFormation template by adding a mapping from the AWS region to the AMI ID. 

In the Mappings section of the template, add the following content:

Mappings:
  Region:
    eu-west-1:
      ami: ami-04e49d62cf88738f1
    us-east-1:
      ami: ami-066784287e358dad1

This maps from a given AWS region to a supported AMI ID for that specific region. These particular AMI IDs correspond to an Amazon Linux 2023 machine image. You should add each region that you want your template to support. The AMI IDs can be obtained from the AWS console or by using the AWS CLI.
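For example, AWS publishes the latest Amazon Linux 2023 AMI ID for each region as a public SSM parameter, so one way to look it up with the AWS CLI is roughly the following (the exact parameter name depends on the kernel and architecture you want):

$ aws ssm get-parameters \
    --names /aws/service/ami-amazon-linux-latest/al2023-ami-kernel-default-x86_64 \
    --region eu-west-1 \
    --query 'Parameters[0].Value' \
    --output text
ami-<latest AL2023 AMI ID for eu-west-1>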

With all the prerequisites for the EC2 instance out of the way, we can add resources for the keypair and the EC2 instance to the Resources section of our template:

KeyPair:
  Type: AWS::EC2::KeyPair
  Properties:
    KeyName: Demo
    PublicKeyMaterial: !Ref PublicKeyMaterial

Instance:
  Type: AWS::EC2::Instance
  Properties:
    KeyName: !Ref KeyPair 
    ImageId: !FindInMap [ Region, !Ref AWS::Region, ami]
    InstanceType: t3.micro
    SubnetId: !Ref Subnet
    SecurityGroupIds:
      - !Ref SecurityGroup

The KeyPair resource references the input parameter PublicKeyMaterial. A value for this parameter will be provided at deployment time.

The Instance resource uses the correct AMI ID in the ImageId property by retrieving the value from the Region map using the !FindInMap function.

Finally, to simplify connecting to the EC2 instance we add an output to the Outputs section of our CloudFormation template. The value for the output is the SSH connection command needed to connect to the instance:

Outputs:
  SshCommand:
    Value: !Join ["", ["ssh ", "-i ec2-key ", "ec2-user@", !GetAtt Instance.PublicIp ]]

The default user for an Amazon Linux 2023 instance is ec2-user.

Step 4. Deploy the demo infrastructure

As mentioned, a deployed instance of a CloudFormation template is called a CloudFormation stack. There are two different AWS CLI commands you can run to create a stack. 

The most convenient command is:

$ aws cloudformation deploy \
    --stack-name demo \
    --template-file awstemplate.yaml \
    --parameter-overrides PublicKeyMaterial="$(cat ec2-key.pub)"

The aws cloudformation deploy command creates a stack if it does not exist and updates a stack if it does exist. This command is idempotent, which is why it is the preferred command. 

The alternative command is:

$ aws cloudformation create-stack \
    --stack-name demo \
    --template-body file://awstemplate.yaml \
    --parameters ParameterKey=PublicKeyMaterial,ParameterValue="$(cat ec2-key.pub)"

The downside of the aws cloudformation create-stack command is that it is not idempotent, so rerunning the same command will result in a failure. You can update a stack using the aws cloudformation update-stack command or just use the aws cloudformation deploy command described above.

Run an additional command to obtain the value of the output that we added to the template:

$ aws cloudformation describe-stacks \
    --stack-name demo \
    --query "Stacks[0].Outputs[?OutputKey=='SshCommand'].OutputValue" \
    --output text
ssh -i ec2-key ec2-user@<your instance public ip>

Run the SSH command to verify that the connection works:

$ ssh -i ec2-key ec2-user@<your instance public ip>
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
[ec2-user@ip-10-0-0-134 ~]$ echo "Hello CloudFormation!"
Hello CloudFormation!

Once you are done, exit from the EC2 instance (using the exit command) and delete the CloudFormation stack to avoid unnecessary costs:

$ aws cloudformation delete-stack --stack-name demo

The command returns immediately, and the stack deletion happens asynchronously on AWS.

Reflections on working with AWS CloudFormation

It is clear from the preceding walkthrough that working with CloudFormation requires intimate knowledge of how services in AWS fit together and exactly what resources are needed to set up a successful cloud architecture. There is no level of abstraction; you are working with the raw building blocks of AWS. However, this also lets you learn the AWS platform at a deep level.

 

You can work with nested stacks in CloudFormation if you want to split your infrastructure into modular pieces.
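A nested stack is just another resource in a parent template that points to a child template stored in S3. A minimal, hypothetical sketch (the bucket and template names are placeholders):

NetworkStack:
  Type: AWS::CloudFormation::Stack
  Properties:
    TemplateURL: https://<your-bucket>.s3.amazonaws.com/network.yaml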

 

The CloudFormation service offers many features related to your templates and resources that we did not have time to cover. These include working with Stack Sets to deploy infrastructure across multiple regions simultaneously and drift detection to discover when changes happen to your infrastructure outside of your CloudFormation workflows.

Infrastructure as Code on AWS with AWS Cloud Development Kit (CDK)

The AWS Cloud Development Kit, or AWS CDK for short, was introduced in 2018. CDK was another game-changer in the landscape of infrastructure as code. CDK is imperative in nature, and you write CDK code using a high-level programming language (e.g., Python, TypeScript, C#, and more).

The source code you write is eventually converted (or synthesized) into CloudFormation templates for you, and a CDK application could consist of one or many CloudFormation stacks. However, CDK is an abstraction on top of CloudFormation, and in a sense, it is not necessary to even be aware that CloudFormation is part of the process under the hood.

CDK has a concept of constructs. A construct can represent a single AWS resource or a larger piece of infrastructure consisting of multiple underlying resources. This is similar in spirit to a module in Terraform.

It allows users to create powerful abstractions. Imagine packaging the architecture we set up with CloudFormation above into a construct named LinuxEnvironment. There is no need for users of this LinuxEnvironment to know all the pieces of underlying cloud infrastructure needed to connect to a Linux machine.
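As a purely illustrative sketch (not something we build in this post), such a construct could hide all of the underlying resources behind a small interface:

import { Construct } from 'constructs';
import * as ec2 from 'aws-cdk-lib/aws-ec2';

// Hypothetical construct bundling a VPC, security group, key pair, and EC2 instance.
export class LinuxEnvironment extends Construct {
  constructor(scope: Construct, id: string) {
    super(scope, id);
    const vpc = new ec2.Vpc(this, 'vpc');
    // ... security group, key pair, and EC2 instance would be added here
  }
}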

Step 1. Set up AWS CDK

To get started with the AWS CDK, you must install it locally along with Node.js and additional prerequisites depending on the language you will use for the CDK. Again, the details of how to do this are beyond the scope of this blog post, but there are guides available in the AWS documentation.

In the following examples, I will use TypeScript, but support is also available for JavaScript, Python, Java, C#, and Go.

When you have installed the CDK, make sure that it is working:

$ cdk --version
2.154.1 (build febce9d)

Create a new working directory named cdk-demo and initialize a new CDK project inside of it (the output from these commands is truncated for brevity):

$ mkdir cdk-demo && cd cdk-demo
$ cdk init app --language typescript
Applying project template app for typescript
Initializing a new git repository...
Executing npm install...
✅ All done!

You will also have to run a bootstrap command before you can create your first stacks with CDK in a given region. This command creates some required resources for CDK to function in your AWS environment. 

This process takes a few minutes to complete:

$ cdk bootstrap
...
✅  Environment aws://<account id>/<aws region> bootstrapped.

The cdk init command creates a new project from a template, giving you a skeleton application to work with. The full details of how to write your CDK app are beyond the scope of this blog post, but we will see enough to understand how writing a CDK app works.

A huge benefit of using TypeScript or another typed language is the great support you can get in your IDE or text editor.

Two important files have been created for you. The first, bin/cdk-demo.ts, is the entrypoint of your infrastructure:

#!/usr/bin/env node
import 'source-map-support/register';
import * as cdk from 'aws-cdk-lib';
import { CdkDemoStack } from '../lib/cdk-demo-stack';

const app = new cdk.App();
new CdkDemoStack(app, 'CdkDemoStack', {});

In this file, an instance of the CdkDemoStack class is instantiated. If your infrastructure consists of multiple stacks, this file is where you should instantiate each of them. The CdkDemoStack class is defined in the lib/cdk-demo-stack.ts file:

import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';

export class CdkDemoStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);
    // this is where your infrastructure will go
  }
}

All the work we will do in the following subsections will be in the lib/cdk-demo-stack.ts file.

Step 2. Create the network architecture

All the infrastructure we will create lives in the EC2 service space. To work with these resources, we must import the EC2 library. Add the following import statement to the top of the file:

import * as ec2 from "aws-cdk-lib/aws-ec2";

To create a fully functioning VPC with multiple subnets, an internet gateway, route tables, and much more, we could create an instance of the ec2.Vpc class in our CdkDemoStack class:

new ec2.Vpc(this, "vpc")

This shows the power of abstractions in the CDK: this one line generates hundreds of lines of CloudFormation describing a fully configured VPC. Unfortunately, it is also more than we asked for, so to mimic the example we built with CloudFormation, we must configure our VPC a bit:

const vpc = new ec2.Vpc(this, "vpc", {
  maxAzs: 1,
  subnetConfiguration: [
    {
      name: "subnet",
      cidrMask: 24,
      subnetType: ec2.SubnetType.PUBLIC,
    },
  ],
  natGateways: 0,
})

We tell CDK to use a single availability zone, and we specify a single subnet of the public type. Creating a public subnet automatically adds an internet gateway and associates it with the VPC.

There is one more network resource we must create ourselves: the security group. We create an instance of the ec2.SecurityGroup class and configure it to allow SSH traffic from anywhere:

const securityGroup = new ec2.SecurityGroup(this, "sg", { vpc })
securityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.SSH)

To associate the security group with the VPC, we simply pass the vpc instance into the security group configuration. We also see that we do not need to know that “anywhere” corresponds to the CIDR block 0.0.0.0/0; instead, we use the ec2.Peer.anyIpv4() method. Likewise, we do not need to know that SSH corresponds to port 22; we just use the ec2.Port.SSH property.

Step 3. Create the EC2 instance

To create and then connect to the EC2 instance, we need a public and private keypair, and we must add the public key to the instance. We could use any available Node.js package to create the keypair, but to avoid introducing external dependencies, we will create it using the ec2.KeyPair class:

const keyPair = new ec2.KeyPair(this, "key")

How do we obtain the private key, you might ask? We will see how to do this in the next subsection.

What is left is to create the EC2 instance using the ec2.Instance class:

const instance = new ec2.Instance(this, "instance", {
  vpc,
  securityGroup,
  keyPair,
  instanceType: ec2.InstanceType.of(
    ec2.InstanceClass.T3,
    ec2.InstanceSize.MICRO
  ),
  machineImage: ec2.MachineImage.latestAmazonLinux2023(),
  vpcSubnets: vpc.selectSubnets({
    subnetType: ec2.SubnetType.PUBLIC,
  }),
})

A few things to note:

  • We pass in the vpc, securityGroup, and keyPair instances directly to the configuration.
  • We can define the EC2 instance type using helper classes and methods.
  • We obtain the AMI ID simply from the latestAmazonLinux2023 method on the ec2.MachineImage class.

Finally, we add an output for the SSH command:

new cdk.CfnOutput(this, "ssh", {
  value: `ssh -i ec2-key ec2-user@${instance.instancePublicIp}`
})

That was it. We were able to define our infrastructure in fewer lines of code than when using CloudFormation, and our editor (VS Code, in our case) helped us a lot.

Step 4. Deploy the demo infrastructure

You can preview the CloudFormation template that the CDK code produces by running the synthesize or cdk synth command:

$ cdk synth

This command outputs the CloudFormation template to the terminal. You can pipe the output to a YAML file if you want to review it:

$ cdk synth > template.yaml

In general, running the synth command is not required, but you always have the option of using CDK to produce the CloudFormation template(s) and then using the regular AWS CloudFormation commands to deploy the infrastructure, if that makes sense in your environment.

To create the infrastructure using CDK, you issue the cdk deploy command (the output is truncated for brevity):

$ cdk deploy
CdkDemoStack: deploying... [1/1]
CdkDemoStack: creating CloudFormation changeset...

 ✅  CdkDemoStack

✨  Deployment time: 175.84s

Outputs:
CdkDemoStack.ssh = ssh -i ec2-key ec2-user@<your instance public IP>

✨  Total time: 177.81s

As you can see, the deployment took almost three minutes. Copy the SSH command from the output for later.

If there are any changes to IAM permissions or network traffic rules, the output will indicate these changes clearly and ask you to approve them before continuing. This is a safety mechanism, since changes to IAM permissions and network rules can have a huge impact on your environment. In this case, there are changes to both network traffic rules and IAM permissions. You might be surprised at the IAM permission changes, but this is simply because CDK adds additional resources for us (e.g., an EC2 instance profile).

To verify that we can access the EC2 instance via SSH, we must get the private key. It turns out that CDK stores the private key for us in an AWS Systems Manager (SSM) parameter. We can use the AWS CLI to obtain the value of the parameter:

$ aws ssm get-parameters-by-path \
    --path "/ec2/keypair/" \
    --with-decryption \
    --recursive \
     --query 'Parameters[0].Value' --output text > ec2-key

Edit the file permissions for the ec2-key file; otherwise, the SSH client will reject it due to overly permissive permissions:

$ chmod 400 ec2-key

Connect to the instance using the SSH command you copied above:

$ ssh -i ec2-key ec2-user@<your instance public IP>
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
[ec2-user@ip-10-0-0-58 ~]$ echo "Hello CDK!"
Hello CDK!

Reflections on working with AWS CDK

We saw that, through the use of constructs with sensible defaults and behavior, together with helper classes and methods, working with the AWS CDK is easier than working with raw CloudFormation.

 

There are still some things we must know about resources in AWS to create a working infrastructure, but we got a lot more help than we did from using CloudFormation directly. 

 

What we did not see was the use of even higher-level abstractions, or CDK patterns. This is a topic for another blog post.

Infrastructure as Code on AWS with Terraform

Terraform appeared in 2014 as an alternative to AWS CloudFormation. It sprang from the desire to have a “CloudFormation for everything.”

With both AWS CloudFormation and AWS CDK, the orchestration of creating all the resources happens on AWS, with the CloudFormation engine in charge. This changed with Terraform, where execution happens locally, wherever you run the Terraform binary, aided by a vast ecosystem of providers for interacting with external systems through their APIs.

Terraform is similar in spirit to AWS CloudFormation in the sense that it is declarative infrastructure-as-code. Terraform uses a domain-specific language called HashiCorp Configuration Language (HCL). 

HCL has many features that make it easier to work with than the YAML or JSON files of AWS CloudFormation. Most prominent is the ease of splitting the infrastructure into multiple files and having Terraform automatically stitch them together into one configuration. It is also possible to write Terraform configurations using JSON, but this is uncommon in practice.

Step 1. Install Terraform

To get started with Terraform, you need to install the Terraform binary locally. The details of how to do this are beyond the scope of this blog post, but you can find instructions in the HashiCorp documentation or in our Terraform installation tutorial.

Once you have Terraform installed locally, verify that it works:

$ terraform version
Terraform v1.5.7
on darwin_arm64

Your version of Terraform might differ, but everything that follows should work with your version unless it is very old (before the 1.0 release).

Create a new working directory and cd into it:

$ mkdir terraform-demo && cd terraform-demo

Create a new file named main.tf using your favorite text editor. As mentioned, you can split your Terraform configuration into multiple files, but we will not do that for this simple demo.

Add the following boilerplate code to main.tf to configure Terraform to use the AWS provider to allow you to create resources in AWS:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "5.64.0"
    }
  }
}

variable "aws_region" {
  type    = string
  default = "eu-west-1"
}

provider "aws" {
  region = var.aws_region
}

Step 2. Create the network architecture

Add resources for the AWS VPC and the subnet. Resources can be added anywhere in the root of the main.tf file. Also, add a new variable for the VPC CIDR block:

variable "vpc_cidr_block" {
  type    = string
  default = "10.0.0.0/16"
}

resource "aws_vpc" "this" {
  cidr_block = var.vpc_cidr_block
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.this.id
  cidr_block = cidrsubnet(var.vpc_cidr_block, 8, 1)
}

This looks similar to what we saw with CloudFormation. However, working with functions is significantly easier in HCL than with the intrinsic functions in CloudFormation.

Add the rest of the networking resources to main.tf:

resource "aws_internet_gateway" "this" {
  vpc_id = aws_vpc.this.id
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.this.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.this.id
  }
}

resource "aws_route_table_association" "public" {
  route_table_id = aws_route_table.public.id
  subnet_id      = aws_subnet.public.id
}

resource "aws_security_group" "this" {
  vpc_id = aws_vpc.this.id
}

resource "aws_vpc_security_group_ingress_rule" "ssh" {
  security_group_id = aws_security_group.this.id
  ip_protocol       = "tcp"
  from_port         = 22
  to_port           = 22
  cidr_ipv4         = "0.0.0.0/0"
}

With a few exceptions, there is almost a one-to-one mapping to what we saw with CloudFormation (e.g., we do not need to add a specific resource to associate the internet gateway with the VPC).

Step 3. Create the EC2 instance

Remember how we needed to know the AMI ID to be able to create an EC2 instance with CloudFormation? This is still true when we use Terraform. However, Terraform has the concept of data sources that allow us to ask for data from AWS.

We can use the aws_ami data source to do this:

data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["al2023-ami-2023.*-x86_64"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

This data source takes a number of arguments to filter out and find the AMI ID we want. In this case, we use three different filters along with the owners and most_recent arguments. The output of this data source will be a given AMI ID fulfilling these filters. The data source will work for whatever AWS region we are targeting without requiring any changes or hardcoded values.

Terraform has providers for a vast number of different systems. It can also work with your local filesystem. Of particular interest in our demo is the use of the TLS provider to create a public and private keypair that we can use to connect to our EC2 instance. 

Basically, we can ask Terraform to generate the key that we needed to generate ourselves when we worked with CloudFormation. We will also use the local provider to store the private key in a file.

To do this, add the following two providers to the list of required providers:

terraform {
  required_providers {
    # previous providers not shown

    local = {
      source  = "hashicorp/local"
      version = "2.5.1"
    }

    tls = {
      source  = "hashicorp/tls"
      version = "4.0.5"
    }
  }
}

Next, create a public and private keypair, store the private key in a local file, and add the public key to AWS:

resource "tls_private_key" "ssh" {
  algorithm = "RSA"
  rsa_bits  = 4096
}

resource "local_sensitive_file" "private_key" {
  filename        = "ec2-key"
  content         = tls_private_key.ssh.private_key_pem
  file_permission = "0400"
}

resource "aws_key_pair" "ec2" {
  key_name   = "ec2-key"
  public_key = tls_private_key.ssh.public_key_openssh
}

The last piece is the EC2 instance. Add the instance and reference the AMI ID from above and the public key we added to AWS:

resource "aws_instance" "this" {
  instance_type               = "t3.micro"
  ami                         = data.aws_ami.amazon_linux.image_id
  key_name                    = aws_key_pair.ec2.key_name
  associate_public_ip_address = true
  vpc_security_group_ids = [
    aws_security_group.this.id,
  ]
  subnet_id = aws_subnet.public.id
}
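The deployment step below reads the SSH connection command with terraform output, so we also add an output to main.tf, mirroring the output we created with CloudFormation:

output "ssh" {
  value = "ssh -i ec2-key ec2-user@${aws_instance.this.public_ip}"
}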

Step 4. Deploy the demo infrastructure

The Terraform workflow consists of a number of Terraform CLI commands. To initialize the Terraform configuration, run the terraform init command (some of the output has been removed for brevity):

$ terraform init
Initializing the backend...
Initializing provider plugins...
- Installing hashicorp/local v2.5.1...
- Installed hashicorp/local v2.5.1 (signed by HashiCorp)
- Installing hashicorp/tls v4.0.5...
- Installed hashicorp/tls v4.0.5 (signed by HashiCorp)
- Installing hashicorp/aws v5.64.0...
- Installed hashicorp/aws v5.64.0 (signed by HashiCorp)

Terraform has been successfully initialized!

The terraform init command downloads the provider binaries and initializes the Terraform state backend. The state backend is where you store your Terraform state. In this case, we use the local backend, which means the state file is a regular JSON file next to your Terraform files.

Next, we can issue the terraform plan command to have Terraform tell us what would happen if we apply the current configuration:

$ terraform plan -out=actions.tfplan
# … output truncated
Plan: 11 to add, 0 to change, 0 to destroy.

This command produces a lot of output, but it can be very useful if you are not sure of exactly what changes will take place. In this case, the infrastructure is brand new, so the plan is to add 11 resources.

If we are satisfied with what the plan is telling us, we can go ahead and apply it:

$ terraform apply actions.tfplan
# … output truncated
Apply complete! Resources: 11 added, 0 changed, 0 destroyed.

Terraform will get to work creating the resources that make up our infrastructure. This process takes a few minutes. Once the apply operation is complete, we can ask Terraform to output the SSH connection command for us:

$ terraform output -raw ssh
ssh -i ec2-key ec2-user@<your instance public IP>

If we run the SSH command, we end up on our EC2 instance:

$ ssh -i ec2-key ec2-user@<your instance public IP>
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
[ec2-user@ip-10-0-1-110 ~]$ echo "Hello Terraform!"
Hello Terraform!

Once you are happy with the outcome, exit from the EC2 instance using the exit Linux command and then destroy your infrastructure using the terraform destroy command:

$ terraform destroy -auto-approve
...
Destroy complete! Resources: 11 destroyed.

The destroy command takes a few minutes to complete.

Reflections on working with Terraform

There are clear similarities between Terraform and CloudFormation, but Terraform brings several improvements. The HCL language is, in general, easier to work with, as you get syntax highlighting and editor support (depending on which editor you are using).

 

Another small benefit is the number of built-in functions available in HCL. Resource-wise, working with Terraform and CloudFormation is about the same, with minor differences. However, you still need intimate knowledge of AWS cloud infrastructure to work with Terraform on AWS.

 

A major difference between Terraform and both CloudFormation and CDK is the state file. Terraform records the state of the world in the state file, while CloudFormation and CDK use the underlying CloudFormation stack as the state.

Infrastructure as Code on AWS with OpenTofu

OpenTofu appeared in 2023 as a fork of Terraform, responding to HashiCorp’s change of license from a fully open-source license to a business source license (BSL). It is backed by the Linux Foundation.

OpenTofu is fully compatible with Terraform configurations written for Terraform 1.5.x. This means that migrating from Terraform to OpenTofu is a simple process, most likely requiring more work in your CI/CD system than in your Terraform configuration.

Step 1. Set up OpenTofu

To get started with OpenTofu, you need to install it on your local system. The details of how to do this are outside the scope of this blog post, but you can find detailed instructions in the OpenTofu documentation.

Once the installation is complete, verify that OpenTofu works:

$ tofu -version
OpenTofu v1.8.1
on darwin_arm64

Create a new working directory for the OpenTofu demo and cd into it:

$ mkdir opentofu-demo && cd opentofu-demo

We could create a new empty file named main.tf and write our OpenTofu configuration in it. Since we are working with OpenTofu, which is compatible with Terraform configurations using version 1.5.x, we can simplify our lives by copying the Terraform configuration from the previous section. 

Reuse your main.tf file from the Terraform example without making any changes.

Step 2. Deploy the demo infrastructure

Deploying infrastructure using OpenTofu follows the same steps as Terraform, with the exception of using the tofu CLI instead of the terraform CLI. The first step is to run the tofu init command (the output is truncated):

$ tofu init

Initializing the backend...

Initializing provider plugins...
- Installed hashicorp/local v2.5.1 (signed, key ID 0C0AF313E5FD9F80)
- Installed hashicorp/tls v4.0.5 (signed, key ID 0C0AF313E5FD9F80)
- Installed hashicorp/aws v5.64.0 (signed, key ID 0C0AF313E5FD9F80)
...
OpenTofu has been successfully initialized!

Next, we create a plan to let OpenTofu tell us what would happen if we apply the infrastructure changes:

$ tofu plan -out=actions.tfplan
OpenTofu will perform the following actions:
Plan: 11 to add, 0 to change, 0 to destroy.

As expected, this is a new infrastructure, so OpenTofu will only create new resources without changing or destroying existing resources. If you are satisfied with the plan output you can go ahead and run the tofu apply command:

$ tofu apply actions.tfplan

OpenTofu creates the resources for you, and the whole process takes about a minute to complete. Once it is done, ask OpenTofu to output the SSH command to connect to the EC2 instance:

$ tofu output -raw ssh
ssh -i ec2-key ec2-user@<your instance public IP>

Connect to the instance to verify that the infrastructure is working as intended:

$ ssh -i ec2-key ec2-user@<your instance public IP>
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
[ec2-user@ip-10-0-1-229 ~]$ echo "Hello OpenTofu!"
Hello OpenTofu!

Reflections on working with OpenTofu

As is clear from the preceding walkthrough of OpenTofu, it works as a drop-in replacement for Terraform. We did not have to change anything in the Terraform configuration to make it compatible with OpenTofu. The only thing we did was switch the Terraform binary to the OpenTofu binary. This is akin to doing this:

 

$ alias tofu=terraform

 

OpenTofu has all the benefits of using Terraform, i.e., being a “CloudFormation for everything.”

 

If the BSL license of Terraform is problematic to you and your organization, consider using OpenTofu. This section illustrates the ease of migrating from one tool to the other.

 

Note that features appearing in newer versions of Terraform will not necessarily be available in OpenTofu. The reverse is also true.

Infrastructure as Code on AWS with Pulumi

Pulumi is the fifth and last tool we will cover in this blog post. 

Pulumi was created around 2017 and open-sourced in 2018. It looks similar to AWS CDK, but it has the advantage of using a provider architecture similar to Terraform’s. In a sense, Pulumi is like “AWS CDK for everything,” just like Terraform is “CloudFormation for everything.”

Step 1. Install Pulumi

To get started with Pulumi, you need to install it on your local system. The details of this are outside the scope of this blog post, but you can find detailed instructions in the Pulumi documentation. You will also need to sign up for a Pulumi cloud account and authenticate your Pulumi CLI.

Once Pulumi is installed, verify that it is working:

$ pulumi version
v3.130.0

You can write your Pulumi code in TypeScript, JavaScript, Python, Go, C#, Java, or a specific YAML variant. Depending on the language you choose, you might need additional prerequisites installed. In the following example, I will be using TypeScript, so I have TypeScript and Node.js installed.

Create a new directory for your Pulumi code and cd into it:

$ mkdir pulumi-demo && cd pulumi-demo

Use the pulumi new command to create a new skeleton Pulumi project. You pass the pulumi new command the name of a template to base your project on. There are hundreds of templates to choose from; in this case, we use the aws-typescript template:

$ pulumi new aws-typescript \
    --name pulumi-demo \
    --description "A sample AWS infrastructure" \
    --stack demo \
    --runtime-options packagemanager=npm \
    --config "aws:region"=eu-west-1

If you forget to add a required flag to the command, you will be prompted to provide the required value.

The skeleton aws-typescript project contains everything you need to get started. The important files are:

  • Pulumi.yaml contains configuration for your Pulumi project.
  • Pulumi.demo.yaml contains configuration specific to your stack. A Pulumi stack is similar to a CloudFormation stack. One Pulumi project can contain multiple stacks.
  • index.ts is the source code that defines the infrastructure of your stack.

Step 2. Create the network architecture

There is a specific AWS crosswalk (awsx) package containing convenience resource classes for common network architecture setups. We will use this package to create the VPC resource:

import * as pulumi from "@pulumi/pulumi";
import * as aws from "@pulumi/aws";
import * as awsx from "@pulumi/awsx";

const vpc = new awsx.ec2.Vpc("vpc", {
  cidrBlock: "10.0.0.0/16",
  numberOfAvailabilityZones: 1,
  natGateways: {
    strategy: awsx.ec2.NatGatewayStrategy.None,
  },
  subnetSpecs: [
    {
      type: "Public",
      size: 256,
    }
  ]
})

This is similar to how the AWS CDK class for a VPC worked. Since we specified that we want a public subnet in our VPC, we will automatically get an internet gateway and a route table with the default route.

To create the security group, we use the SecurityGroup class from the aws.ec2 package:

const securityGroup = new aws.ec2.SecurityGroup("securityGroup", {
  vpcId: vpc.vpcId,
  ingress: [
      {
          protocol: "tcp",
          fromPort: 22,
          toPort: 22,
          cidrBlocks: ["0.0.0.0/0"],
      },
  ],
})

We now have all the networking resources that are required.

Step 3. Create the EC2 instance

As in all the other examples, we must configure an SSH public and private keypair as well as an AMI ID to base the EC2 instance on.

We could generate a public and private keypair using any available TypeScript package for this purpose, but to simplify, we will generate a keypair using ssh-keygen and import it into our Pulumi project:

$ ssh-keygen -t rsa -b 4096 -f ./ec2-key

To read the public key into our Pulumi code, we will use two Node.js packages:

import * as fs from "fs"
import * as path from "path"

const publicKeyPath = path.join(__dirname, "ec2-key.pub")
const publicKey = fs.readFileSync(publicKeyPath, "utf-8").trim()

This shows the power of using Pulumi (or AWS CDK), where you can use the full Node.js ecosystem apart from the Pulumi-specific packages (similar ecosystems are available for the other programming languages).
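The EC2 instance we create below references a keyPair resource, so we register the public key with AWS using the aws.ec2.KeyPair resource (a short sketch; the logical name keypair matches what appears in the deployment preview later):

const keyPair = new aws.ec2.KeyPair("keypair", {
  publicKey: publicKey,
})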

Finding the AMI ID is similar to how we did it with Terraform:

const ami = aws.ec2.getAmi({
  mostRecent: true,
  owners: ["amazon"],
  filters: [
      {
          name: "architecture",
          values: ["x86_64"],
      },
      {
          name: "name",
          values: ["al2023-ami-2023.*-x86_64"],
      },
      {
        name: "virtualization-type",
        values: ["hvm"],
      }
  ],
})

We specify filters to find the exact AMI ID we are interested in.

With all the pieces needed for our EC2 instance, we can create it using the Instance class from the aws.ec2 package:

const instance = new aws.ec2.Instance("instance", {
  ami: ami.then(ami => ami.id),
  instanceType: aws.ec2.InstanceType.T3_Micro,
  subnetId: vpc.publicSubnetIds[0],
  keyName: keyPair.keyName,
  vpcSecurityGroupIds: [securityGroup.id]
})

Similar to how the AWS CDK works, we see that there are convenience classes for defining the EC2 instance type.
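We also export the SSH connection command as a stack output; this is what the pulumi stack output command in the next step reads. A sketch using pulumi.interpolate to build the string from the instance's public IP output:

export const sshCommand = pulumi.interpolate`ssh -i ec2-key ec2-user@${instance.publicIp}`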

Step 4. Deploy the demo infrastructure

Run the pulumi up command to get a preview of what changes will be applied (the output is truncated for brevity):

$ pulumi up
Previewing update (demo)
     Type                                          Name              Plan
 +   pulumi:pulumi:Stack                           pulumi-demo-demo  create
 +   ├─ aws:ec2:KeyPair                            keypair           create
 +   ├─ awsx:ec2:Vpc                               vpc               create
 +   │  └─ aws:ec2:Vpc                             vpc               create
 +   │     ├─ aws:ec2:Subnet                       vpc-public-1      create
 +   │     │  └─ aws:ec2:RouteTable                vpc-public-1      create
 +   │     │     ├─ aws:ec2:RouteTableAssociation  vpc-public-1      create
 +   │     │     └─ aws:ec2:Route                  vpc-public-1      create
 +   │     └─ aws:ec2:InternetGateway              vpc               create
 +   ├─ aws:ec2:SecurityGroup                      securityGroup     create
 +   └─ aws:ec2:Instance                           instance          create

Resources:
    + 11 to create

The output is similar to what you get from a terraform plan command. If you do not want to be prompted to accept or reject the changes, you can run pulumi up --yes to accept them automatically.

Answer yes to the prompt and wait for a few minutes while the infrastructure is created.

The output from the pulumi up command shows the SSH command needed to connect to the EC2 instance. You can also obtain the output with the following command:

$ pulumi stack output sshCommand
ssh -i ec2-key ec2-user@<your instance public IP>

Copy the command and run it in the Pulumi working directory:

$ ssh -i ec2-key ec2-user@<your instance public IP>
   ,     #_
   ~\_  ####_        Amazon Linux 2023
  ~~  \_#####\
  ~~     \###|
  ~~       \#/ ___   https://aws.amazon.com/linux/amazon-linux-2023
   ~~       V~' '->
    ~~~         /
      ~~._.   _/
         _/ _/
       _/m/'
[ec2-user@ip-10-0-0-135 ~]$ echo "Hello Pulumi!"
Hello Pulumi!

When you are satisfied with the outcome, make sure to delete the infrastructure to avoid unnecessary costs:

$ pulumi destroy --yes

The destruction takes a few minutes to complete. You should also remove the Pulumi stack:

$ pulumi stack rm --yes demo
Stack 'demo' has been removed!

Reflections on working with Pulumi

Once you get used to working with Pulumi, you can create infrastructure quickly and with confidence. This is especially true for developers who are used to working in code (e.g., TypeScript) instead of a configuration language (e.g., YAML).

 

The workflow is similar to AWS CDK, but with the important difference being that Pulumi is the engine orchestrating the creation of infrastructure instead of CloudFormation. You can use Pulumi to create infrastructure in multiple clouds, similar to how Terraform works.

IaC tools comparison, strengths, and weaknesses

First of all, there is a time and place for each of the tools we have seen. Personal preferences, your current environment, and your team’s experience and skills will all influence which tool is right for you and your organization.

The table below summarizes a few key technical differences between the five tools:

|                    | CloudFormation           | CDK                                            | Terraform              | OpenTofu               | Pulumi                                               |
|--------------------|--------------------------|------------------------------------------------|------------------------|------------------------|------------------------------------------------------|
| IaC engine         | CloudFormation           | CloudFormation                                 | Terraform              | OpenTofu               | Pulumi                                               |
| Execution mode (1) | Remote                   | Remote                                         | Local                  | Local                  | Local                                                |
| Type               | Declarative              | Imperative                                     | Declarative            | Declarative            | Imperative                                           |
| State mechanism    | CloudFormation stack(s)  | CloudFormation stack(s)                        | State file             | State file             | State file                                           |
| Language           | YAML, JSON               | JavaScript, TypeScript, Python, Go, .NET, Java | HCL, JSON              | HCL, JSON              | JavaScript, TypeScript, Python, Go, .NET, Java, YAML |
| Target providers   | AWS                      | AWS                                            | Any supported provider | Any supported provider | Any supported provider                               |
| License            | Proprietary              | Open Source + Proprietary                      | BSL                    | Open Source            | Open Source                                          |

(1) Execution mode refers to where the execution steps of creating the infrastructure take place. Remote execution takes place in a system you do not fully control. Local execution means you can run the tool standalone on your own system, although you could, of course, run it in a remote system as well.

Apart from technical differences, remember that there are also important differences in how much you need to know about the AWS platform. CloudFormation, Terraform, and OpenTofu all work at a lower level, where you must understand how different resources fit together.

Both AWS CDK and Pulumi try to help you with some of these details. No matter what tool you use, you will have an easier time if you understand the underlying AWS resources and their functionality.

Programming languages (JavaScript/TypeScript, Java, .NET, Go, Python) generally have much better support in the popular IDEs compared to YAML, JSON, and even HCL. If you are a developer working in a specific language (e.g. TypeScript) then you will most likely be more comfortable working with AWS CDK or Pulumi compared to CloudFormation, Terraform, or OpenTofu.

Finally, consider what license makes sense for your environment. If you want to, or are required to, use open-source tools, you should pick OpenTofu or Pulumi. The CDK is itself open source, but the underlying engine (CloudFormation) is not.

Best practices for implementing Infrastructure as Code on AWS

No matter what tool you are using for Infrastructure as Code on AWS, there are best practices that apply to all of them. Here are a few important things to keep in mind:

  • Store your infrastructure as code in a source code repository (e.g., GitHub). Make sure to have robust access management around your infrastructure repository.
  • Keep secrets, keys, and certificates out of your infrastructure code. Utilize secrets management solutions (e.g., AWS Secrets Manager, GitHub Actions secrets, etc). Rotate secrets regularly, or generate short-lived secrets on-demand where possible.
  • If you work in a team, you should set up a process for moving changes from implementation into production. All infrastructure updates should go through a review process (pair-programming or pull-request reviews) and be applied consistently through a CI/CD pipeline or other automation platform (e.g., Spacelift).
  • Test the infrastructure changes you are making, either using built-in testing support in your tool of choice (e.g., the OpenTofu test framework) or making the corresponding changes in a development or staging environment.
  • Set up monitoring and alerts for drift detection, either using the native support in AWS CloudFormation or another platform (e.g., Spacelift). If appropriate, use auto-remediation of detected drifts.
  • Build reusable modules and components for common architectural pieces in your AWS environment. Build high-level abstractions in AWS CDK or Pulumi. Build modules in Terraform and OpenTofu. Use nested templates in CloudFormation.
  • Apply policy-as-code to set a baseline for what changes are allowed in your AWS environment (e.g., OPA integration in Spacelift). Use policies to positively influence your AWS bill.
  • Restrict permissions to the bare minimum. This is true for the automation system applying changes to your infrastructure, as well as the AWS IAM permissions and network rules that you set up as part of your infrastructure.

There are more points that can be made, but following the practices listed above is a great start.

Enhance your IaC workflow with Spacelift

Spacelift is an infrastructure orchestration platform that increases your infrastructure deployment speed without sacrificing control.

With Spacelift, you can provision, configure, and govern with one or more automated workflows that orchestrate Terraform, OpenTofu, Terragrunt, Pulumi, CloudFormation, Ansible, and Kubernetes. 

You don’t need to define all the prerequisite steps for installing and configuring the infrastructure tool you are using, nor the deployment and security steps, as they are all available in the default workflow.

Spacelift offers a unique set of infrastructure orchestration capabilities, such as:

  • Policies (based on Open Policy Agent) — You can control how many approvals you need for runs, the kind of resources you can create, and the kind of parameters these resources can have, and you can also control the behavior when a pull request is open or merged.
  • Multi-IaC workflows — Combine Terraform with Kubernetes, Ansible, and other IaC tools such as OpenTofu, Pulumi, and CloudFormation, create dependencies among them, and share outputs.
  • Build self-service infrastructure — You can use Blueprints to build self-service infrastructure; simply complete a form to provision infrastructure based on Terraform and other supported tools.
  • Integrations with any third-party tools — You can integrate with your favorite third-party tools and even build policies for them. For example, you can integrate security tools in your workflows using Custom Inputs.
  • Drift detection and remediation

Spacelift enables you to create private workers inside your infrastructure, which helps you execute Spacelift-related workflows on your end. The documentation provides more information on configuring private workers.

If you want to learn more about what you can do with Spacelift, check out this article, create a free account today, or book a demo with one of our engineers.

Key points

In this blog post, we have seen how to use five different approaches to infrastructure as code on AWS.

  • AWS CloudFormation is the native tool for infrastructure as code on AWS. It is declarative, and you write CloudFormation templates using either JSON or YAML. It has been around for a long time and has evolved over time with additional features.
  • AWS CDK is an imperative approach to infrastructure as code. You write CDK code using a high-level programming language such as TypeScript. The output of CDK is a set of CloudFormation templates.
  • Terraform is a declarative tool for infrastructure as code. The AWS provider for Terraform is the most downloaded provider in the Terraform registry, indicative of how widespread its use is in the AWS community. Terraform configuration is written in either HCL or JSON.
  • OpenTofu is a drop-in replacement for Terraform. It sprang from the movement started after HashiCorp switched the license for its products from open-source to source-available. OpenTofu is declarative, written in HCL or JSON.
  • Pulumi is an open-source tool for infrastructure as code. You write Pulumi infrastructure in an imperative fashion in a language of your choice (JavaScript/TypeScript, Python, .NET, Java, Go). Pulumi can work with any target provider that is supported, not just AWS.

What tool is right for your situation depends on your current environment, personal preferences, experience, and skills. Be sure to review the technical differences and evaluate the different options.

Follow best practices for infrastructure as code on AWS to keep your AWS environment secure and cost-efficient.
