AWS EKS (Elastic Kubernetes Service) is the go-to managed Kubernetes option on AWS, but getting a cluster up and running involves a surprising number of moving parts: VPCs, subnets, IAM roles, node groups, add-ons, and more.
You can provision all of this through the AWS Console, eksctl, or CloudFormation, but if Terraform is already part of your stack, it makes far more sense to keep everything in one place.
In this guide, you’ll learn how to provision a fully functional AWS EKS cluster with Terraform using the battle-tested terraform-aws-modules/eks/aws community module. We’ll cover:
- Setting up the VPC and network prerequisites
- Configuring managed node groups
- Running terraform init, plan, and apply
- Connecting to your cluster with kubectl and verifying it's working
What is AWS EKS?
AWS EKS (Elastic Kubernetes Service) is a managed service from AWS that makes Kubernetes cluster management easier to implement and scale. It provides a reliable and scalable platform to run Kubernetes workloads, allowing engineers to focus on building applications while AWS takes care of managing the underlying Kubernetes infrastructure.
Why should you use Terraform with AWS EKS?
AWS EKS clusters can be provisioned through different mechanisms: UI, CLI, CloudFormation, Pulumi, Crossplane, and others. If you are already using Terraform, there is little reason to add another tool just to provision an AWS EKS cluster, because Terraform handles the job well.
At the same time, if you are not yet using an IaC tool to manage your infrastructure, Terraform is one of the most popular options available. Alternatively, its open-source fork, OpenTofu, may be even more appropriate for your use case.
Here are some of the benefits Terraform provides:
- Easy-to-learn, readable syntax based on the HashiCorp Configuration Language (HCL)
- Full deployment lifecycle, including deletion. With Terraform, you can initialize, plan, apply, and even destroy your infrastructure without the hassle of understanding all the API requests required to fetch resources, save their IDs, and then run the right commands in the right order.
- Dependency handling. Resources can depend on other resources: for example, a route table may depend on a VPC, or a route may depend on an internet gateway. Terraform resolves these dependencies automatically from references in your code, and where an ordering is not implied by a reference, you can declare it explicitly with depends_on.
- CI integrations. Terraform comes with formatting, validating, and testing mechanisms out of the box. You can also easily integrate it with third-party tools for building infrastructure policies your code should respect or that can inspect your code for security vulnerabilities.
- Conditionally create resources and easy looping mechanisms. Terraform allows you to create resources conditionally by using ternary operators. You also have the option to create multiple resources of the same type by leveraging count and for_each.
- Fully customizable infrastructure. By using variables, you don’t have to hardcode your infrastructure. You can define your Terraform code, use variables throughout it, and then specify the variable values.
- Enhanced reusability. Use Terraform modules to encapsulate a configuration and reuse it wherever you want. They can be hosted in many ways and easily reused to accommodate the same templates (with different variables) for different environments or configurations.
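To illustrate the last few points, here is a hypothetical sketch combining a conditional count, a for_each loop, and variables (the resource names and values are made up for this example, not taken from the guide's repository):

```hcl
variable "create_bucket" {
  type    = bool
  default = true
}

variable "team_names" {
  type    = set(string)
  default = ["payments", "search"]
}

# Conditionally create a resource with a ternary in count:
# 1 instance if the flag is true, 0 otherwise
resource "aws_s3_bucket" "logs" {
  count  = var.create_bucket ? 1 : 0
  bucket = "example-logs-bucket"
}

# Create one resource per element of a set with for_each
resource "aws_iam_user" "team" {
  for_each = var.team_names
  name     = "svc-${each.key}"
}
```

Because the bucket is created with count, other resources reference it as aws_s3_bucket.logs[0], while the for_each users are addressed by key, e.g. aws_iam_user.team["payments"].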
How to provision an AWS EKS cluster with Terraform
In this example, we will create an Amazon EKS cluster using a Terraform module that already includes most of the required configuration.
Step 1 - Install Terraform locally
First, install Terraform locally:

```bash
brew install terraform
```

as well as the AWS CLI:

```bash
brew install awscli
```

and kubectl:

```bash
brew install kubernetes-cli
```

If you're on a different operating system, you can find the respective installation instructions in each tool's official documentation.
Step 2 - Configure the AWS CLI
Now, you’ll need to configure your AWS CLI with access credentials to your AWS account. You can do this by running:
```bash
aws configure
```

and providing your Access Key ID and Secret Access Key. You will also need to add the region. For the purposes of this guide, we will use eu-west-1. Terraform will later use these credentials to provision your AWS resources.
Step 3 - Get the code
For this example, we will use the terraform-aws-modules/eks/aws module. We’ve created this repository to leverage the module and build some of the prerequisites.
We will initially create a VPC and its required network components for AWS EKS to establish a network environment for our K8s cluster. To keep things really simple and cost-effective, this repository will create:
- 1 VPC
- 2 Public subnets
- 1 Internet gateway
- 1 Route table
- 1 Route table rule to the internet gateway
- 2 Route table associations for the two public subnets
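The resources below assume an AWS provider configuration is already in place. A minimal sketch (the version constraint and region here are assumptions for illustration, not taken from the repository):

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Region should match the one you configured with `aws configure`
provider "aws" {
  region = "eu-west-1"
}
```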
```hcl
data "aws_availability_zones" "available" {}

resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "main-vpc-eks"
  }
}

resource "aws_subnet" "public_subnet" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "public-subnet-${count.index}"
  }
}

resource "aws_internet_gateway" "main" {
  vpc_id = aws_vpc.main.id

  tags = {
    Name = "main-igw"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.main.id
  }

  tags = {
    Name = "main-route-table"
  }
}

resource "aws_route_table_association" "a" {
  count          = 2
  subnet_id      = aws_subnet.public_subnet[count.index].id
  route_table_id = aws_route_table.public.id
}
```

Now, we will take the public EKS module and use the network configuration from above, leveraging one of the module's examples:
```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.31"

  cluster_name    = "example"
  cluster_version = "1.31"

  # Optional
  cluster_endpoint_public_access = true

  # Optional: Adds the current caller identity as an administrator via cluster access entry
  enable_cluster_creator_admin_permissions = true

  eks_managed_node_groups = {
    example = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }

  vpc_id     = aws_vpc.main.id
  subnet_ids = aws_subnet.public_subnet[*].id

  tags = {
    Environment = "dev"
    Terraform   = "true"
  }
}
```

Other examples can be leveraged depending on your use case, so make sure to check out the module documentation.
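The guide later relies on an eks_connect output to print the kubeconfig command. A sketch of how such an output could be defined alongside the module (the exact form in the repository may differ):

```hcl
output "eks_connect" {
  description = "Command to add the new cluster to your local kubeconfig"
  value       = "aws eks --region eu-west-1 update-kubeconfig --name ${module.eks.cluster_name}"
}
```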
Step 4 - Run Terraform
Now that you have this configuration in place, you are ready to create all of the resources with Terraform. First, initialize the working directory and download the modules and providers used:

```bash
terraform init
```

After the initialization, you can run a plan to see which resources will be created:

```bash
terraform plan
```

You will see there are 44 resources to create:

```
Plan: 44 to add, 0 to change, 0 to destroy.
```

Carefully inspect the plan, and then apply the code:

```bash
terraform apply

Plan: 44 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes
```

After a couple of minutes, all the resources should be created:

```
Apply complete! Resources: 44 added, 0 changed, 0 destroyed.
```

Step 5 - Connect with kubectl
At the configuration level, we also set up an output to make it easy to connect to your Kubernetes cluster:

```
Outputs:

eks_connect = "aws eks --region eu-west-1 update-kubeconfig --name example"
```

To use kubectl, you will first need to run that command to add the cluster to your kubeconfig:

```bash
aws eks --region eu-west-1 update-kubeconfig --name example

Added new context arn:aws:eks:eu-west-1:012321433432232:cluster/example to /Users/dsadas/.kube/config
```

Now, let's check that kubectl is pointed at the right cluster:

```bash
kubectl config current-context

arn:aws:eks:eu-west-1:012321433432232:cluster/example
```

Step 6 - Interact with your cluster
You can view your cluster’s nodes by running the following command:
```bash
kubectl get nodes

NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-0-32.eu-west-1.compute.internal    Ready    <none>   97s   v1.31.4-eks-aeac579
ip-10-0-1-117.eu-west-1.compute.internal   Ready    <none>   96s   v1.31.4-eks-aeac579
```

To get more specific details about them, you can use the -o custom-columns option:

```bash
kubectl get nodes -o custom-columns=Name:.metadata.name,nCPU:.status.capacity.cpu,Memory:.status.capacity.memory

Name                                       nCPU   Memory
ip-10-0-0-32.eu-west-1.compute.internal    2      3919544Ki
ip-10-0-1-117.eu-west-1.compute.internal   2      3919552Ki
```

This output shows the number of CPUs each node has and its total memory capacity.
Let’s deploy an Nginx instance to see if the cluster is working correctly:
```bash
kubectl run --port 80 --image nginx nginx

pod/nginx created
```

You can check its status by running:

```bash
kubectl get pods

NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          33s
```

Now, let's set up a tunnel from your computer to this pod:

```bash
kubectl port-forward nginx 3000:80

Forwarding from 127.0.0.1:3000 -> 80
Forwarding from [::1]:3000 -> 80
```
Step 7 - Clean up
To destroy the resources we've created in this session, run:

```bash
terraform destroy
```

This may take a few minutes. You can read more about it here: How to Delete Resources from Terraform Using the Destroy Command.
Key points
In this guide, we provisioned a working AWS EKS cluster with Terraform using the terraform-aws-modules/eks/aws module, which handles the heavy lifting around node group IAM roles, launch templates, and cluster access entries. Keep in mind that this setup, with public subnets and a public cluster endpoint, is designed for learning.
For production, you’ll want private subnets with a NAT gateway, a restricted or private cluster endpoint, and IAM Roles for Service Accounts (IRSA) to scope AWS permissions per workload.
From here, the natural next steps include managing EKS add-ons (CoreDNS, VPC CNI, EBS CSI) as aws_eks_addon resources, adding Karpenter or the Cluster Autoscaler for dynamic node scaling, and using Spacelift to manage the full Terraform workflow across environments with policy-as-code, drift detection, and CI/CD built in.
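As one example of that next step, managing an add-on such as the EBS CSI driver explicitly might look like the sketch below (this is not part of the guide's repository; in practice you would also pin an addon_version returned by aws eks describe-addon-versions):

```hcl
resource "aws_eks_addon" "ebs_csi" {
  cluster_name = module.eks.cluster_name
  addon_name   = "aws-ebs-csi-driver"

  # On updates, resolve config conflicts in favor of the managed add-on
  resolve_conflicts_on_update = "OVERWRITE"
}
```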
Spacelift takes managing Terraform to the next level by giving you access to a powerful CI/CD workflow and unlocking features such as:
- Policies (based on Open Policy Agent) – Control how many approvals a run needs, which kinds of resources can be created and with which parameters, and what happens when a pull request is opened or merged.
- Multi-IaC workflows – Combine Terraform with Kubernetes, Ansible, and other IaC tools such as OpenTofu, Pulumi, and CloudFormation, create dependencies among them, and share outputs.
- Build self-service infrastructure – You can use Templates and Blueprints to build self-service infrastructure; simply complete a form to provision infrastructure based on Terraform and other supported tools.
- AI-powered provisioning and diagnostics – Spacelift Intelligence adds an AI-powered layer for natural language provisioning, diagnostics, and operational insight across your infrastructure workflows.
- Integrations with any third-party tools – You can integrate with your favorite third-party tools and even build policies for them. For example, see how to integrate security tools in your workflows using Custom Inputs.
Discover a better way to manage Terraform
Spacelift orchestrates your Terraform workflows end to end, including state management, policy as code, drift detection, resource visualization, context sharing, programmatic configuration, and support for complex, multi-step workflows.
Frequently asked questions
How long does it take to create an EKS cluster with Terraform?
Creating an EKS cluster with Terraform typically takes 15 to 20 minutes, depending on configuration complexity and AWS region performance. The control plane provisioning alone accounts for about 10 to 15 minutes, while node groups, networking resources (VPC, subnets, security groups), and IAM roles add several more minutes. Using modules like terraform-aws-eks can streamline the process but does not significantly reduce provisioning time.
What's the difference between EKS managed node groups and self-managed?
EKS managed node groups have AWS handle the provisioning, lifecycle, and updates of EC2 worker nodes through the EKS API, including automated AMI patching and graceful draining during upgrades. Self-managed nodes give you full control over the EC2 instances, AMIs, and scaling configuration, but you handle patching, upgrades, and scaling logic yourself.
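In the module used in this guide, that choice surfaces as two different configuration keys. A hedged sketch (argument names follow the terraform-aws-modules/eks module's conventions and may vary between module versions):

```hcl
module "eks" {
  # ... same source, version, cluster, and network settings as above ...

  # AWS handles AMI updates and graceful draining during upgrades
  eks_managed_node_groups = {
    managed = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }

  # You own the AMI, patching, and upgrade logic for these nodes
  self_managed_node_groups = {
    diy = {
      instance_type = "t3.medium"
      min_size      = 1
      max_size      = 3
      desired_size  = 2
    }
  }
}
```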
