AWS EKS provides managed Kubernetes clusters as a service. If you’re on AWS and want to avoid setting up and operating a Kubernetes cluster from scratch, EKS is a natural choice.
In this guide, you will learn how to provision an AWS EKS Kubernetes cluster with Terraform. Let’s start with the basics.
AWS EKS (Elastic Kubernetes Service) is a managed service from AWS that makes Kubernetes cluster management easier to implement and scale. It provides a reliable and scalable platform to run Kubernetes workloads, allowing engineers to focus on building applications while AWS takes care of managing the underlying Kubernetes infrastructure.
AWS EKS clusters can be provisioned through different mechanisms: the UI, the CLI, CloudFormation, Pulumi, Crossplane, and others. If you are already using Terraform, it doesn’t make sense to add another tool for provisioning an AWS EKS cluster, because Terraform does an excellent job here.
If you are not yet using an IaC tool to manage your infrastructure, Terraform is one of the most popular options available, and its open-source fork, OpenTofu, could be even more appropriate for your use case.
Here are some of the benefits Terraform provides:
- Easy-to-learn, easy-to-read syntax based on the HashiCorp Configuration Language (HCL)
- Full deployment mechanism including deletion. With Terraform you can initialize, plan, apply, and even destroy your infrastructure resources without the hassle of understanding all the API requests required to get the resources, saving their IDs, and then running the desired commands.
- Relationship workflows. Resources can depend on other resources: for example, a route table may depend on a VPC, or a route table rule may depend on an internet gateway. Terraform resolves these dependencies automatically without you declaring them. Where no implicit dependency exists, you can still declare one explicitly if you need to.
- CI integrations. Terraform comes with formatting, validating, and testing mechanisms out of the box. You can also easily integrate it with third-party tools for building infrastructure policies your code should respect or that can inspect your code for security vulnerabilities.
- Conditional resource creation and easy looping mechanisms. Terraform allows you to create resources conditionally by using ternary operators. You also have the option to create multiple resources of the same type by leveraging count and for_each (see the sketch after this list).
- Fully customizable infrastructure. By using variables, you don’t have to hardcode your infrastructure. You can define your Terraform code, use variables throughout it, and then specify the variable values.
- Enhanced reusability. Use Terraform modules to encapsulate a configuration and reuse it wherever you want. They can be hosted in many ways and easily reused to accommodate the same templates (with different variables) for different environments or configurations.
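As a quick illustration of conditional creation, looping, and variables, here is a minimal, hypothetical sketch (the variable and bucket names are made up for this example and are not part of the EKS configuration below):

variable "create_bucket" {
  type    = bool
  default = true
}

variable "bucket_names" {
  type    = set(string)
  default = ["logs", "artifacts"]
}

# Created only when the variable is true, via a ternary on count
resource "aws_s3_bucket" "optional" {
  count  = var.create_bucket ? 1 : 0
  bucket = "example-optional-bucket"
}

# One bucket per entry in the set, via for_each
resource "aws_s3_bucket" "per_name" {
  for_each = var.bucket_names
  bucket   = "example-${each.key}"
}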
In this example, we will create the Amazon EKS cluster using a Terraform module that already has most of the required configuration prepared.
First, install Terraform locally:
brew install terraform
as well as the AWS CLI:
brew install awscli
and kubectl:
brew install kubernetes-cli
If you’re on a different operating system, please follow the official installation instructions for Terraform, the AWS CLI, and kubectl.
Now, you’ll need to configure your AWS CLI with access credentials to your AWS account. You can do this by running:
aws configure
and providing your Access Key ID and Secret Access Key. You will also need to set a default region; for the purposes of this guide, we will use eu-west-1. Terraform will later use these credentials to provision your AWS resources.
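The interactive prompts look roughly like this (the values shown are placeholders):

aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: eu-west-1
Default output format [None]: json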
For this example, we will use the terraform-aws-modules/eks/aws module. We’ve created this repository to leverage the module and build some of the prerequisites.
First, we will create a VPC and the network components AWS EKS requires to establish a network environment for our K8s cluster. To keep things really simple and cost-effective, this repository will create:
- 1 VPC
- 2 Public subnets
- 1 Internet gateway
- 1 Route table
- 1 Route table rule to the internet gateway
- 2 Route table associations for the two public subnets
data "aws_availability_zones" "available" {}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
tags = {
Name = "main-vpc-eks"
}
}
resource "aws_subnet" "public_subnet" {
count = 2
vpc_id = aws_vpc.main.id
cidr_block = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
availability_zone = data.aws_availability_zones.available.names[count.index]
map_public_ip_on_launch = true
tags = {
Name = "public-subnet-${count.index}"
}
}
resource "aws_internet_gateway" "main" {
vpc_id = aws_vpc.main.id
tags = {
Name = "main-igw"
}
}
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.main.id
}
tags = {
Name = "main-route-table"
}
}
resource "aws_route_table_association" "a" {
count = 2
subnet_id = aws_subnet.public_subnet.*.id[count.index]
route_table_id = aws_route_table.public.id
}
Now, we will use the public EKS module together with the network configuration above, leveraging one of the module’s examples:
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.31"
cluster_name = "example"
cluster_version = "1.31"
# Optional
cluster_endpoint_public_access = true
# Optional: Adds the current caller identity as an administrator via cluster access entry
enable_cluster_creator_admin_permissions = true
eks_managed_node_groups = {
example = {
instance_types = ["t3.medium"]
min_size = 1
max_size = 3
desired_size = 2
}
}
vpc_id = aws_vpc.main.id
subnet_ids = aws_subnet.public_subnet.*.id
tags = {
Environment = "dev"
Terraform = "true"
}
}
Other examples can be leveraged depending on your use case, so make sure to check out the module documentation.
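If you are assembling this configuration yourself instead of cloning the repository, you will also need a provider configuration. A minimal sketch (the version constraints here are illustrative; check the module documentation for its exact requirements):

terraform {
  required_version = ">= 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.0"
    }
  }
}

provider "aws" {
  region = "eu-west-1"
}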
Now that you have this configuration in place, you are ready to create all of the resources with Terraform. To do that, first initialize the working directory and download the modules and providers used:
terraform init
After the initialization, you can run a plan to see which resources will be created:
terraform plan
You will see there are 44 resources to create:
Plan: 44 to add, 0 to change, 0 to destroy.
Carefully inspect the plan, and then apply the code:
terraform apply
Plan: 44 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
After several minutes (EKS cluster creation typically takes a while), all the resources should be created:
Apply complete! Resources: 44 added, 0 changed, 0 destroyed.
At the configuration level, we also set up an output to make it easy to connect to your Kubernetes cluster.
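A minimal sketch of how such an output could be defined (assuming a var.region variable exists in the configuration; cluster_name is an output exposed by the EKS module):

output "eks_connect" {
  description = "Command for configuring kubectl access to the cluster"
  value       = "aws eks --region ${var.region} update-kubeconfig --name ${module.eks.cluster_name}"
}

After the apply, Terraform prints the rendered command: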
Outputs:
eks_connect = "aws eks --region eu-west-1 update-kubeconfig --name example"
To leverage kubectl, you will first need to run the above command and connect to your cluster:
aws eks --region eu-west-1 update-kubeconfig --name example
Added new context arn:aws:eks:eu-west-1:012321433432232:cluster/example to /Users/dsadas/.kube/config
Now, let’s check that kubectl is pointed at the correct cluster:
kubectl config current-context
arn:aws:eks:eu-west-1:012321433432232:cluster/example
You can view your cluster’s nodes by running the following command:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
ip-10-0-0-32.eu-west-1.compute.internal Ready <none> 97s v1.31.4-eks-aeac579
ip-10-0-1-117.eu-west-1.compute.internal Ready <none> 96s v1.31.4-eks-aeac579
To get more specific details about them, you can use the -o custom-columns option:
kubectl get nodes -o custom-columns=Name:.metadata.name,nCPU:.status.capacity.cpu,Memory:.status.capacity.memory
Name nCPU Memory
ip-10-0-0-32.eu-west-1.compute.internal 2 3919544Ki
ip-10-0-1-117.eu-west-1.compute.internal 2 3919552Ki
With the command above, we can see how many CPUs each node has and how much memory is available.
Let’s deploy an Nginx instance to see if the cluster is working correctly:
kubectl run --port 80 --image nginx nginx
pod/nginx created
You can check its status by running:
kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 33s
Now, let’s set up a tunnel from your computer to this pod:
kubectl port-forward nginx 3000:80
Forwarding from 127.0.0.1:3000 -> 80
Forwarding from [::1]:3000 -> 80
If you open http://localhost:3000 in your browser, you should see the default Nginx welcome page.
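You can also verify from a second terminal while the port-forward is still running; the page title of the default Nginx page should confirm the server is responding:

curl -s http://localhost:3000 | grep title
<title>Welcome to nginx!</title>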
To destroy the resources we’ve created in this session, run
terraform destroy
This may take a few minutes. You can read more about it here: How to Delete Resources from Terraform Using the Destroy Command.
For more help managing your Terraform state file, building more complex workflows based on Terraform, and creating self-service infrastructure, check out Spacelift.
Spacelift takes managing Terraform to the next level by giving you access to a powerful CI/CD workflow and unlocking features such as:
- Policies (based on Open Policy Agent) – Control how many approvals you need for runs, what kinds of resources you can create, and what parameters these resources can have; you can also control the behavior when a pull request is opened or merged.
- Multi-IaC workflows – Combine Terraform with Kubernetes, Ansible, and other IaC tools such as OpenTofu, Pulumi, and CloudFormation, create dependencies among them, and share outputs.
- Build self-service infrastructure – You can use Blueprints to build self-service infrastructure; simply complete a form to provision infrastructure based on Terraform and other supported tools.
- Integrations with any third-party tools – You can integrate with your favorite third-party tools and even build policies for them. For example, see how to Integrate security tools in your workflows using Custom Inputs.
Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.
Discover a better way to manage Terraform
Spacelift helps you manage Terraform state and build more complex workflows, and it supports policy as code, programmatic configuration, context sharing, drift detection, resource visualization, and much more.