AWS EKS provides managed Kubernetes clusters as a service. You’re on AWS and want to avoid getting into the details of setting up a Kubernetes cluster from scratch? EKS is the way to go!
In this guide, you will learn how to provision an AWS EKS Kubernetes cluster with Terraform. Let’s start with the basics.
AWS EKS (Elastic Kubernetes Service) is a managed service from AWS that makes Kubernetes cluster management easier to implement and scale. It provides a reliable and scalable platform to run Kubernetes workloads, allowing engineers to focus on building applications while AWS takes care of managing the underlying Kubernetes infrastructure.
Using Terraform with AWS EKS provides many benefits, from streamlining provisioning to configuring and managing your Kubernetes clusters. Managing the lifecycle of a service through Infrastructure as Code is usually a very good idea: you will configure that service faster while minimizing the potential for human error.
First, install Terraform locally:
brew install terraform
as well as the AWS CLI:
brew install awscli
and kubectl:
brew install kubernetes-cli
If you’re on a different operating system, please find the respective installation instructions in the official documentation for Terraform, the AWS CLI, and kubectl.
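Afterward, you can verify that all three tools are available by checking their versions (the exact version numbers will differ):
terraform version
aws --version
kubectl version --client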
Now, you’ll need to configure your AWS CLI with access credentials to your AWS account. You can do this by running
aws configure
and providing your Access Key ID and Secret Access Key. You will also need to add the region. For the purposes of this guide, we will use us-east-2. Terraform will later use these credentials to provision your AWS resources.
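The command prompts you interactively; the session looks roughly like this (the values below are placeholders):
> aws configure
AWS Access Key ID [None]: <your-access-key-id>
AWS Secret Access Key [None]: <your-secret-access-key>
Default region name [None]: us-east-2
Default output format [None]: json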
You can now clone a repository which contains everything you need to set up EKS:
git clone https://github.com/hashicorp/learn-terraform-provision-eks-cluster/
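Then change into the repository directory, where all subsequent commands should be run:
cd learn-terraform-provision-eks-cluster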
Inside you’ll see a few files, the main one being eks-cluster.tf:
module "eks" {
source = "terraform-aws-modules/eks/aws"
cluster_name = local.cluster_name
cluster_version = "1.20"
subnets = module.vpc.private_subnets
tags = {
Environment = "training"
GithubRepo = "terraform-aws-eks"
GithubOrg = "terraform-aws-modules"
}
vpc_id = module.vpc.vpc_id
workers_group_defaults = {
root_volume_type = "gp2"
}
worker_groups = [
{
name = "worker-group-1"
instance_type = "t2.small"
additional_userdata = "echo foo bar"
asg_desired_capacity = 2
additional_security_group_ids = [aws_security_group.worker_group_mgmt_one.id]
},
{
name = "worker-group-2"
instance_type = "t2.medium"
additional_userdata = "echo foo bar"
additional_security_group_ids = [aws_security_group.worker_group_mgmt_two.id]
asg_desired_capacity = 1
},
]
}
It uses the EKS Terraform module to set up an EKS cluster with two worker groups (the actual nodes running your workloads): one with two small instances, and one with a single medium instance.
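The module references local.cluster_name, which is defined in another file of the same repository. The local appends a random suffix so that repeated runs don’t collide on the cluster name; it looks roughly like this (a sketch only; the exact name prefix in your checkout may differ):
# Hypothetical sketch of the repository's cluster name definition;
# check the other .tf files in your checkout for the actual values.
resource "random_string" "suffix" {
  length  = 8
  special = false
}

locals {
  cluster_name = "training-eks-${random_string.suffix.result}"
}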
You can now create all of those resources using Terraform. First, run
terraform init -upgrade
to initialize the Terraform workspace and download any modules and providers which are used.
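If initialization succeeds, the output ends with the message:
Terraform has been successfully initialized!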
In order to do a dry run of the changes to be made, run
terraform plan -out terraform.plan
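The tail of the plan output summarizes the changes and points at the saved plan file, roughly like this:
Plan: 51 to add, 0 to change, 0 to destroy.

This plan was saved to: terraform.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "terraform.plan"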
As the summary shows, 51 resources will be added, and the full output lists their relevant details. You can then run terraform apply with the resulting plan, in order to actually provision the resources:
terraform apply terraform.plan
This may take a few minutes to finish. You might get a “timed out” error, in which case just repeat both the terraform plan and terraform apply steps.
In the end you will get a list of outputs with their respective values printed out. Make note of your cluster_name.
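If you need any of these values again later, you can reprint them without re-applying; for example, for the cluster name (the output is named cluster_name in this repository):
terraform output cluster_name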
In order to use kubectl, the main tool for interacting with Kubernetes, you have to give it credentials for your EKS cluster. You can do this by running
aws eks --region us-east-2 update-kubeconfig --name <output.cluster_name>
Make sure to replace <output.cluster_name> with the relevant value from your Terraform apply outputs.
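If you’d rather not copy the value by hand, you can pass the Terraform output directly to the AWS CLI (the -raw flag requires Terraform 0.14 or newer):
aws eks --region us-east-2 update-kubeconfig --name $(terraform output -raw cluster_name)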
You can now view the nodes of your cluster by running
> kubectl get nodes -o custom-columns=Name:.metadata.name,nCPU:.status.capacity.cpu,Memory:.status.capacity.memory
Name                                       nCPU   Memory
ip-10-0-1-23.us-east-2.compute.internal    2      4026680Ki
ip-10-0-2-8.us-east-2.compute.internal     1      2031268Ki
ip-10-0-3-128.us-east-2.compute.internal   1      2031268Ki
The command is long because it defines custom columns, and those columns confirm the cluster layout: two smaller nodes (the t2.small instances, with 1 vCPU and 2 GiB of memory each) and one bigger node (the t2.medium, with 2 vCPU and 4 GiB).
Let’s deploy an Nginx instance to see if the cluster is working correctly.
kubectl run nginx --image=nginx --port=80
You can see the status of it by running:
> kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          2m46s
And finally set up a tunnel from your computer to this pod:
kubectl port-forward nginx 3000:80
If you open http://localhost:3000 in your browser, you should see the web server greet you with the default “Welcome to nginx!” page.
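You can also verify it from a second terminal while the port-forward is still running:
> curl -s http://localhost:3000 | grep title
<title>Welcome to nginx!</title>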
In order to destroy the resources we’ve created in this session, run
terraform destroy
This may again take up to a few minutes. You can read more about it here: How to Delete Resources from Terraform Using the Destroy Command.
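Note that terraform destroy first prints everything it is about to delete and asks for confirmation. In non-interactive contexts (such as CI), you can skip the prompt:
terraform destroy -auto-approve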
I hope this guide helped you on your Kubernetes journey on AWS! If you want more help managing your Terraform state file, building more complex workflows based on Terraform, and creating self-service infrastructure, check out Spacelift. We’d love to have you!
Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open source. OpenTofu is an open-source fork of Terraform (forked from version 1.5.6) that expands on Terraform’s existing concepts and offerings, and is a viable alternative to HashiCorp’s Terraform.
Discover a better way to manage Terraform
Spacelift helps you manage Terraform state and build more complex workflows, and supports policy as code, programmatic configuration, context sharing, drift detection, resource visualization, and much more.