
Terraform vs. Kubernetes: Key Differences and Comparison


This article compares two of the most dominant tools in the cloud infrastructure space: Terraform and Kubernetes. The two tools share some similarities but are built to serve different purposes. Terraform focuses on infrastructure provisioning and operates in the infrastructure as code (IaC) space, while Kubernetes focuses on running container workloads and operates in the container orchestration space.

We will briefly take a look at each one of them and discuss their similarities and differences.

To learn more about these two foundational cloud infrastructure technologies, check out the many tutorials on Spacelift’s blog covering Kubernetes and Terraform.

What is Terraform?

Terraform is a tool that allows us to safely and predictably manage infrastructure at scale using cloud-agnostic, infrastructure-as-code principles. Developed by HashiCorp, it enables infrastructure provisioning both in the cloud and on-premises.

Terraform configurations are written in a declarative language, the HashiCorp Configuration Language (HCL), which facilitates the automation of infrastructure management in any environment. Terraform allows IT professionals to collaborate, perform changes safely in cloud environments, and scale them on demand according to business needs.

Modules provide excellent reusability and code-sharing opportunities to boost the collaboration and productivity of teams operating on the cloud. Providers are plugins that offer integration and interaction with different APIs and are one of the main ways to extend Terraform’s functionality. 
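
As a brief illustration, here is a minimal sketch of a provider block and a module call in HCL; the module source, version, and inputs are hypothetical placeholders:

provider "aws" {
  region = "eu-west-1"
}

# Reuse a community VPC module instead of writing every resource by hand
module "network" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = "demo-vpc"
  cidr = "10.0.0.0/16"
}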

Terraform keeps an internal state of the managed infrastructure, which represents resources, configuration, metadata, and their relationships. The state is actively maintained by Terraform and used to create plans, track changes, and enable modifications of infrastructure environments. As a best practice, the state should be stored remotely to allow teamwork and collaboration.
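
As a hedged example, storing the state remotely takes only a few lines of backend configuration; the bucket, key, and region below are placeholders you would replace with your own:

terraform {
  # Keep the state file in a shared location so the whole team works against the same state
  backend "s3" {
    bucket = "my-terraform-state-bucket"
    key    = "prod/network/terraform.tfstate"
    region = "eu-west-1"
  }
}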

The core Terraform workflow consists of three stages. First, we write the infrastructure as code configuration files that describe our environment’s desired state. Next, we review the output of the plan generated from our configuration. After carefully reviewing the changes, we apply the plan to provision the infrastructure resources.
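
In practice, the three stages map onto a handful of CLI commands, roughly like this:

terraform init                # download providers and configure the backend
terraform plan -out=tfplan    # preview the changes against the current state
terraform apply tfplan        # apply the reviewed plan to provision resources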

terraform vs kubernetes - terraform workflow

Note: New versions of Terraform are released under the Business Source License (BUSL), but everything released up to and including version 1.5.x remains open source. OpenTofu is an open-source fork of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, having been forked from Terraform version 1.5.6.

What is Kubernetes?

Kubernetes (K8s) is an open-source container orchestration system that automates the deployment, scaling, and management of containerized applications. Its powerful orchestration system enables applications to scale seamlessly and achieve high availability. It was originally designed and developed by Google, which drew on its vast experience in running and maintaining critical workloads in production, and is now maintained by the Cloud Native Computing Foundation (CNCF).

Kubernetes strives to be cloud agnostic at its core, providing great flexibility in running workloads across cloud and on-premises environments. Additionally, it is designed with extensibility in mind, providing the option to add features and custom tooling to clusters easily.  

One of its main benefits is the self-healing capabilities it provides. Containers that fail are automatically restarted and rescheduled, nodes can be configured to be automatically replaced, and traffic is served only by healthy components based on health checks.  

Rollouts are handled progressively, and Kubernetes provides smart mechanisms to monitor application health during deployments. If the new pods do not report a healthy status after a deployment, the rollout is halted, and the problematic change can be rolled back with a single command. Keeping the application running while rolling out new software versions has been a hot topic in the Kubernetes ecosystem over the past years, with many possible deployment strategies.
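
As a minimal sketch, a Deployment manifest combining a rolling update strategy with a readiness probe could look like the following; the application name, image, and probe path are illustrative placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # keep most replicas serving traffic during the rollout
      maxSurge: 1
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: nginx:1.25
          ports:
            - containerPort: 80
          readinessProbe:   # only send traffic to pods that pass this check
            httpGet:
              path: /
              port: 80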

Kubernetes handles service discovery and load-balances traffic between similar pods natively, without the need for complex external solutions. It has extensible built-in mechanisms to manage configuration and secrets for your applications. Scaling your applications has never been easier: Kubernetes provides autoscaling options as well as scaling through commands or a UI.
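
For example, scaling can be performed imperatively with kubectl or configured to happen automatically; the deployment name below is the hypothetical one from the sketch above:

kubectl scale deployment demo-app --replicas=5                             # scale to a fixed number of replicas
kubectl autoscale deployment demo-app --min=3 --max=10 --cpu-percent=70    # autoscale based on CPU usage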

Kubernetes provides a cluster of nodes, a group of worker machines that run containerized applications. Each node hosts pods that hold application workload containers. The brain of the whole system is the control plane. Each cluster consists of several components that manage the worker nodes and pods and guarantee operational continuity. 

The main control plane and node components are:

  • API server – exposes the Kubernetes API and operates as the front end of the control plane, handling all communication between the other components.
  • etcd – stores all cluster data and state.
  • Scheduler – decides how pods are assigned to nodes and takes all workload scheduling decisions.
  • Controller manager – runs the different controller processes that ensure the cluster’s desired state matches its current state.
  • Cloud controller manager – integrates Kubernetes clusters with external cloud providers, embeds their logic, and links the Kubernetes API with the cloud provider’s API.
  • kubelet – the agent on each node responsible for running containers in pods.
  • kube-proxy – the component on each node that provides the networking capabilities for communication between pods and nodes.
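
On a cluster you manage yourself, you can see most of these components running as pods in the kube-system namespace (managed Kubernetes services hide some of them, and the kubelet runs as a node daemon rather than a pod), for example:

kubectl get nodes
kubectl get pods -n kube-system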

terraform vs kubernetes - kubernetes cluster

Check out this article to learn more about the Key Kubernetes Cluster Components.   

Differences between Terraform and Kubernetes

These two modern technologies have many similarities but also fundamental differences. Let’s look into some of them in more detail.

1) Area of Focus

First and foremost, Terraform and Kubernetes have different purposes and try to solve different problems. Terraform focuses on provisioning infrastructure components and targets the Infrastructure as Code space. Kubernetes aims to enable us to run container workloads and targets the container orchestration space. 

2) Configuration Language and CLI

Terraform configurations are written in HCL, while Kubernetes manifests are written in YAML or JSON. Each tool has its own command-line utility and tool-specific internals to understand before becoming productive.
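
To illustrate the difference, here is the same idea of a declarative resource definition in each language, using hypothetical names: an S3 bucket declared in Terraform’s HCL and a namespace declared in a Kubernetes YAML manifest.

# Terraform (HCL)
resource "aws_s3_bucket" "example" {
  bucket = "hcl-example-bucket"
}

# Kubernetes (YAML)
apiVersion: v1
kind: Namespace
metadata:
  name: example-namespace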

3) Tool workflow

The Terraform workflow is generally considered easy to understand and provides a welcoming experience for new users. On the other hand, to run applications effectively in Kubernetes, one has to understand many cluster-internal components and mechanics, so it usually takes users more time to get up to speed with Kubernetes.

4) Configuration Drift & Planning Phase

Terraform provides a native way to detect and inform you about configuration drift and unwanted changes by leveraging the planning phase of the typical workflow. In contrast, Kubernetes doesn’t support this functionality out of the box.
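
For example, a scheduled job can run terraform plan against the live environment to surface drift; the -detailed-exitcode flag makes the command return exit code 2 when changes, including drift, are detected:

terraform plan -detailed-exitcode   # exit code 0: no changes, 1: error, 2: changes detected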

terraform vs kubernetes

Kubernetes vs Terraform: Similarities

1) DevOps Tools

Both tools operate in the DevOps space and are typically set up and configured by the same type of IT practitioners: Site Reliability, DevOps, and cloud engineers.

2) Open Source & Cloud Agnostic

Both tools have open-source roots and receive contributions from large online communities (see the note above on Terraform’s license change and OpenTofu). They also take a similar approach in striving to be as cloud-, platform-, and API-agnostic as possible to accommodate workloads across different environments. Even though they try to keep the core of the projects agnostic to external providers, both tools have mature and actively maintained integrations with the most common cloud providers.

3) Declarative Configuration

Although they use different languages, Terraform and Kubernetes take a similar approach conceptually to define the configuration. Manifests in both tools are written with the declarative approach. 

4) State Management

The notion of state exists in both tools, although it is implemented differently. Both Terraform and Kubernetes apply logic to reconcile the desired state defined in declarative configuration files with the running state.

5) Extensibility

Both tools are highly extensible by leveraging external plugins, connecting to external APIs, or defining custom resources if necessary. 

6) Well-Suited for Scale

Terraform and Kubernetes are battle-tested technologies that can support huge scale since they are designed and architected with scaling considerations for modern cloud-native environments.

7) CI/CD Compatibility

Since both tools offer easily automatable workflows, they can be integrated with CI/CD pipelines to automate their lifecycle.

Kubernetes and Terraform synergies

Putting together everything we have discussed, we can see that Kubernetes and Terraform complement each other: they operate at two different levels and can be used in parallel.

A typical model that cloud practitioners adopt is to use Terraform to provision infrastructure resources (e.g. Kubernetes clusters) and use Kubernetes to manage the containerized apps that run on top of the clusters. 

Terraform’s approach simplifies and standardizes the complex task of provisioning Kubernetes clusters. Terraform, in this case, enables a unified flow for provisioning Kubernetes clusters across providers with a declarative approach that is preferred over command line utilities. This approach works great, but users must use separate flows to manage infrastructure and application resources. 
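
As a rough sketch of this model, the AWS provider’s aws_eks_cluster resource can provision an EKS cluster; the IAM role and subnet references below are assumed to be defined elsewhere in the configuration:

resource "aws_eks_cluster" "this" {
  name     = "demo-cluster"
  role_arn = aws_iam_role.eks_cluster.arn        # IAM role defined elsewhere

  vpc_config {
    subnet_ids = module.network.private_subnets  # subnets defined elsewhere
  }
}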

Another approach is to use Terraform to manage Kubernetes-specific application components as well. This model has the advantage of adding the Terraform workflow to Kubernetes components. This way, IT operators can detect configuration drift on Kubernetes and manage infrastructure and application resources with the same workflow and configuration language. 
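
A minimal sketch of this model using the Terraform kubernetes provider is shown below; the kubeconfig path and namespace name are assumptions:

provider "kubernetes" {
  config_path = "~/.kube/config"   # reuse the local kubeconfig for authentication
}

resource "kubernetes_namespace" "apps" {
  metadata {
    name = "demo-apps"
  }
}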

This approach has a significant disadvantage: Terraform requires a well-defined schema for each managed resource, so each Kubernetes resource type needs to be translated into a Terraform schema before it becomes available. This dependency can make maintaining Kubernetes resources through Terraform cumbersome at times.

Kubernetes Cloud Operators

Kubernetes Cloud Operators manage, configure, and integrate cloud-specific resources with a Kubernetes cluster. Cloud providers usually develop these operators to facilitate integrations with their respective services.

They facilitate the following use cases:

  • Service Provisioning: You can automatically provision services required by your microservices. For example, if your application requires a database, you can use a k8s operator to provision it for you.
  • Secrets Integrations: When you work with k8s, you will most likely have many ConfigMaps and Secrets defined. Sometimes, sharing these secrets with some of your infrastructure resources can be tricky. Operators make this easier, as you can seamlessly integrate your k8s resources with your secrets management tool.
  • GitOps workflows: Using operators in the cloud context can facilitate your GitOps workflow, as you declare both your infrastructure and your applications in the same way.

There are some advantages to using Kubernetes Cloud Operators, but there are also many downsides:

  • Limited service coverage – Thanks to extensive community work, Terraform providers come close to supporting everything a cloud provider offers. K8s cloud operators don’t have as large a community backing them, and for some services only basic operations are available, requiring fallback to other services or methods.
  • Troubleshooting complexity – Using k8s cloud operators increases troubleshooting complexity: you need to troubleshoot not only your code and the resource you are provisioning but also the operator itself, which is harder to do.
  • Learning curve – Terraform is more popular among DevOps engineers than k8s cloud operators are, so managing infrastructure resources via k8s steepens the learning curve.
  • Vendor lock-in – Managing your infrastructure with a k8s cloud operator ties it to one cloud provider, and migrating will be harder than with Terraform.
  • Maturity – Some k8s cloud operators are newer and might not be as stable as tools such as Terraform, Pulumi, or CloudFormation.

Some popular Cloud Operators provided by major cloud providers are AWS Controllers for Kubernetes (ACK), Azure Service Operator, and Google Cloud’s Config Connector.

Building an AWS infrastructure by using Terraform and AWS Controllers for Kubernetes (ACK)

AWS ACK configuration

AWS ACK can be deployed on any k8s cluster, and for this example, I will be using an EKS cluster.

The recommended way to install an ACK Service Controller is by using Helm.

I already have the EKS cluster deployed, so the next step would be to deploy the S3 ACK service controller. 

This is done by logging into the ECR public registry, getting the correct Helm chart for your service, and running helm install, similar to the commands below:

# Select the ACK service controller to install and look up its latest release version
export SERVICE=s3
export RELEASE_VERSION=$(curl -sL https://api.github.com/repos/aws-controllers-k8s/${SERVICE}-controller/releases/latest | jq -r '.tag_name | ltrimstr("v")')
export ACK_SYSTEM_NAMESPACE=ack-system
export AWS_REGION=us-west-2

# Log in to the ECR public registry and install the controller's Helm chart
aws ecr-public get-login-password --region us-east-1 | helm registry login --username AWS --password-stdin public.ecr.aws
helm install --create-namespace -n $ACK_SYSTEM_NAMESPACE ack-$SERVICE-controller \
  oci://public.ecr.aws/aws-controllers-k8s/$SERVICE-chart --version=$RELEASE_VERSION --set=aws.region=$AWS_REGION

You can check the status of the controller by running:

kubectl --namespace ack-system get pods -l "app.kubernetes.io/instance=ack-s3-controller"

NAME                                          READY   STATUS    RESTARTS   AGE
ack-s3-controller-s3-chart-646b5bd457-vbvxb   1/1     Running   0          11s

Now, ACK is deployed, but we still need to configure it before we are able to use it.

The next step is to configure IAM Roles for Service Accounts (IRSA).

First, ensure you have eksctl installed on your machine. Check out this guide to see how to install it.

Create an OpenID Connect (OIDC) identity provider for your EKS cluster:

export EKS_CLUSTER_NAME=<eks cluster name>
export AWS_REGION=<aws region id>
eksctl utils associate-iam-oidc-provider --cluster $EKS_CLUSTER_NAME --region $AWS_REGION --approve

Next, create a bash script based on this and run it. This will create an IAM role for your ACK service controller. You will also need to attach an IAM policy to that role. For that, you can create another bash script based on this and run it. The policy can be changed to satisfy your needs.

Associate the IAM role you’ve created with the ACK service account like so:

export IRSA_ROLE_ARN=eks.amazonaws.com/role-arn=$ACK_CONTROLLER_IAM_ROLE_ARN
kubectl annotate serviceaccount -n $ACK_K8S_NAMESPACE $ACK_K8S_SERVICE_ACCOUNT_NAME $IRSA_ROLE_ARN

To ensure that everything works properly, you will also need to create a cluster role and a cluster role binding for the leader-election leases.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: ack-s3-controller-leader-election
rules:
- apiGroups:
  - "coordination.k8s.io"
  resources:
  - "leases"
  verbs:
  - "get"
  - "list"
  - "watch"
  - "create"
  - "update"
  - "delete"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ack-s3-controller-leader-election-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ack-s3-controller-leader-election
subjects:
- kind: ServiceAccount
  name: ack-s3-controller
  namespace: ack-system

Apply the above configuration to ensure the ClusterRole and ClusterRoleBinding are created successfully.
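
Assuming you saved both manifests in a file named ack-s3-rbac.yaml (an arbitrary name), applying them looks like this:

kubectl apply -f ack-s3-rbac.yaml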

Now, restart your deployment to take all changes into consideration:

kubectl get deployments -n $ACK_K8S_NAMESPACE
kubectl -n $ACK_K8S_NAMESPACE rollout restart deployment <ACK deployment name>

AWS ACK S3 bucket creation

To create an S3 bucket, you just need to define a configuration for it similar to:

apiVersion: s3.services.k8s.aws/v1alpha1
kind: Bucket
metadata:
  name: k8s-bucket-demo
spec:
  name: k8s-bucket-demo

Save it to a file and run:

kubectl apply -f s3_bucket.yaml

You can head over to your AWS account and see the S3 bucket created.

If you face any issues and don’t see your bucket created in AWS, even though kubectl apply ran successfully, you will need to check the controller logs to understand what happened.
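
One way to do that is to read the logs of the controller deployment installed earlier and to describe the Bucket resource itself; the deployment name below matches the pod shown earlier in this example:

kubectl logs -n ack-system deployment/ack-s3-controller-s3-chart
kubectl describe bucket k8s-bucket-demo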

Terraform S3 bucket creation

The process of installing and using Terraform is pretty straightforward. Simply go here and download the correct Terraform version for your operating system.

The following code will create an S3 bucket named terraform-test-bucket:

provider "aws" {
  region = "eu-west-1"
}

resource "aws_s3_bucket" "this" {
  bucket = "terraform-test-bucket"
}

You will simply need to run the following commands, and you are good to go:

terraform init
terraform apply -auto-approve

After you run the apply, your resource will be created, provided there are no errors.

As mentioned above, with k8s operators you can still get surprises in your environment even though kubectl apply reports success. The controller is the source of truth, so that’s where you need to look at logs to understand what has happened.

With Terraform, this is not the case: if your apply succeeds, your resources have been provisioned successfully, and if there are any issues, you will see them immediately when the apply fails.

As you can see, Kubernetes operators can achieve the same result as Terraform, but they are harder to deploy and maintain, and you have to troubleshoot in multiple places. With Terraform, the process is easy, and you won’t get the false positives you can get by using operators.

Kubernetes and Terraform with Spacelift

Spacelift supports both Terraform and Kubernetes and enables users to create stacks based on them. Leveraging Spacelift, you can build CI/CD pipelines to combine them and get the best of each tool. This way, you will use a single tool to manage your Terraform and Kubernetes resources lifecycle, allow your teams to collaborate easily, and add some necessary security controls to your workflows.

You could, for example, deploy Kubernetes clusters with Terraform stacks and then, on separate Kubernetes stacks, deploy your containerized applications to those clusters. With this approach, you can easily integrate drift detection into your Kubernetes stacks and enable your teams to manage all your stacks from a single place.

To take this one step further, you could add custom policies to harden the security and reliability of your configurations and deployments. Spacelift provides different types of policies and workflows easily customizable to fit every use case. You could, for instance, add plan policies to restrict or warn about security or compliance violations or approval policies to add an approval step during deployments. The possibilities are endless with Spacelift since it provides a great way to blend Terraform and Kubernetes and enhance their capabilities with extra functionality. 

Take a look at the Getting Started Guide to liftoff with Spacelift!

Key points

We delved into two of the most used modern DevOps tools, Kubernetes and Terraform. We discovered what makes each of them appealing and what functionalities they provide to IT operators and developers. We discussed their similarities, differences, and synergies and explored ways to combine them with Spacelift.

Thank you all for reading, and I hope you enjoyed this as much as I did.
