
Terraform Kubernetes Provider: Manage & Deploy Resources


When it comes to managing Infrastructure as Code, Terraform is the go-to choice for the modern engineer. As well as using Terraform providers specific to the cloud platform hosting your K8S cluster, such as azurerm for Azure Kubernetes Service (AKS) or aws for Elastic Kubernetes Service (EKS), you can also use the native kubernetes provider to deploy and manage objects on your K8S cluster directly.

In this article, we will dive into how to use the kubernetes Terraform provider, first obtaining a token from the AKS cluster to authenticate. Once connected, we will deploy K8S manifests described in HCL (HashiCorp Configuration Language). Manifests are usually written in YAML, which can get a bit overwhelming; using HCL to define your K8S deployments can be a lot nicer.

To set up your AKS cluster in the first place, it is recommended to use the Terraform provider for your cloud platform of choice, which for Azure is azurerm. You can use the azuread provider for Terraform to set up Azure Active Directory authentication to your AKS cluster. In this article, we won’t focus on the setup of the AKS cluster itself but rather on the kubernetes provider used to deploy objects onto the cluster.

If you need to set up AKS, check out the Provision Azure Kubernetes Service (AKS) Cluster Using Terraform article to learn how to use the Terraform registry module to deploy a test cluster with just four lines of code!

Set up the Kubernetes Provider in Terraform

The easiest way to set up the Kubernetes provider with AKS is to first use the Azure CLI command below to get credentials:

az aks get-credentials --resource-group myResourceGroup --name myAKSCluster

Next, configure the kubernetes provider block by supplying a path to your kubeconfig file using the config_path attribute or using the KUBE_CONFIG_PATH environment variable.

A kubeconfig file may contain multiple contexts. If config_context is not specified, the provider uses the current context from the kubeconfig file.

provider "kubernetes" {
  config_path = "~/.kube/config"
}
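If your kubeconfig holds several clusters, you can also pin the provider to a specific context with the config_context attribute. The context name below is a placeholder; list the ones available to you with kubectl config get-contexts:

```hcl
provider "kubernetes" {
  config_path    = "~/.kube/config"
  config_context = "myAKSCluster" # hypothetical context name from your kubeconfig
}
```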

Another way to configure the provider, shown below, supplies the cluster host and CA certificate directly and uses the exec plugin to run the kubelogin command, which obtains an AAD token for the cluster.

The service principal is created along with the AKS cluster deployment, and it can be referenced here to extract the server ID and client secret. The client ID is pulled from the Azure AD application, and the Azure AD tenant ID is read from the current subscription.

The command is set up to log in with the spn method. Pulled together, this information is enough to request a login token for the AKS cluster.

providers.tf

terraform {
  required_version = ">= 1.3.7"
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = ">= 3.41.0"
    }
    azuread = {
      source  = "hashicorp/azuread"
      version = ">= 2.33.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = ">= 2.17.0"
    }
  }
}

provider "azurerm" {
  features {}
}

provider "kubernetes" {
  host                   = azurerm_kubernetes_cluster.aks.kube_config.0.host
  cluster_ca_certificate = base64decode(azurerm_kubernetes_cluster.aks.kube_config.0.cluster_ca_certificate)

  # using kubelogin to get an AAD token for the cluster.
  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "kubelogin"
    args = [
      "get-token",
      "--environment",
      "AzurePublicCloud",
      "--server-id",
      data.azuread_service_principal.aks_aad_server.application_id, # Note: The AAD server app ID of AKS Managed AAD is always 6dae42f8-4368-4678-94ff-3960e28e3630 in any environments.
      "--client-id",
      azuread_application.app.application_id, # SPN App Id created via terraform
      "--client-secret",
      azuread_service_principal_password.spn_password.value,
      "--tenant-id",
      data.azurerm_subscription.current.tenant_id, # AAD Tenant Id
      "--login",
      "spn"
    ]
  }
}

Create a Namespace

Now that the provider is configured to authenticate to our cluster, we need to create a namespace for our new resources.

The example file below takes a variable for the name called var.kube_namespace and sets an annotation and label.

ns.tf

resource "kubernetes_namespace_v1" "ns" {

  metadata {
    name = var.kube_namespace

    annotations = {
      name = "This blog post is amazing"
    }

    labels = {
      tier = "frontend"
    }
  }
}
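The var.kube_namespace variable referenced above is never declared in the article, so you will need a declaration alongside it. A minimal sketch (the file name and default value here are assumptions):

```hcl
# variables.tf (assumed file name)
variable "kube_namespace" {
  description = "Name of the Kubernetes namespace to create"
  type        = string
  default     = "demo" # hypothetical default; override with -var or a .tfvars file
}
```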

Create a Pod

The example below shows how to create a pod running an NGINX image. It pins the image version, declares container port 80 (the port the stock NGINX image listens on by default), and sets up a liveness probe that sends a custom HTTP header.

The namespace name is referenced from the ns resource we created in ns.tf.

NGINX is an open-source reverse proxy server for HTTP, HTTPS, SMTP, POP3, and IMAP protocols, as well as a load balancer, HTTP cache, and a web server (origin server).

pod_nginx.tf

resource "kubernetes_pod_v1" "pod_nginx" {
  metadata {
    name      = "nginx"
    namespace = kubernetes_namespace_v1.ns.metadata.0.name
  }

  spec {
    container {
      image = "nginx:1.23.3"
      name  = "nginx"

      env {
        name  = "environment"
        value = "dev"
      }

      port {
        # The stock NGINX image listens on port 80; probing another port would fail.
        container_port = 80
      }

      liveness_probe {
        http_get {
          path = "/"
          port = 80

          http_header {
            name  = "X-Custom-Header"
            value = "GreatBlogArticle"
          }
        }

        initial_delay_seconds = 2
        period_seconds        = 2
      }
    }
  }
}

Create a Deployment

The example deployment file below creates our NGINX deployment in our namespace and specifies three replicas with resource limits.

deployment.tf

resource "kubernetes_deployment_v1" "deploy" {
  metadata {
    name      = "deploy-nginx"
    namespace = kubernetes_namespace_v1.ns.metadata.0.name

    labels = {
      tier = "frontend"
    }
  }

  spec {
    replicas = 3

    selector {
      match_labels = {
        tier = "frontend"
      }
    }

    template {
      metadata {
        labels = {
          tier = "frontend"
        }
      }

      spec {
        container {
          image = "nginx:1.23.3"
          name  = "nginx"

          resources {
            limits = {
              cpu    = "1"
              memory = "256Mi"
            }
            requests = {
              cpu    = "500m"
              memory = "30Mi"
            }
          }

          liveness_probe {
            http_get {
              path = "/"
              port = 80 # the stock NGINX image listens on port 80

              http_header {
                name  = "X-Custom-Header"
                value = "GreatBlogArticle"
              }
            }

            initial_delay_seconds = 2
            period_seconds        = 2
          }
        }
      }
    }
  }
}

Create a Service

The example below creates a frontend service in the namespace we created earlier in ns.tf.

It uses the tier selector, a frontend port of 4444, a backend (target) port of 80, and is of type ‘LoadBalancer’ (which exposes the traffic publicly using a public IP).

svc.tf

resource "kubernetes_service_v1" "svc" {
  metadata {
    name      = "frontend-svc"
    namespace = kubernetes_namespace_v1.ns.metadata.0.name
  }
  spec {
    selector = {
      tier = kubernetes_deployment_v1.deploy.spec.0.template.0.metadata.0.labels.tier
    }
    port {
      port        = 4444
      target_port = 80 # the stock NGINX image listens on port 80
    }

    type = "LoadBalancer"
  }
}
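Once applied, the public IP Azure assigns to the LoadBalancer can be surfaced as a Terraform output. A sketch, relying on the status attribute the kubernetes_service_v1 resource exports after the cloud finishes provisioning the ingress:

```hcl
output "frontend_public_ip" {
  # Populated once the cloud load balancer has been provisioned and an IP assigned.
  value = kubernetes_service_v1.svc.status.0.load_balancer.0.ingress.0.ip
}
```

After terraform apply completes, the IP appears in the outputs and the service is reachable on port 4444 at that address.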

To explore more capabilities of the kubernetes Terraform provider, check out the official Terraform Registry documentation.

It is best practice to secure your cluster using Azure AD role-based access control (RBAC), but if you want to use different authentication methods, you can check out the HashiCorp guides here.

Also, take a look at how Spacelift helps you manage the complexities and compliance challenges of using Terraform and Kubernetes. If you need help managing your Terraform infrastructure, building more complex workflows based on Terraform, or managing AWS credentials per run instead of using a static pair on your local machine, Spacelift is a fantastic tool. It supports Git workflows, policy as code, programmatic configuration, context sharing, drift detection, and many more great features right out of the box.

Key Points

You can use the kubernetes Terraform provider to manage objects on your K8S cluster. When combined with the cloud provider for your Kubernetes service, like azurerm for Azure, you can provision the cluster itself and deploy objects into it, all using Terraform, avoiding YAML manifest files!

Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open source. OpenTofu is an open-source fork of Terraform (forked from version 1.5.6) that expands on Terraform’s existing concepts and offerings, making it a viable alternative to HashiCorp’s Terraform. OpenTofu retains all the features and functionality that made Terraform popular among developers while introducing its own improvements and enhancements. OpenTofu does not ship its own providers and modules, but it serves them from its own registry.
