Kubernetes Ingress with NGINX Ingress Controller Example

In this article, we explore the Ingress object in Kubernetes (K8S), and look at how it can be used with some examples. We will then walk step-by-step through setting up an NGINX Ingress controller with Azure Kubernetes Service (AKS).

What is Ingress in Kubernetes?

Ingress in K8S is an object that allows access to services within your cluster from outside the cluster.

The official documentation on kubernetes.io describes Ingress:

An API object that manages external access to the services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination and name-based virtual hosting.

Traffic routing is defined by rules specified on the Ingress resource.

Ingress objects only handle HTTP and HTTPS traffic into your cluster services. They do not expose other ports or protocols to the wider world; for that, a service of type LoadBalancer or NodePort should be used.

A Service is an abstraction that exposes a logical set of Pods. Services use a ‘virtual IP address’ that is local to the cluster, so external clients would have no way to reach these IP addresses without an Ingress.

Ingress, LoadBalancer, and NodePort

Ingress, LoadBalancer, and NodePort are all ways of exposing services within your K8S cluster for external consumption.

NodePort and LoadBalancer let you expose a service by specifying that value in the service’s type.

With a NodePort, K8S allocates a specific port on each node to the specified service. Any request the cluster receives on that port is simply forwarded to the service.
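
For illustration, here is a minimal sketch of a NodePort service for a hypothetical app labelled my-app (the names are placeholders; the nodePort must fall within the cluster’s NodePort range, 30000–32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport        # hypothetical service name
spec:
  type: NodePort
  selector:
    app: my-app                # hypothetical pod label
  ports:
  - port: 80                   # port exposed inside the cluster
    targetPort: 80             # port the pods listen on
    nodePort: 30080            # port opened on every node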

With a LoadBalancer, an external load balancer outside of the K8S cluster provides the public IP address. In Azure, this is an Azure Load Balancer in front of your Azure Kubernetes Service (AKS) cluster. In AWS, this is a Classic or Network Load Balancer in front of your Elastic Kubernetes Service (EKS) cluster, and in Google Cloud, this is a Network Load Balancer in front of your Google Kubernetes Engine (GKE) cluster.

Each time a new service is exposed, a new load balancer needs to be created to get a public IP address. Conveniently, the load balancer provisioning happens automatically because of the cloud provider’s integration with Kubernetes, so it doesn’t have to be done separately.
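
A LoadBalancer service looks almost identical to the NodePort sketch above, only the type changes; applying something like the following on AKS, EKS, or GKE causes the cloud provider to provision the load balancer and assign an external IP automatically (again, the names are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app-lb              # hypothetical service name
spec:
  type: LoadBalancer
  selector:
    app: my-app                # hypothetical pod label
  ports:
  - port: 80                   # external port on the load balancer
    targetPort: 80             # port the pods listen on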

Ingress is a completely independent resource from your services. As well as enabling routing rules to be consolidated in one place (the Ingress object), this has the advantage of being a separate, decoupled entity that can be created and destroyed independently of any service.

Ingress Controllers

To set up Ingress in K8S, you need to configure an Ingress controller. Ingress controllers do not come with the cluster by default and must be installed separately. An Ingress controller is typically a reverse proxy server implementation running in the cluster.

There are many available Ingress controllers, all of which have different features. The official documentation lists the available Ingress controllers. A few commonly used ones include the NGINX Ingress Controller, Traefik, HAProxy Ingress, Istio Ingress, Kong Ingress Controller, and the Azure Application Gateway Ingress Controller (AGIC).

You can have multiple ingress controllers in a cluster mapped to multiple load balancers should you wish!
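
When running more than one controller, each is typically represented by an IngressClass resource, and each Ingress object selects its controller via spec.ingressClassName. A minimal sketch (the controller value shown is the one used by the NGINX Ingress Controller; other controllers register their own):

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx                        # the class name that Ingress objects reference
spec:
  controller: k8s.io/ingress-nginx   # identifies the controller implementation that handles this class

An Ingress object then opts in to this controller by setting ingressClassName: nginx in its spec, as you will see in the manifest in Step 8.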

Learnk8s has a fantastic feature comparison of all the available Ingress controllers to help you make your choice. Note the limitations of the Azure Application Gateway Ingress Controller. For now, the NGINX Ingress Controller seems like a better choice…

Setting up Ingress with NGINX - Step by Step

NGINX is a widely used Ingress controller, so we will run through how to set it up with Azure Kubernetes Service (AKS). We will set up two simple web services and use the NGINX Ingress controller to route traffic between them.

Step 1 – Fire up your AKS cluster and connect to it

To do this, browse to the AKS cluster resource in the Azure Portal and click on connect. The commands needed to connect via your shell using the Azure CLI will be shown.
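
For reference, the connection commands generally look like the following sketch, with myResourceGroup and myAKSCluster as placeholder names:

az login
az account set --subscription <SUBSCRIPTION_ID>
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes    # verify connectivity by listing the cluster nodes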

Step 2 – Install the NGINX Ingress controller

The following command installs the controller in the ingress-nginx namespace, creating that namespace if it doesn’t already exist:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.3.0/deploy/static/provider/cloud/deploy.yaml

Keep all of your kubectl commands in one place with our Kubernetes Cheat Sheet.

Note that you can also install the controller with Helm if you have it available (you don’t need to run this if you have already installed the controller using the previous command):

helm upgrade --install ingress-nginx ingress-nginx --repo https://kubernetes.github.io/ingress-nginx --namespace ingress-nginx --create-namespace

Step 3 – Check the Ingress controller pod is running

kubectl get pods --namespace ingress-nginx
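
If you want to block until the controller is ready before moving on, the NGINX Ingress quick-start suggests a wait command along these lines:

kubectl wait --namespace ingress-nginx --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=120s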

Step 4 – Check the NGINX Ingress controller has been assigned a public IP address

kubectl get service ingress-nginx-controller --namespace=ingress-nginx

Note that the service type is LoadBalancer and that the EXTERNAL-IP column shows the public IP address assigned to the controller.
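
If you just want the external IP on its own (for scripting, for example), a jsonpath query such as this should return it:

kubectl get service ingress-nginx-controller --namespace ingress-nginx -o jsonpath='{.status.loadBalancer.ingress[0].ip}'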

Browsing to this IP address will show you the NGINX 404 page. This is because we have not set up any routing rules for our services yet.

Step 5 – Set up a basic web app for testing our new Ingress controller

First, we need to set up a DNS record pointing to the External IP address we discovered in the previous step. Once that is set, run the following command to set up a demo (replace the [DNS_NAME] with your record, e.g. www.jackwesleyroper.io).

Note that you must set up a DNS record; this step will not work with an IP address. This command comes from the NGINX documentation; we will look at declarative approaches later in this article.
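
The rule below routes traffic to a Service named demo on port 80, so that Service needs to exist first. If you don’t already have one, the NGINX quick-start creates a simple httpd-based demo app like this:

kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo --port=80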

kubectl create ingress demo --class=nginx --rule [DNS_NAME]/=demo:80
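
You can confirm the rule was created (the ADDRESS column may take a minute or two to populate):

kubectl get ingress demo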

Step 6 – Browse to the web address

You will see ‘It works!’ displayed, confirming that the Ingress controller is correctly routing traffic to the demo app.
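
You can also test from the command line with curl, substituting your own DNS record. ‘It works!’ is simply the default page served by the httpd image used for the demo app:

curl http://[DNS_NAME]/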

Step 7 – Set up two more web apps

Now we will set up two more web apps, and route traffic between them using NGINX.

We will create two YAML files using the demo apps from the official Azure documentation.

aks-helloworld-one.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-one  
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-one
  template:
    metadata:
      labels:
        app: aks-helloworld-one
    spec:
      containers:
      - name: aks-helloworld-one
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "Welcome to Azure Kubernetes Service (AKS)"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-one  
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-one

aks-helloworld-two.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: aks-helloworld-two  
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aks-helloworld-two
  template:
    metadata:
      labels:
        app: aks-helloworld-two
    spec:
      containers:
      - name: aks-helloworld-two
        image: mcr.microsoft.com/azuredocs/aks-helloworld:v1
        ports:
        - containerPort: 80
        env:
        - name: TITLE
          value: "AKS Ingress Demo"
---
apiVersion: v1
kind: Service
metadata:
  name: aks-helloworld-two  
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    app: aks-helloworld-two

Apply the two configuration files to set up the apps:

kubectl apply -f aks-helloworld-one.yaml --namespace ingress-nginx
kubectl apply -f aks-helloworld-two.yaml --namespace ingress-nginx

Check the new pods are running (you should see two aks-helloworld pods running):

kubectl get pods --namespace ingress-nginx

Step 8 – Set up the Ingress to route traffic between the two apps

We will set up path-based routing to direct traffic to the appropriate web app based on the URL the user enters. Traffic to EXTERNAL_IP/hello-world-one is routed to the service named aks-helloworld-one, and traffic to EXTERNAL_IP/hello-world-two is routed to the aks-helloworld-two service. Where no path is specified (EXTERNAL_IP/), traffic is routed to aks-helloworld-one.

Create a file named hello-world-ingress.yaml.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world-ingress
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /hello-world-one(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
      - path: /hello-world-two(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-two
            port:
              number: 80
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: aks-helloworld-one
            port:
              number: 80
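
Note how the rewrite-target annotation interacts with the regex paths: the second capture group, (.*), is what the backend actually receives. As a rough illustration with hypothetical request paths:

EXTERNAL_IP/hello-world-one                  ->  aks-helloworld-one receives /
EXTERNAL_IP/hello-world-one/static/app.css   ->  aks-helloworld-one receives /static/app.css
EXTERNAL_IP/hello-world-two                  ->  aks-helloworld-two receives /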

Create the Ingress resource:

kubectl apply -f hello-world-ingress.yaml --namespace ingress-nginx
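
Check that the Ingress has been created and has picked up the controller’s external IP address:

kubectl get ingress hello-world-ingress --namespace ingress-nginx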

Step 9 – Browse to EXTERNAL_IP/hello-world-one and EXTERNAL_IP/hello-world-two

Browsing to EXTERNAL_IP/hello-world-one displays the first demo app, with the title "Welcome to Azure Kubernetes Service (AKS)", while EXTERNAL_IP/hello-world-two displays the second app, with the title "AKS Ingress Demo". This confirms that the path-based routing rules are working as expected.

Key Points

An Ingress in K8S is a robust way to expose services within your K8S cluster to the outside world, and it allows you to consolidate routing rules in one place. There are many Ingress controllers available to choose from. In this article, we configured an NGINX Ingress controller on AKS and used it to route traffic between two demo apps.

For more information on Kubernetes Ingress, check out the Kubernetes documentation, Kubernetes GitHub and AKS documentation.

And if you want to learn how Spacelift can help you with Kubernetes management, see Spacelift documentation. Anything that can be run via kubectl can be run within a Spacelift stack.

The Most Flexible CI/CD Automation Tool

Spacelift is an alternative to using homegrown solutions on top of a generic CI. It helps overcome common state management issues and adds several must-have capabilities for infrastructure management.
