
Kubernetes Service – What It Is, Types & Examples


It’s important to understand how Services work and which options are available so you can correctly deploy your workloads to your Kubernetes clusters. In this guide, we’ll explain the different Service types and share some simple examples of how to create Services for your apps. Let’s begin!

We will cover:

  1. What is a Kubernetes Service?
  2. Kubernetes Service types
  3. How do Services relate to Ingresses?
  4. Example: How to use a Kubernetes Service

What is a Kubernetes Service?

Kubernetes Services are API objects that map network traffic to the Pods in your cluster. You need to create a Service whenever you expose a set of Pods over the network, whether within your cluster or externally.

Services are integral to the Kubernetes networking model. They provide important abstractions over lower-level networking components, which could behave differently between clouds.

Note: The term “service” is often used generically by other tools to refer to an app, workload, or deployment. However, a Kubernetes Service is a specific type of entity that always provides network access to an app or component.

Why are Services needed in Kubernetes?

Services are necessary because of the distributed architecture of Kubernetes clusters. Apps are routinely deployed as Pods that could have thousands of replicas, spanning hundreds of physical compute Nodes. When a user interacts with your app, their request needs to be routed to any one of the available replicas, regardless of where it’s placed.

Services sit in front of your Pods to achieve this behavior. All network traffic flows into the Service before being redirected to one of the available Pods. Your other apps can then communicate with the service’s IP address or DNS name to reliably access the Pods you’ve exposed.

DNS for Services is enabled automatically through the Kubernetes service discovery system. Each Service is assigned a DNS A or AAAA record in the format <service-name>.<namespace-name>.svc.<cluster-domain>. For example, a Service called demo in the default namespace of a cluster.local cluster will be accessible at demo.default.svc.cluster.local. This enables reliable in-cluster networking without having to look up Service IP addresses.
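For example, you can verify this resolution from inside the cluster with a short-lived Pod (the Pod name dns-test is arbitrary, and busybox is used here for its built-in nslookup tool):

$ kubectl run dns-test --rm -it --image=busybox:1.36 --restart=Never -- nslookup demo.default.svc.cluster.local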

Kubernetes Service types

All Kubernetes Services ultimately forward network traffic to a set of Pods they represent. However, several different types of Service exist with their own characteristics and use cases. Here’s how the five currently available options compare.

1. ClusterIP Services

ClusterIP Services assign an IP address that can be used to reach the Service from within your cluster. This type doesn’t expose the Service externally.

ClusterIP is the default Service type used when you don’t specify an alternative option. It’s the most common kind of Service you’ll use, as it enables simple internal networking for your workloads.

2. NodePort Services

NodePort Services are exposed externally through a specified static port binding on each of your Nodes. Hence, you can access the Service by connecting to the port on any of your cluster’s Nodes. NodePort Services are also assigned a cluster IP address that can be used to reach them within the cluster, just like ClusterIP Services.

Use of NodePort Services is generally inadvisable. They have functional limitations and can lead to security issues:

  • Anyone who can connect to the port on your Nodes can access the Service.
  • Each port number can only be used by one NodePort Service at a time to prevent conflicts.
  • Every Node in your cluster has to listen on the port by default, even if it’s not running a Pod that’s included in the Service.
  • No automatic load-balancing: clients are served by the Node they connect to.

When a NodePort Service is used, it’s usually to facilitate your own load-balancing solution, which reroutes traffic from outside the cluster. NodePorts can also be convenient for temporary debugging, development, and troubleshooting scenarios where you need to quickly test different configurations.

3. LoadBalancer Services

LoadBalancer Services are exposed outside your cluster using an external load balancer resource. This requires a connection to a load balancer provider, typically achieved by integrating your cluster with your cloud environment. Creating a LoadBalancer service will then automatically provision a new load balancer infrastructure component in your cloud account. This functionality is automatically configured when you use a managed Kubernetes service such as Amazon EKS or Google GKE.

Once you’ve created a LoadBalancer service, you can point your public DNS records to the provisioned load balancer’s IP address. This will then direct traffic to your Kubernetes Service. Therefore, LoadBalancers are the Service type you should normally use when you need an app to be accessible outside Kubernetes.

4. ExternalName Services

ExternalName Services allow you to conveniently access external resources from within your Kubernetes cluster. Unlike the other Service types, they don’t proxy traffic to your Pods.

When you create an ExternalName Service, you have to set the spec.externalName manifest field to the external address you want to route to (such as example.com). Kubernetes then adds a CNAME DNS record to your cluster that resolves the Service’s internal address (such as my-external-service.app-namespace.svc.cluster.local) to the external address (example.com). This allows you to easily change the external address in the future, without having to reconfigure the workloads that refer to it.
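As a sketch, the manifest for the hypothetical my-external-service Service described above could look like this:

apiVersion: v1
kind: Service
metadata:
  name: my-external-service
  namespace: app-namespace
spec:
  type: ExternalName
  externalName: example.com

No selector or ports fields are needed, as the Service only creates a DNS record.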

5. Headless Services

Headless Services are a special type of Service that doesn’t provide load balancing or a cluster IP address. They’re “headless” because Kubernetes doesn’t automatically proxy any traffic through them. This allows you to use DNS lookups to discover the individual IP addresses of any Pods selected by the Service.

A headless Service is useful when you want to interface with other service discovery systems without kube-proxy interfering. You can create one by explicitly setting a Service’s spec.clusterIP field to None.
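Here’s a minimal sketch of a headless Service, assuming Pods labeled app: nginx like the ones deployed in the example later in this guide:

apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None
  selector:
    app: nginx
  ports:
    - port: 80

A DNS query for nginx-headless.<namespace>.svc.cluster.local then returns the IP addresses of the individual Pods, rather than a single Service IP.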

Which Service type should I use?

The correct Kubernetes Service type for a particular workload primarily depends on whether you’ll need external access. If you’ll interact with a Service from outside Kubernetes, a LoadBalancer should be preferred. A NodePort Service can be useful instead if you can accept its tradeoffs, if a load balancer integration is unavailable, or if you plan to implement your own load-balancing solution.

For workloads that will only be accessed within your cluster—typically including database connections, caches, and other internal system components—ClusterIPs should be used to prevent inadvertently exposing the Service. Developers and operators can still connect to these Services from their workstations to debug problems and manually interact with workloads using the port-forwarding features available in Kubectl.
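For example, Kubectl’s port-forward command can tunnel a local port to an internal Service (nginx-clusterip here is the ClusterIP Service created in the example later in this guide):

$ kubectl port-forward service/nginx-clusterip 8080:8080

While the command runs, requests to localhost:8080 on your workstation are forwarded to a Pod behind the Service.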

Here’s a quick summary of how each Service type compares.

ClusterIP
  • Accessibility: Internal
  • Use case: Expose Pods to other Pods in your cluster
  • Suitable for: Internal communication between workloads
  • Client connection type: Stable cluster-internal IP address or DNS name
  • External dependencies: None

NodePort
  • Accessibility: External
  • Use case: Expose Pods on a specific port of each Node
  • Suitable for: Accessing workloads outside the cluster, for one-off or development use
  • Client connection type: Port on Node IP address
  • External dependencies: A free port on each Node

LoadBalancer
  • Accessibility: External
  • Use case: Expose Pods using a cloud load balancer resource
  • Suitable for: Serving publicly accessible web apps and APIs in production
  • Client connection type: IP address of external load balancer
  • External dependencies: A load balancer component (typically billable by your cloud provider)

ExternalName
  • Accessibility: Internal
  • Use case: Configure a cluster DNS CNAME record that resolves to a specified external address
  • Suitable for: Decoupling your workloads from direct dependencies on external service URLs
  • Client connection type: Stable cluster-internal DNS name (resolved via CNAME to the external address)
  • External dependencies: None

Headless
  • Accessibility: Internal
  • Use case: Interface with external service discovery systems
  • Suitable for: Advanced custom networking that avoids automatic Kubernetes proxying
  • Client connection type: Stable cluster-internal DNS name that resolves to the IP addresses of the individual Pods behind the Service
  • External dependencies: None

How do Services relate to Ingresses?

Ingresses are another type of Kubernetes networking object. They’re often used in conjunction with Services. Whereas Services manage networking within your cluster, Ingresses control external access, typically based on HTTP and HTTPS routes.

Ingresses make it much easier to run multiple apps in one cluster. Ideally, you want to avoid creating a new LoadBalancer service for every app, as your cloud provider will bill you for each load balancer resource you use. Ingresses allow you to work with one LoadBalancer service that reroutes traffic based on HTTP characteristics such as hostname and port. You can direct requests to app.example.com to your web-app Service, for example, while api.example.com targets the backend Service.

To use Ingresses, you have to run an Ingress controller inside your cluster. The controller creates a single LoadBalancer service that you direct all your external traffic to. When requests hit that service, the Ingress controller compares their characteristics to the Ingress rules you’ve created. It then forwards the requests to the Service indicated by the matching Ingress object.
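As a sketch, an Ingress object implementing the hostname-based routing described above might look like the following. It assumes an Ingress controller is already installed and that web-app and backend are ClusterIP Services listening on port 80:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
    - host: api.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: backend
                port:
                  number: 80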

Note: Ingress is a mature Kubernetes feature, but work is underway to replace it with the Gateway API, a modern evolution that’s more generic and offers improved separation of concerns. Gateway is installable as an optional add-on in Kubernetes v1.24+, but Ingress remains supported for the foreseeable future.

Example: How to use a Kubernetes Service

The following example demonstrates how to create and test simple ClusterIP, NodePort, and LoadBalancer Services in basic configurations. You can learn more about all the available options in the Kubernetes documentation.

For ease of use, we’ll deploy an NGINX web server Pod, but the app you expose could equally be a database, metrics agent, microservice, or any other workload that needs network access. You’ll need Kubectl and an existing Kubernetes cluster to follow along.

1. Deploy the sample app

First, copy the following Deployment manifest and save it as app.yaml in your working directory:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
          - containerPort: 80

The manifest deploys three replicas of the nginx:latest container image. An app: nginx label is applied to the Pods via the spec.template.metadata.labels field; this is the label your Services will reference in the following steps. The containerPort field within the Pod spec template indicates that the Pods expose port 80, the default NGINX web server port.

Use Kubectl to apply the Deployment manifest to your cluster:

$ kubectl apply -f app.yaml
deployment.apps/nginx created

Wait until all the deployment’s Pod replicas are ready:

$ kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   3/3     3            3           20s

2. Create a ClusterIP Service

Now your NGINX deployment is running, but you don’t have a way to access it. Although you could directly connect to the Pods, this doesn’t load balance and will lead to errors if one of the Pods becomes unhealthy or is replaced. Creating a Service allows you to route traffic between the replicas so you can reliably access the Deployment.

The following manifest defines a simple ClusterIP service:

apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 8080
      targetPort: 80

There are a few points to note in the manifest:

  • The spec.type field is set to ClusterIP as we’re creating a ClusterIP service.
  • The spec.selector field selects the NGINX Pods using the app: nginx label applied in the Deployment’s manifest.
  • The spec.ports field specifies that traffic to port 8080 on the Service’s Cluster IP address will be routed to port 80 at your Pods.

Save the manifest as clusterip.yaml, then add it to your cluster:

$ kubectl apply -f clusterip.yaml
service/nginx-clusterip created

Next, use Kubectl’s get services command to discover the cluster IP address that’s been assigned to the Service:

$ kubectl get services
NAME              TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
nginx-clusterip   ClusterIP   10.109.128.34   <none>        8080/TCP   10s

In this example, the service has the IP address 10.109.128.34. You can now connect to this IP from within your cluster in order to reach your NGINX Deployment, with automatic load balancing between your three Pod replicas.

To check, use kubectl exec to curl the IP address from inside one of your NGINX Pods:

$ kubectl exec deployment/nginx -- curl 10.109.128.34:8080
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

You got a successful response, proving the Service is working. You can also try connecting using the Service’s DNS name, such as nginx-clusterip.default.svc.cluster.local. Whichever method you use, port 8080 must be specified, as that’s the port the Service is configured to listen on.
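For example, assuming the Service was created in the default namespace of a cluster.local cluster, the DNS-based test should return the same NGINX welcome page:

$ kubectl exec deployment/nginx -- curl -s nginx-clusterip.default.svc.cluster.local:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>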

3. Create a NodePort Service

Now, let’s externally expose the Deployment using a NodePort Service. The manifest is similar to a ClusterIP Service—specify type: NodePort instead of type: ClusterIP and use the ports.nodePort field to set the Node port to listen on:

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      nodePort: 32000

You can omit the nodePort field, in which case Kubernetes automatically allocates a port from the NodePort range (30000-32767 by default). The manifest above specifies that Node port 32000 will direct traffic to port 80 on your app: nginx Pods.
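If you do let Kubernetes allocate the port automatically, you can look it up afterwards with Kubectl’s get service command; the PORT(S) column displays it in <port>:<nodePort> format:

$ kubectl get service nginx-nodeport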

Save the manifest as nodeport.yaml and use Kubectl to apply it:

$ kubectl apply -f nodeport.yaml
service/nginx-nodeport created

Next, find the IP address of one of your cluster’s Nodes:

$ kubectl get nodes -o wide
NAME       STATUS   ROLES           AGE     VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION     CONTAINER-RUNTIME
minikube   Ready    control-plane   5h32m   v1.26.1   192.168.49.2   <none>        Ubuntu 20.04.5 LTS   6.2.0-37-generic   docker://20.10.23

In this demo, we’re using a local Minikube cluster with no external IP address available. The internal IP address is reachable from the host machine, so the following example still works. For Nodes that have an external IP address assigned, such as those provisioned in your cloud account, you can access the Service using either the internal or external IP.

Accessing port 32000 on your Node’s IP address should now connect you to your NGINX deployment:

$ curl 192.168.49.2:32000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

4. Create a LoadBalancer Service

The simplest LoadBalancer Service looks very similar to ClusterIP Services:

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 8080
      targetPort: 80

Adding this Service to your cluster will attempt to use the configured load balancer integration to provision a new infrastructure component. If you created your cluster from a managed cloud service, this should result in a load balancer resource being added to your cloud account.

Save the manifest as lb.yaml, then apply it with Kubectl:

$ kubectl apply -f lb.yaml
service/nginx-lb created

If you are following along using Minikube, start a minikube tunnel session to expose your load balancer Service before continuing below.
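The tunnel runs in the foreground, so keep it open in a separate terminal session:

$ minikube tunnel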

Run Kubectl’s get services command with the output format set to wide to view the publicly accessible external IP address that’s been assigned to the load balancer:

$ kubectl get services -o wide
NAME              TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)          AGE     SELECTOR
nginx-lb          LoadBalancer   10.96.153.245   10.96.153.245   8080:30226/TCP   79s     app=nginx

You can reach the Service by connecting to its external IP address. Note that our demo Service is again configured to listen on port 8080 via its ports.port field, so this is the port you should target.

$ curl 10.96.153.245:8080
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

Key points

In this article, we’ve learned about Kubernetes Services and how they provide network access and DNS discovery functions for apps running in your cluster. You must use a Service to expose Pods to other networked workloads while load-balancing traffic between available replicas.

Services handle how traffic is routed inside your cluster. You can expose Services externally using Ingress objects (or the emerging Gateway API), a mechanism for directing HTTP requests between Services based on characteristics such as hostname, port, and URL.

We encourage you to also check out how Spacelift helps you manage the complexities and compliance challenges of using Kubernetes. Anything that can be run via kubectl can be run within a Spacelift stack. Find out more about how Spacelift works with Kubernetes, and get started on your journey by creating a free trial account.
