Kubernetes is an open-source container orchestration system for deploying and managing containerized applications. It can run on virtual or physical machines, or be consumed as a managed service from multiple cloud providers. Organizations of all sizes have widely adopted Kubernetes, and it has become the de facto standard for container orchestration.
In this article, we will briefly take a look at what a K8s cluster is. We will then look at the components of the master node and worker nodes that make up a K8s cluster.
A K8s cluster is made up of a group of nodes. Nodes are machines that are linked together to form a ‘cluster’.
A Kubernetes cluster also includes a control plane, which is responsible for managing and orchestrating the containers across the nodes and for maintaining the desired state. The goal of K8s is to maximize application uptime by scheduling applications across nodes in the most efficient manner, balancing resource utilization and redundancy.
Nodes can be thought of as individual compute resources, so pretty much any group of compute resources can be used. Usually, nodes are made up of a group of virtual machines if the platform is running in the cloud behind the veil of a PaaS (Platform-as-a-service) offering. They can also be run on serverless offerings, such as Amazon Fargate. If the K8s cluster is running on-premises, then nodes can be physical machines. You can even run a K8s cluster on a group of Raspberry Pis!
If you want easy and efficient maintenance of your K8s cluster, see 15 Kubernetes Best Practices Every Developer Should Know.
A K8s cluster consists of two types of nodes: master nodes and worker nodes.
Master nodes host the K8s control plane components. The master node will hold configuration and state data used to maintain the desired state. The control plane maintains communication with the worker nodes in order to schedule containers efficiently. In production, the control plane runs across multiple nodes to ensure redundancy should one fail.
Worker nodes are so-called because they run pods. Pods are usually single instances of an application. Containers run inside pods, and pods run on nodes. Each cluster always contains at least one worker node, and a cluster of at least three worker nodes is recommended for redundancy purposes.
Read more about Kubernetes Architecture components.
- API server (kube-apiserver)
The REST API acts as a front door to the control plane. Any requests to modify the cluster come through the API, as do requests from worker nodes. If a request is valid, the API server executes it. Communication with the cluster occurs either via the REST API or via command-line tools like kubectl.
- Scheduler (kube-scheduler)
The scheduler determines which node each resource will be deployed to within the K8s cluster. It schedules pods of containers across nodes, making its decisions based on monitored resource utilization, and places resources on healthy nodes that can fulfill the requirements of the application.
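As an illustrative sketch, the resource requests in a pod spec are the main signal the scheduler uses when choosing a node. All names and values below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app        # hypothetical name
spec:
  containers:
    - name: demo
      image: nginx:1.25
      resources:
        requests:        # the scheduler only places the pod on a node with this much free capacity
          cpu: "250m"
          memory: "128Mi"
        limits:          # upper bound enforced at runtime
          cpu: "500m"
          memory: "256Mi"
```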
- Controller (kube-controller-manager)
The controller manager runs a number of background control loops and initiates any requested changes to the state, ensuring that the cluster state remains consistent. It can identify failed nodes and take corrective action, and it runs jobs (one-time tasks) as required. It also creates default service accounts and API access tokens whenever a new namespace is created.
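For instance, a one-time task can be declared as a Job; the controller manager's job controller then ensures the pod runs to completion. The name, image, and command here are illustrative:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-time-task    # hypothetical name
spec:
  template:
    spec:
      containers:
        - name: task
          image: busybox:1.36
          command: ["sh", "-c", "echo task complete"]
      restartPolicy: Never   # Jobs must not restart indefinitely
  backoffLimit: 3            # retry a failed pod up to three times
```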
- Etcd (key-value store)
Configuration and state data is stored in etcd, a distributed, fault-tolerant key-value store that all nodes in the cluster can access. Because it provides effective control over the cluster, etcd is a target for malicious actors and should be secured.
- Cloud Controller Manager (cloud-controller-manager)
An optional component, the cloud controller manager links the K8s API to the cloud provider's API. Once linked, it can enable the cluster to scale horizontally. Components that interact with the cloud provider are separated from those that only interact with the K8s API to optimize performance.
- Pods & Containers
Worker nodes are so-called because they run the pods that do the work. Pods are usually single instances of the application itself. Containers run inside pods, and pods run on nodes. If more resources are needed in the K8s cluster to run the workloads, more nodes can be added to the cluster.
Each worker node runs a small application called the kubelet, which enables communication with the control plane. If the control plane needs to make any changes on a worker node, the request goes via the kubelet.
Each worker node also runs a component called kube-proxy, which controls networking within and outside the cluster. Traffic can be forwarded by kube-proxy itself or by using the operating system's packet filtering layer. It acts as a load balancer, forwarding requests addressed to services to the correct pods.
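In practice, the load balancing kube-proxy performs is driven by Service objects. A minimal sketch of a Service that forwards traffic to a set of pods (the name, label, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service     # hypothetical name
spec:
  selector:
    app: demo            # traffic is balanced across pods labeled app: demo
  ports:
    - port: 80           # port the service listens on
      targetPort: 8080   # port the pods' containers listen on
  type: ClusterIP        # internal-only virtual IP
```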
Creating a Kubernetes cluster can be challenging if you install all the components manually. That said, if you have never done it before, following one of the manual installation processes is a recommended way to get familiar with all of the components.
On the other hand, if you want to get up and running quickly, Kind might be a very easy solution to use on your local machine or a VM. It uses Docker containers to simulate Kubernetes nodes, making it possible to create and run a Kubernetes cluster with a single command. Additionally, Kind is compatible with many popular Kubernetes tools, including kubectl and Helm, making it easy to use in existing workflows. There are also other lightweight tools available for this, like Minikube or K3s.
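As a sketch, a multi-node Kind cluster can be described in a small config file and created with `kind create cluster --config kind-config.yaml`. The node counts here are arbitrary:

```yaml
# kind-config.yaml -- each node below runs as a Docker container
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
  - role: worker
```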
If you are using a Cloud Provider like AWS, Azure, GCP, or OCI, you can easily use their managed Kubernetes Services like EKS, AKS, GKE, and OKE in order to spin up a Kubernetes cluster in a couple of minutes.
A K8s cluster has a desired state, usually defined in multiple YAML (or JSON) files.
The desired state means that rather than issuing specific individual instructions to the system, the desired end state is described and K8s decides which commands to issue to achieve the end goal.
The YAML files define things such as which pods to run, how many pods to run, and the amount of resources the pods require. Requests are made to a K8s cluster through the K8s API, using kubectl via the command line or via the REST API directly.
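For example, a Deployment manifest like the following declares a desired state of three replicas; K8s then works out which actions to take to reach and maintain that state. The name and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web              # hypothetical name
spec:
  replicas: 3            # desired state: three identical pods at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          resources:
            requests:
              cpu: "100m"
              memory: "64Mi"
```

Applying it with `kubectl apply -f deployment.yaml` submits the desired state to the API server; if a pod later dies, the controller manager notices the drift and schedules a replacement.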
You can also take a look at how to maintain operations around the Kubernetes Cluster and how Kubernetes is integrated into Spacelift. Spacelift helps you manage the complexities and compliance challenges of using Kubernetes. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change. If you aren’t already using Spacelift, sign up for a free trial.
Understanding the components that make up a K8s cluster is important to give you the foundations in K8s operations. Knowing what each component of the cluster does aids in the operation, management, and troubleshooting of any issues that might arise with the cluster.
Automation and Collaboration Layer for Infrastructure as Code
Spacelift is a flexible orchestration solution for IaC development. It delivers enhanced collaboration, automation, and controls to simplify and accelerate the provisioning of cloud-based infrastructures.