Kubernetes is the leading container orchestration system for automating container operations at scale. It makes it easy to distribute container replicas across a cluster of compute Nodes. A centralized control plane governs the cluster and ensures deployments stay healthy.
In this article, we’re going to take a deep dive into the Kubernetes control plane. We’ll explore all its components and their roles in the cluster. We’ll finish up by sharing some best practices that ensure the control plane functions reliably.
What we will cover:
- What is the Kubernetes control plane?
- What are the components of the Kubernetes control plane?
- How do worker nodes work with the Kubernetes control plane?
- How to configure the Kubernetes control plane
- Kubernetes control plane vs. data plane
- High availability and the Kubernetes control plane
- Kubernetes control plane best practices
The Kubernetes control plane is the management layer inside a Kubernetes cluster. It’s a collection of components that work together to manage the cluster’s state, coordinate your Nodes, and provide the API server that lets you interact with the cluster.
What is the purpose of the control plane?
The control plane’s main responsibilities include:
- Scheduling new Pods onto available Nodes
- Automatically starting new containers when failures occur
- Moving Pods onto different Nodes when a Node becomes unavailable
- Detecting and reconciling cluster config changes, such as by creating and deleting Pods after Deployments are resized
- Serving the Kubernetes API
- Storing all the data about objects in the cluster
- Infrastructure management through integrations with cloud provider APIs
To summarize, the control plane runs your cluster by implementing the system-level functions that Kubernetes requires. It monitors for new events in the cluster and applies any necessary actions. For instance, when you create a Pod, the control plane selects a suitable Node to run the Pod’s containers.
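As a concrete illustration, applying a manifest like the one below declares desired state: the control plane notices the new Deployment, the Deployment controller creates the Pods, and the Scheduler assigns them to Nodes. The names and image here are illustrative:

```yaml
# Illustrative Deployment: the control plane reconciles the cluster
# toward the three replicas declared here.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app          # hypothetical name
spec:
  replicas: 3             # desired state the control plane maintains
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: web
          image: nginx:1.27   # example image
```

If a Pod created from this spec fails, or you later change the replica count, the control plane detects the difference and reconciles the cluster back to the declared state.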
What is the difference between the master and control plane in Kubernetes?
The terms “master” and “control plane” in Kubernetes are often used interchangeably, but there is a subtle distinction.
Historically, “master node” referred to the physical or virtual machine where the control plane components resided. However, as Kubernetes evolved, the control plane could be distributed across multiple nodes for high availability and resilience. This led to the shift towards the term “control plane” to emphasize the functional role rather than a specific physical location.
Read more: What is Kubernetes Architecture? – Components Overview
The Kubernetes control plane is composed of several different components. They work together to implement the control plane’s features and successfully run your cluster.
The control plane consists of:
- API Server
- etcd
- Controller Manager
- Cloud Controller Manager
- Scheduler
Here’s a summary of the key components of the control plane and what they do.
1. API Server
The API Server (kube-apiserver) is your cluster’s core. It serves the HTTP API that enables cluster access.
The API powers user-facing tools like kubectl. Whenever you interact with your cluster, you’re relying on the API Server being available. The API Server also provides endpoints that Nodes use to fetch data from the cluster control plane.
Because the API server governs all cluster management operations, it’s vital that it stays healthy. It’s recommended to run multiple replicas of the API server, spread across several Nodes, so you can still access the API if one of your Nodes fails.
2. Etcd
etcd is a distributed key-value datastore. Kubernetes operates an etcd instance to store your cluster’s data. This includes config values, CRDs, and the states of objects in your cluster.
Some Kubernetes distributions can use other datastores instead of etcd. These alternatives are often better suited to their target use cases—K3s defaults to using SQLite, for example. Nonetheless, etcd is ideal for production clusters because it offers strong consistency guarantees. It reliably replicates data across control plane nodes and can quickly elect a new leader when failures occur.
3. Controller Manager
Many Kubernetes features are based on controllers. A controller watches for changes in your cluster and applies new actions as needed. Examples of controllers include the Deployment controller, which creates new Pods based on a Deployment object’s spec, and the CronJob controller, which enables periodic creation of new Jobs.
Controllers run continuously in an automated control loop. They’re governed by the Kubernetes Controller Manager (kube-controller-manager). This process is responsible for starting and maintaining individual controllers. It ensures the controllers operate reliably so your cluster’s state always matches its current configuration.
4. Cloud Controller Manager
Cloud Controller Manager (cloud-controller-manager, or CCM) is the Kubernetes control plane component that interfaces with your cloud provider’s API. Your cluster only includes this component when it’s integrated with a cloud provider, such as when you’re using a managed Kubernetes service.
CCM enables your cluster to manage resources in your cloud account. It automates infrastructure operations, such as adding a new cloud Load Balancer resource when you create a LoadBalancer-type Kubernetes Service. Cloud providers implement CCM support by building a plugin that sits between Kubernetes and their own API.
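For example, creating a Service like the following prompts the CCM to provision a matching load balancer in your cloud account (the name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-lb            # hypothetical name
spec:
  type: LoadBalancer       # triggers the cloud-controller-manager
  selector:
    app: demo-app          # Pods that receive the traffic
  ports:
    - port: 80             # port exposed by the cloud load balancer
      targetPort: 8080     # port the containers listen on
```

Once the cloud provider finishes provisioning, the CCM writes the load balancer’s external address back into the Service’s status.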
5. Scheduler
The Kubernetes Scheduler (kube-scheduler) assigns new Pods to Nodes. It watches the cluster for Pods without a Node and selects the most suitable Node to schedule them to.
Scheduling decisions involve many different factors. The scheduler compares each Node’s resource utilization to the Pod’s requests. It also considers the Pod’s affinity rules, node selectors, and taints and tolerations. Overall, the scheduler aims to distribute Pods evenly across the cluster’s Nodes to ensure good performance and fault tolerance.
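The scheduling inputs described above all live in the Pod spec. A sketch of a Pod that constrains the scheduler’s choice (labels, taint values, and resource figures are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  nodeSelector:
    disktype: ssd            # only Nodes labeled disktype=ssd qualify
  tolerations:
    - key: dedicated         # permits scheduling onto Nodes tainted
      operator: Equal        #   with dedicated=batch:NoSchedule
      value: batch
      effect: NoSchedule
  containers:
    - name: app
      image: nginx:1.27
      resources:
        requests:
          cpu: 250m          # compared against each Node's free capacity
          memory: 128Mi
```

The scheduler filters out Nodes that fail these constraints, then scores the remainder to pick the best fit.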
Kubernetes Nodes communicate with the cluster’s control plane using Kubelet, a dedicated agent process. Each worker Node runs its own instance of Kubelet. When you’re starting a cluster from scratch, you must manually install Kubelet on each of your worker Nodes to connect them to your cluster.
Kubelet registers the Node with the Kubernetes control plane, making it eligible to schedule Pods. The Scheduler can then allocate Pods to the Node, creating instructions that the API Server exposes as Pod specs. Kubelet regularly queries the API Server to learn which Pod specs it should be running.
Once a Pod has been scheduled, Kubelet is responsible for starting its containers. Kubelet uses the container runtime installed on the host to create new containers and then monitors them to ensure they stay running. If a container fails or turns unhealthy, then Kubelet will replace it with a new one.
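Kubelet’s monitoring behavior is typically driven by probes and restart policies in the Pod spec. A minimal sketch, with illustrative values:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  restartPolicy: Always      # Kubelet replaces containers that exit
  containers:
    - name: web
      image: nginx:1.27
      livenessProbe:         # tells Kubelet how to detect "unhealthy"
        httpGet:
          path: /
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10    # Kubelet probes every 10 seconds
```

If the liveness probe fails repeatedly, Kubelet kills the container and, per the restart policy, starts a replacement.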
Kube-Proxy is the second component that runs on every Node, alongside Kubelet. It implements the Kubernetes networking layer, allowing internal and external cluster traffic to reach the containers running on the Node.
The Kubernetes control plane components support various config values for tuning your cluster. Some of the most commonly used options include:
- Feature gates: A mechanism for opting in to alpha and beta features before they’re enabled by default.
- API server settings: You can customize the API server in several ways, such as by enabling audit logging, specifying SSL/TLS options, and choosing different authentication mechanisms.
- Etcd settings: You can pass settings through to your cluster’s etcd instance, letting you change how your cluster’s state is persisted.
- Kube-Scheduler settings: Settings can be used to change aspects of the scheduler’s operation, including how leader election works.
- Kubelet settings: Changing Kubelet settings lets you control how Nodes manage containers and interact with the control plane. For example, you can set the maximum size of container log files, manage eviction grace periods, and change how frequently Kubelet checks the API server for new data.
A complete reference of possible control plane options is available in the Kubernetes documentation.
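As one example of the options above, Kubelet settings are commonly supplied in a KubeletConfiguration file. The field values below are illustrative:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
containerLogMaxSize: 10Mi        # maximum size of a container log file
containerLogMaxFiles: 5          # log files retained per container
syncFrequency: 1m                # how often Kubelet syncs config state
evictionSoft:
  memory.available: 200Mi        # soft eviction threshold
evictionSoftGracePeriod:
  memory.available: 90s          # grace period before soft eviction
```

Kubelet reads this file at startup (via its --config flag), so changes take effect when the Kubelet process restarts.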
Options must be set when you start your cluster. The way to change them depends on which Kubernetes distribution you’re using:
- Managed cloud Kubernetes services such as Amazon EKS and Google GKE provision and manage the control plane for you. You can’t usually modify the control plane’s configuration yourself.
- Self-hosted Kubernetes distributions like Minikube and K3s normally let you configure the control plane when you start your cluster. For example, Minikube’s --extra-config flag passes key-value pairs to Kubernetes components, including the API server, Scheduler, and Kubelet.
- Clusters created with Kubeadm are configured by a ConfigMap in your cluster. This is created automatically based on Kubeadm CLI flags when you start your cluster. You can change settings by modifying the ConfigMap and then using Kubeadm to upgrade your Nodes.
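To sketch the Kubeadm case: the cluster-wide settings live in the kubeadm-config ConfigMap in the kube-system namespace, which wraps a ClusterConfiguration object like the following (the version and flag values are illustrative):

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.30.0       # illustrative version
apiServer:
  extraArgs:                     # extra flags passed to kube-apiserver
    audit-log-path: /var/log/kubernetes/audit.log
controllerManager: {}            # kube-controller-manager overrides
scheduler: {}                    # kube-scheduler overrides
```

Editing this ConfigMap and re-running Kubeadm’s upgrade flow on each Node applies the new settings to the static Pods that host the control plane components.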
In practice, the Kubernetes control plane shouldn’t need reconfiguring very often. Major distributions usually ship with sensible defaults that are ready for production use. Nonetheless, it’s useful to understand how the components can be customized so you can adapt your clusters to your environment. Adjusting config keys can also help you debug cluster problems.
The control plane and data plane work together to provide a robust and scalable platform for running your applications. The control plane makes decisions about how to manage the cluster, while the data plane executes those decisions and provides the resources needed to run your applications.
The table below summarizes the key differences between them:
| | Control plane | Data plane |
|---|---|---|
| Focus | Cluster management and orchestration | Workload execution and resource management |
| Components | API Server, etcd, Scheduler, Controller Manager | Nodes, Kubelet, Kube-Proxy, container runtime |
| Functionality | Decides where workloads should run and manages state | Executes workloads and manages runtime environments |
| Role in scaling | Ensures Pods scale based on policies | Runs new instances of Pods as directed |
The Kubernetes control plane can be configured for high availability (HA) to make your cluster more fault-tolerant. An HA control plane distributes replicas of each component across multiple Nodes. It removes the single point of failure that exists if you run your control plane on a single Node.
A “stacked” topology is the most common way to achieve control plane HA. Each control plane Node runs an instance of every control plane component, including a local etcd instance. This topology is used automatically when you use Kubeadm to add new control plane Nodes to a self-managed cluster.
An alternative strategy is to run a separate etcd cluster outside of Kubernetes. In this model, your Nodes run all the control plane components except etcd. This allows you to scale etcd independently of your Nodes and provides increased redundancy during a failure. Losing a Kubernetes Node no longer affects etcd so there’s less risk of consistency errors and no need to elect a new leader.
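With Kubeadm, the external etcd topology described above is configured by pointing the ClusterConfiguration at the separate etcd cluster. The endpoints and certificate paths below are illustrative:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
etcd:
  external:
    endpoints:                   # members of the separate etcd cluster
      - https://10.0.0.10:2379
      - https://10.0.0.11:2379
      - https://10.0.0.12:2379
    caFile: /etc/etcd/ca.crt     # CA used to verify etcd's certificates
    certFile: /etc/etcd/client.crt
    keyFile: /etc/etcd/client.key
```

Each API server instance then connects to the external etcd cluster over TLS instead of a local etcd member.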
A high availability control plane is essential for Kubernetes clusters operating in production environments. It reduces the risk of a control plane failure preventing cluster access or stopping workloads from being scheduled. Many managed Kubernetes services provide HA control plane options, sometimes for an increased cost.
The control plane runs your Kubernetes cluster, so it’s crucial that it’s correctly configured. Here are some best practices that help improve security and reliability:
- Keep your Kubernetes control plane updated – Keeping your control plane updated with new Kubernetes releases ensures you’re running the latest security patches and bug fixes, providing increased protection for your workloads.
- Ensure RBAC is enabled – Kubernetes RBAC rules let you control which actions and resources are available to individual users. Correct use of RBAC is essential to support a strong Kubernetes security posture. It’s only available when activated in the API server. Popular Kubernetes distributions turn RBAC on by default, but you should make sure you enable it when you’re maintaining custom environments with Kubeadm.
- Avoid publicly exposing the Kubernetes API server – Whenever possible, avoid publicly exposing the Kubernetes API Server. Using a private network to access your cluster’s API provides protection against zero-day vulnerabilities discovered in the API server’s authentication layer.
- Configure the control plane for high availability (HA) – Having a highly available control plane makes your cluster more fault tolerant. Many Kubernetes distributions default to running all the control plane components on a single Node, causing a loss of operations if that Node goes down.
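To illustrate the RBAC point above: a minimal Role and RoleBinding granting a user read-only access to Pods in one namespace might look like this (the namespace and user name are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo                # scope: one namespace only
  name: pod-reader
rules:
  - apiGroups: [""]              # "" = the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # read-only operations
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: demo
  name: read-pods
subjects:
  - kind: User
    name: jane                   # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

The API server evaluates these rules on every request, so the user can list and watch Pods in the demo namespace but can’t modify them or touch other resources.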
You can find more tips for protecting your control plane in our Kubernetes security guide.
If you need assistance managing your Kubernetes projects, look at Spacelift. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change.
You can also use Spacelift to mix and match Terraform, Pulumi, AWS CloudFormation, and Kubernetes Stacks and have them talk to one another.
To take this one step further, you could add custom policies to reinforce the security and reliability of your configurations and deployments. Spacelift provides different types of policies and workflows that are easily customizable to fit every use case. For instance, you could add plan policies to restrict or warn about security or compliance violations or approval policies to add an approval step during deployments.
You can try Spacelift for free by creating a trial account or booking a demo with one of our engineers.
The Kubernetes control plane is the set of components that manage a Kubernetes cluster and store its state. These components include the Kubernetes API Server, Cloud Controller Manager, Scheduler, and etcd datastore, working alongside the Kubelet process that runs on each Node.
The control plane orchestrates cluster-level operations by watching for events and taking action in response. You won’t often need to engage directly with control plane components, but they’ll be working behind the scenes each time you interact with your cluster.
Using a managed Kubernetes service removes the complexity of maintaining your cluster’s control plane. Amazon EKS, Google GKE, Azure AKS, and other options configure the control plane for you so you can concentrate on your workloads. Cloud services also let you automate cluster provisioning so developers can access Kubernetes on-demand, using an IaC platform like Spacelift.
Manage Kubernetes Easier and Faster
Spacelift allows you to automate, audit, secure, and continuously deliver your infrastructure. It helps overcome common state management issues and adds several must-have features for infrastructure management.