Kubernetes is the most popular orchestrator for deploying and scaling containerized applications in production. While Kubernetes makes it easy to start new workloads, this convenience comes at a cost: the system isn't secure by default, so your clusters and users could be at risk.
Container attacks are on the rise, and insecure Kubernetes installations attract attackers. Fortunately, you can protect your clusters by adhering to a few best practices. If you consciously implement protections around your deployments, you can run containers in production without security issues.
Kubernetes security refers to all the aspects you should consider when creating and upgrading your cluster and all the security mechanisms you should implement as part of your K8s resources.
Securing a Kubernetes environment means protecting critical components like the API server, kubelet, and etcd, enforcing strict authentication and authorization controls, and implementing network policies to contain potential breaches.
Core security measures include role-based access control (RBAC) to regulate permissions, pod security policies (or their replacements, like admission controllers) to prevent risky container behaviors, network policies to manage inter-service communication, and secrets management to protect sensitive credentials.
Common Kubernetes vulnerabilities
Based on the components Kubernetes security incorporates, we can identify vulnerabilities that arise from poor implementation of these security features:
- An improper RBAC configuration can lead to excessive permissions, allowing unauthorized users to access sensitive data.
- If you don't implement pod security, your containers may run as root, increasing the risk of privilege escalation. Omitting resource limits also exposes you to denial of service.
- Without network policies, all pods can communicate with each other, making it easy for an attacker to access many services from your clusters.
- Vulnerable container images introduce known issues inside your clusters.
- Storing secrets in ConfigMaps exposes them to unauthorized users.
Why is Kubernetes security important?
Kubernetes security is important because it keeps your cluster up and running, meaning that your applications can run safely without unexpected downtime or unauthorized access. A compromised cluster can lead to service disruptions, data exposure, compliance violations, and even resource hijacking.
These steps help harden your environment to minimize your attack surface and defend against incoming threats. Implementing them all will give you the greatest protection by restricting network activity, encrypting data at rest, and preventing vulnerable workloads from reaching your cluster.
Role-Based Access Control (RBAC) is a built-in Kubernetes feature. It lets you control what individual users and service accounts can do by assigning them one or more roles. Each role allows a combination of actions, such as creating and listing Pods but not deleting them.
You should use RBAC to assign appropriate roles to each user and service account that interacts with your cluster. Developers may need fewer roles than operators and administrators, while CI/CD systems and Pod service accounts can be granted the minimum permissions needed to run their jobs.
RBAC protects your cluster if credentials are lost or stolen. An attacker who acquires a token for an account will be restricted to the roles you’ve specifically assigned.
Roles must be as granular as possible to have the greatest security effect. Over-privileged roles, configured with too many permissions, are a risk because they grant attackers extra capabilities without providing any benefit to the legitimate user.
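As an illustration, a least-privilege setup pairs a narrowly scoped Role with a RoleBinding. This is a sketch, not a prescription — the namespace, role name, and user below are hypothetical:

```yaml
# Hypothetical Role: read-only access to Pods in a single namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo-namespace
  name: pod-reader
rules:
  - apiGroups: [""]          # "" refers to the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]   # no create, update, or delete
---
# Bind the Role to one user; they gain nothing beyond these verbs
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo-namespace
subjects:
  - kind: User
    name: jane@example.com   # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

If this user's token were stolen, the attacker could list Pods in one namespace but could not modify workloads or read Secrets.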
The Kubernetes control plane is responsible for managing all cluster-level operations. It exposes the API server, schedules new Pods onto Nodes, and stores the system’s current state. Breaching the control plane could give attackers control of your cluster.
Implementing the following strategies will help lock down the control plane and limit the effects of any compromise that occurs:
- Restrict access to etcd – Kubernetes uses etcd to store your cluster's data. This includes credentials, certificates, and the values of ConfigMaps and Secrets you create. The central positioning of etcd makes it an attractive target for attackers. You should isolate access to it behind a firewall that only your Kubernetes components can penetrate. You can do this by running etcd on a dedicated Node and using a network policy engine like Calico to enforce traffic rules.
- Enable etcd encryption – Data within etcd is not encrypted by default. This option can be turned on by specifying an encryption provider when you start the Kubernetes API server. If you're using a managed Kubernetes service in the cloud, confirm with your provider whether you can enable encryption, if it is not already active. Encryption will help protect the credentials, secrets, and other sensitive information within your cluster if the control plane is successfully compromised.
- Set up external API server authentication – The Kubernetes API server is usually configured with simple certificate-based authentication, which makes it challenging to configure and maintain large user cohorts. Integrating Kubernetes with your existing OAuth or LDAP provider tightens security by separating user management from the control plane itself. You can use your provider's existing controls to block malicious authentication attempts and enforce login policies such as multi-factor authentication.
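To sketch what etcd encryption looks like, here is a minimal EncryptionConfiguration that encrypts Secrets with an AES-CBC provider. The key is a placeholder you would generate yourself, and the file's path is passed to the API server via its --encryption-provider-config flag:

```yaml
# Sketch of an encryption config for the Kubernetes API server.
# The secret value below is a placeholder, not a real key:
# generate one with: head -c 32 /dev/urandom | base64
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets            # encrypt Secret objects at rest in etcd
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded-32-byte-key>   # placeholder
      - identity: {}       # fallback so existing plaintext data stays readable
```

After enabling this, existing Secrets remain unencrypted until rewritten; rewriting them all (for example with kubectl get secrets --all-namespaces -o json | kubectl replace -f -) applies the new provider.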
Anonymous Kubelet authentication should be disabled too. This will block requests to Kubelet from sources other than the Kubernetes API server instance to which it's connected. Set the --anonymous-auth=false flag when you start Kubelet if you're maintaining your own cluster.
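If you manage Kubelet through a configuration file rather than command-line flags, the equivalent settings look roughly like this excerpt of a KubeletConfiguration:

```yaml
# Excerpt from a kubelet config file (passed via --config);
# disables anonymous access and delegates authz to the API server
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false       # reject unauthenticated requests
  webhook:
    enabled: true        # authenticate bearer tokens via the API server
authorization:
  mode: Webhook          # authorize requests via SubjectAccessReview
```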
Remember your Nodes when you’re securing your environment. In a worst-case scenario, basic misconfiguration at the Node level could compromise your entire cluster. Gaining access to a Node grants attackers privileged access to the Kubernetes API server through abuse of the Kubelet worker process that all Nodes run.
Securing Nodes used for Kubernetes is no different from protecting any other production server. You should monitor system logs regularly and update the OS with new security patches, kernel revisions, and CPU microcode packages as they become available.
You should dedicate your Nodes to Kubernetes. Avoid running other workloads directly on a Node, particularly network-exposed software, which could give attackers a foothold. Lock down external access to sensitive protocols such as SSH.
Kubernetes defaults to allowing all Pods to communicate freely with each other. Compromising one Pod could let bad actors inspect its surroundings and then move laterally into other workloads. For example, breaching the Pod that serves your website allows an attacker to direct traffic straight to your database Pods.
Kubernetes Network policies defend against this risk by giving you precise control over the situations when Pods are allowed to communicate. You can specify at the Pod level whether Ingress and Egress are allowed, based on the other Pod’s identity, namespace, and IP address range. This lets you prevent access to sensitive services from containers that shouldn’t need to reach them.
Network policies example
Network policies are a Kubernetes resource type that you can apply to your cluster using YAML files. Here's a simple example policy that targets Pods with an app-component: database label:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: database-access
spec:
  podSelector:
    matchLabels:
      app-component: database
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app-component: api
```
This policy declares that incoming traffic to your database Pods can only originate from other Pods in the same namespace with the app-component: api label. Nefarious requests made from your frontend web server's Pod, labeled app-component: frontend, will be rejected at the network level.
It’s possible to set up a default namespace-level network policy to guard against Pods being accidentally omitted from your rules. Using an empty podSelector field will apply the policy to every Pod in the namespace:
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: demo-namespace
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```
This policy blocks both Ingress and Egress traffic from the namespace’s Pods, unless a more specific rule overrides it. You can block incoming or outbound traffic without affecting the other by changing the types listed under spec.policyTypes.
Kubernetes has several other Pod-level capabilities that protect your cluster and applications from vulnerabilities in each other.
All Pods should be assigned a security context that defines their privileges. You can use this mechanism to require that containers run with restricted Linux capabilities, avoid the use of HostPorts, and run with AppArmor, SELinux, and Seccomp enabled, among other controls.
The security context can also define the user and group to run containers as:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
  containers:
    - name: nginx
      image: nginx:latest
      securityContext:
        # these two fields are container-level only
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
```
This security context makes containers in the Pod run as the user with UID 1000 while preventing privilege escalation. The runAsNonRoot: true declaration ensures that the container will run as a non-root user, while runAsUser: 1000 requires it to run with UID 1000. You can omit the UID and use runAsNonRoot alone in situations where it doesn't matter which user runs the container, provided it's not root.

The security context shown above also stipulates that containers will run with a read-only root filesystem. Setting readOnlyRootFilesystem to true prevents workloads inside the container from writing to the container filesystem. It limits what attackers can achieve if the container is compromised, as they'll be unable to persist malicious binaries or tamper with existing files.
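When an application genuinely needs a few writable paths, a common pattern is to keep the root filesystem read-only and mount an emptyDir volume at those paths. A sketch (in practice nginx also needs writable cache and PID directories, which would be handled the same way):

```yaml
# Read-only root filesystem with a writable scratch mount at /tmp
apiVersion: v1
kind: Pod
metadata:
  name: nginx-readonly
spec:
  containers:
    - name: nginx
      image: nginx:latest
      securityContext:
        readOnlyRootFilesystem: true
      volumeMounts:
        - name: tmp
          mountPath: /tmp    # only this path is writable
  volumes:
    - name: tmp
      emptyDir: {}           # ephemeral, wiped when the Pod is deleted
```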
Security contexts can also be set at the container level, as spec.containers[].securityContext, overriding any constraints set on their Pod. This allows you to further harden individual containers, or relax the rules for administrative workloads.
Pod Security admission rules, a replacement for the older PodSecurityPolicy system, allow you to enforce minimum security standards for the Pods in your cluster. This mechanism will reject any Pods that violate the configured Pod security standard, such as by omitting securityContext settings, binding HostPorts, or using HostPath volume mounts.
Policies can be enabled at the namespace or cluster-level. It’s good practice to use this system across your clusters. It guarantees that Pods with potentially dangerous weaknesses are prevented from running until you address their policy issues. If you need to run a Pod with elevated capabilities, you can use the Privileged profile to make your intentions explicit.
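Pod Security admission is typically configured with namespace labels. As a sketch, the following enforces the restricted Pod Security Standard on a hypothetical namespace:

```yaml
# Enforce the "restricted" standard: Pods that violate it are rejected.
# The warn label surfaces violations without blocking, useful for rollout.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/enforce-version: latest
    pod-security.kubernetes.io/warn: restricted
```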
Kubernetes security isn’t all about your cluster. Applications need to be secure before they’re deployed, which means taking some basic steps to protect your container images:
- Run automated security scanning tools to detect vulnerabilities in your code.
- Use a hardened base image and layer in your source.
- Scan your built image with an analyzer like Clair or Trivy to identify outdated OS packages and known CVEs. If issues are present, rebuild the image to incorporate mitigations.
In security-critical situations, consider starting from scratch so you can be certain of what’s in your containers. This lets you assemble your entire filesystem without relying on an upstream image where threats could be lurking.
It’s also vital to use the security mechanisms Kubernetes provides. Sensitive data such as database passwords, API keys, and certificates shouldn’t reside in plain-text ConfigMaps or be hardcoded into container filesystems, for example.
Kubernetes Secrets let you store these values securely, independently of your Pods. However, they don’t encrypt values by default, so it’s vital to enable etcd encryption wherever they’re used. Secrets can also integrate with external datastores to save credentials outside your cluster.
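For illustration, a Secret can be declared with stringData (plain text that Kubernetes base64-encodes on write) and injected into a container as an environment variable. The names, image, and value below are hypothetical:

```yaml
# Hypothetical Secret and a Pod consuming one of its keys
apiVersion: v1
kind: Secret
metadata:
  name: database-credentials
type: Opaque
stringData:
  DB_PASSWORD: example-only-change-me   # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: example.com/api:1.0        # hypothetical image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: database-credentials
              key: DB_PASSWORD
```

Mounting the Secret as a volume instead of an environment variable avoids leaking values through process listings or crash dumps.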
You can run any container image with K8s, but this power can also introduce vulnerabilities. Using a security vulnerability scanning tool allows you to easily detect known vulnerabilities and security issues before they are deployed.
Admission controllers can restrict deployments to images that have passed a scan with tools such as Trivy, Clair, or Aqua Security.
Here is a simple example of a GitHub Actions pipeline that scans a Dockerfile inside your repository:
```yaml
name: Docker Image Scan with Trivy

on:
  push:
    paths:
      - 'Dockerfile'
  pull_request:
    paths:
      - 'Dockerfile'

jobs:
  scan:
    name: Scan Docker Image
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Build image
        run: |
          docker build -t devimage:latest .

      - name: Install Trivy
        run: |
          sudo apt-get install -y wget apt-transport-https gnupg lsb-release
          wget -qO - https://aquasecurity.github.io/trivy-repo/deb/public.key | sudo apt-key add -
          echo deb https://aquasecurity.github.io/trivy-repo/deb $(lsb_release -sc) main | sudo tee -a /etc/apt/sources.list.d/trivy.list
          sudo apt-get update
          sudo apt-get install -y trivy

      - name: Run Trivy vulnerability scanner
        run: |
          echo "Scanning docker-image"
          trivy image --exit-code 0 --severity CRITICAL,HIGH,MEDIUM devimage:latest
```
If you are using secrets — K8s’ built-in mechanism for secrets management — remember they are only base64 encoded and stored unencrypted in etcd. By leveraging HashiCorp Vault, OpenBao, or a cloud provider-specific secrets management tool such as AWS Secrets Manager, you can implement encryption at rest.
There are many integrations available for K8s, so incorporating a tool into your workflow should be relatively straightforward.
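As one example of such an integration, the External Secrets Operator (installed separately) can sync a value from Vault into a native Kubernetes Secret. The SecretStore name and Vault path below are hypothetical:

```yaml
# Sketch assuming the External Secrets Operator is installed and a
# SecretStore named "vault-backend" is already configured to reach Vault
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: database-credentials
spec:
  refreshInterval: 1h            # re-sync from Vault hourly
  secretStoreRef:
    name: vault-backend          # hypothetical SecretStore
    kind: SecretStore
  target:
    name: database-credentials   # Kubernetes Secret to create/update
  data:
    - secretKey: DB_PASSWORD
      remoteRef:
        key: secret/data/database   # hypothetical Vault path
        property: password
```

With this pattern, the credential's source of truth stays in Vault; rotating it there propagates to the cluster on the next refresh.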
Comprehensive logging and monitoring will reduce the time needed to detect and resolve security incidents. Using the ELK stack alongside Prometheus and Grafana will help you track cluster metrics, detect anomalies, and alert you to suspicious activity. It also simplifies audits, letting you identify what went wrong, who made the change, and when it happened.
All the security mechanisms we have mentioned should be implemented in different areas throughout your Kubernetes environment.
- Node security – Kubernetes clusters use nodes behind the scenes to handle your workloads. These nodes should be protected by keeping the operating system up to date, hardening it, and restricting SSH access. Applying security patches is also a good idea for maintaining node security.
- Kubernetes API security – The API server is one of the most important components of your Kubernetes cluster. You should ensure that RBAC is enabled and strong authentication mechanisms are in place.
- Kubernetes network security – You should always leverage network policies to control communication between pods and use service mesh solutions to encrypt traffic between services. Egress controls are always a good idea for restricting outbound traffic.
- Kubernetes pod security – Pod security starts with running your containers as non-root users to avoid privilege escalation. By setting resource limits, you can also prevent denial of service (DoS).
- Kubernetes data security – Leveraging external solutions such as Vault and OpenBao to encrypt secrets at rest will reduce the risk of secrets being intercepted, and implementing proper backup and restore procedures will ensure data integrity.
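The resource-limiting point above can be sketched with per-container requests and limits; the image name is hypothetical:

```yaml
# Requests reserve capacity for scheduling; limits cap consumption,
# shrinking the blast radius of a runaway or malicious workload
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  containers:
    - name: api
      image: example.com/api:1.0   # hypothetical image
      resources:
        requests:
          cpu: 250m
          memory: 256Mi
        limits:
          cpu: 500m      # throttled above this
          memory: 512Mi  # OOM-killed above this
```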
Using these best practices helps you adhere to the 4C model of cloud-native security. This simple mental picture sets out four security layers to protect yourself, each of which is a common word prefixed with “C.”
Here are the four Cs of Kubernetes security:
- Cloud – Vulnerabilities in your cloud infrastructure — such as not enabling 2FA for your Azure, AWS, or Google Cloud accounts — enable attackers to access all your resources. Protect yourself by regularly auditing your environment and choosing a reputable provider with a good compliance record.
- Cluster – Apply recommendations such as etcd encryption, RBAC controls, and Node isolation to protect your cluster from attack. Cluster-level compromise will expose all your applications and their data.
- Container – Individual containers can be strengthened by using hardened base images, scanning for vulnerabilities, and avoiding use of privileged capabilities. Malware inside a single container could break out to access other resources and the host Node.
- Code – Code inside containers should be audited, scanned, and probed as it’s created to identify any weaknesses. Don’t underestimate attackers: While Kubernetes provides strong container isolation when configured correctly, weaknesses in your code could let intruders exploit zero-day vulnerabilities to escape the container and control your cluster.
Applying security protections across all four segments will give you the greatest protection. Focusing on just a few areas could create weaknesses that let attackers move down the 4C pyramid and then laterally across your resources.
Read more about container security best practices and solutions.
Here are Kubernetes security tools in various categories:
| Category | Tools | Description |
| --- | --- | --- |
| Vulnerability scanning tools | Trivy, Clair, Anchore | Detect vulnerabilities in container images. |
| Compliance scanners | Kube-bench, Kubesec | Verify Kubernetes (K8s) configurations against security best practices. |
| Network security & service mesh | Calico, Weave, Istio | Enhance network security and manage communication between microservices. |
| Secrets management | Vault, OpenBao, AWS Secrets Manager | Manage secrets and encrypt them at rest. |
| Policy management | Open Policy Agent (OPA), Kyverno | Enforce and manage security policies for Kubernetes clusters. |
| Runtime security | Falco, Sysdig | Detect unexpected application behavior and implement runtime protection. |
Spacelift is an infrastructure orchestration platform that helps you with your Terraform, OpenTofu, Terragrunt, Pulumi, CloudFormation, Ansible, and Kubernetes workflows. With Spacelift you can easily use policy as code to restrict certain resources or resource parameters, require multiple approvals for your runs, control what happens when a PR is opened or merged, and even where to send notifications.
The stack dependencies feature makes managing complex workflows effortless. You can create dependencies between multiple workflows and share outputs between them, ensuring smooth interactions between your favorite IaC tool and Kubernetes. For example, provision your K8s cluster using IaC, then create dependencies for your K8s workflows. With shared outputs, you can extract the K8s configuration from your IaC tool and deploy applications without hassle. And since dependencies can be nested at any level, even the most intricate workflows become a breeze to implement.
Spacelift also empowers teams with self-service infrastructure. Want to create templates for K8s resources? Users just fill out a form, and Spacelift handles the deployment with no manual intervention required.
As an API-first company, Spacelift provides a Terraform provider and a K8s operator, enabling you to manage your entire infrastructure directly from Kubernetes.
Integrating with Vault? Spacelift simplifies authentication through OIDC, so you only need to configure it once — no need to set up integrations for each K8s cluster individually. Every cluster can immediately benefit from a unified, secure connection.
And that’s just the beginning. There’s a lot more Spacelift can do for your K8s workflows. Book a demo to understand all the capabilities Spacelift offers.
Kubernetes makes it easy to start and run containers, but using plain images in a fresh cluster can be a security risk. Your workloads and clusters need to be hardened to make them safe for critical production environments. While it can be tempting to skip these steps, you’ll be vulnerable to exploitation if bad actors find your cluster.
The steps we’ve shared above will help you use Kubernetes securely by following the 4C model of Cloud, Cluster, Container, and Code. Attackers can manipulate weaknesses in any of these areas to cause a security incident.
Although the techniques listed here are good starting points, this is not an exhaustive list of measures. You can uncover additional improvement opportunities by using automated tools like Kubescape. This policy-based cluster scanner detects Kubernetes misconfigurations, security vulnerabilities, and container image risks in a single scan of your cluster.
There’s growing industry interest in strengthening Kubernetes deployments, including from U.S. government bodies. The NSA/CISA Kubernetes hardening guide and the Center for Internet Security’s Kubernetes security benchmark are two references you can use to find threats in security-critical situations.
Check out the Spacelift blog for more posts about Kubernetes security, such as how to use secrets to store sensitive data in your cluster.
The most flexible CI/CD automation tool
Spacelift is an alternative to using homegrown solutions on top of a generic CI. It helps overcome common state management issues and adds several must-have capabilities for infrastructure management.