Kubernetes is the most popular container orchestration system for deploying, managing, and scaling containers in production. It’s successful because it includes almost every conceivable feature for operating containers effectively. However, its huge range of capabilities and components also introduces a steep learning curve.
Nonetheless, mastering Kubernetes can bring immense rewards. Organizations still struggle to hire and retain skilled cluster operators, so trained candidates get access to interesting and lucrative positions. Before you can land your new job, you’ll need to prove your expertise in an interview.
This guide is designed to prepare you for that final stage in your Kubernetes learning journey. We’ll share 33 sample Kubernetes interview questions and answers that cover the whole spectrum of K8s skills. The questions will test your knowledge and help you identify key topics to revise. Let’s dive straight in.
1. What is Kubernetes?
Kubernetes is an open-source container orchestration system that automates container deployment, scaling, and management processes. Using Kubernetes allows you to easily distribute container replicas over several physical hosts, called Nodes, to achieve high availability and boost performance.
2. What is K8s?
K8s is shorthand for Kubernetes, an open-source platform that automates running containerized applications. The name is a numeronym: the 8 stands in for the eight letters between the “K” and the “s.”
3. What is a Kubernetes Node?
Kubernetes Nodes are the compute hosts that run your containers. They’re coordinated by a control plane that manages your Nodes as a cluster. The control plane stores cluster-level data, schedules containers onto Nodes, and takes action to reschedule workloads if a Node fails. The Node architecture makes Kubernetes clusters highly scalable because you can continually add new Nodes to increase deployment capacity.
4. What are Kubernetes Pods?
Pods are the smallest deployable unit in Kubernetes. They group one or more containers, such as an application instance and a logging sidecar. The containers within a Pod are managed as a single resource, always run on the same Node, and share a network namespace. Pods are automatically replaced with new ones if they fail, helping ensure reliable operations.
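For illustration, here is a minimal single-container Pod manifest (the names and image are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo
spec:
  containers:
    - name: app
      image: nginx:1.27   # placeholder image; any container image works here
```

In practice, you rarely create bare Pods directly; higher-level objects like Deployments manage them for you.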
5. What is a Namespace in Kubernetes?
Kubernetes Namespaces group a collection of resources, such as Pods or StatefulSets, inside your Kubernetes cluster. They’re used to implement Kubernetes multi-tenancy by logically isolating resources belonging to different apps, teams, or environments.
For instance, you could create `staging` and `production` Namespaces to deploy two instances of an application, each with its own customizations. Namespaces also integrate with the Kubernetes RBAC system, letting you precisely control who can access each Namespace.
6. What is the difference between Kubernetes Deployments and StatefulSets?
Deployments and StatefulSets are both high-level objects that manage a set of Pod replicas. Deployments are designed for stateless applications, such as frontend app deployments, where all the Pod replicas are identical to each other.
Conversely, StatefulSets enable you to run stateful applications like databases and file servers, where the identity of each replica is crucial. Pods in a StatefulSet have persistent identifiers, are started and stopped in sequential order, and are allocated unique Persistent Volume Claims. This allows you to ensure pod-0 always runs the primary replica in a database service, for example.
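As a minimal sketch, assuming a hypothetical `db` application and a pre-created headless Service named `db`, a StatefulSet with per-replica storage might look like this:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db              # headless Service giving each Pod a stable DNS name
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: demo-only-password   # demo value; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:        # each replica gets its own PVC (data-db-0, data-db-1, ...)
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

The Pods are created in order as db-0, db-1, and db-2, and each keeps its own volume across restarts.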
7. How can you roll back a failed Kubernetes Deployment?
You can easily roll back a failed rollout of a Kubernetes Deployment, StatefulSet, or DaemonSet using Kubectl. The `kubectl rollout undo` command will automatically revert the resource to its previous state.
For instance, running `kubectl rollout undo deployment/demo-deployment` restores the previous configuration of the `demo-deployment` Deployment.
However, when declaratively managing workloads using Kubernetes YAML files, “rolling forward” may be preferable to rolling back. This is where you fix the issue in your manifest files first, and then reapply them to your Kubernetes cluster as a new deployment using `kubectl apply`.
8. How would you access the logs from a Kubernetes Deployment?
Kubernetes Deployment logs can be accessed using Kubectl. The `kubectl logs` command allows you to directly retrieve the logs from objects such as Pods and Deployments. For example, `kubectl logs pod/demo-pod` will display the logs from the Pod called `demo-pod`.
You can optionally livestream new logs to your terminal window by including the `--follow` flag in your command. As an alternative to Kubectl, log collection tools like Fluentd and Logstash allow you to centrally monitor the logs from all the deployments in your cluster.
9. How would you troubleshoot a Kubernetes Pod that keeps restarting?
Pods that are stuck restarting will appear in the `kubectl get pods` command’s output with a `RESTARTS` count that keeps increasing. You can troubleshoot the issue by using `kubectl describe pod <pod-name>` to view the events associated with the Pod.
Accessing the Pod’s logs using `kubectl logs pod/<pod-name>` may also reveal useful information if the Pod is restarting due to a problem with the containerized app.
Common causes of Pod restart loops include incorrect container image paths, failing liveness probes, and out-of-memory scenarios, so it’s often helpful to begin by checking for these issues.
10. Can you describe the Kubernetes architecture?
Kubernetes architecture is a client-server model consisting of a control plane and a set of worker nodes. The control plane manages the Kubernetes cluster, while worker nodes run the actual application workloads.
The control plane includes key components:
- API Server: The front end for the Kubernetes control plane, handling all REST requests
- Controller Manager: Maintains the desired state by watching resources and triggering actions
- Scheduler: Assigns pods to appropriate nodes based on resource availability and constraints
- etcd: A consistent, distributed key-value store for all cluster data
Each worker node runs:
- kubelet: Agent that ensures containers are running as expected on the node
- kube-proxy: Manages networking and forwards traffic to the correct pod
- Container runtime: Software like containerd or CRI-O that runs containers
11. Can you list a few important Kubectl commands?
View cluster resources:
- `kubectl get pods` – Lists all pods in the current namespace
- `kubectl get nodes` – Shows the status of cluster nodes
- `kubectl get services` – Displays all services deployed
Inspect and debug:
- `kubectl describe pod <pod-name>` – Shows detailed information and events for a specific pod
- `kubectl logs <pod-name>` – Outputs logs from a container
- `kubectl exec -it <pod-name> -- /bin/sh` – Opens an interactive shell in a running container
Apply and manage configurations:
- `kubectl apply -f <file.yaml>` – Creates or updates resources defined in a manifest file
- `kubectl delete -f <file.yaml>` – Removes the specified resources
Namespace and context management:
- `kubectl config use-context <context>` – Switches between cluster contexts
- `kubectl get pods --all-namespaces` – Lists pods across all namespaces
See also: Kubectl Cheat Sheet – 15 Kubernetes Commands & Objects
12. How are Kubernetes and Docker related?
Docker packages applications and their dependencies into isolated units called containers, ensuring consistency across environments. Kubernetes takes these containers and handles deployment, scaling, load balancing, and fault tolerance across a distributed infrastructure.
Whereas Docker can run containers on its own, Kubernetes requires a separate container runtime to execute them. Docker was the default runtime until Kubernetes removed its dockershim integration in v1.24, after which containerd became the common default.
In this next set of ten questions, we’re targeting some of the more specific queries your interviewer could ask. These queries cover everyday Kubernetes usage and management scenarios.
13. How do Kubernetes Deployments and Services enable high availability for your workloads?
Kubernetes Deployments and Services work together to provide high availability by ensuring consistent Pod replication and automatic traffic distribution across those replicas.
A Deployment maintains the desired number of identical Pods, automatically replacing failed ones and spreading them across available Nodes to reduce the risk of a single point of failure. Services expose these Pods under a stable network name and route incoming traffic to healthy replicas using internal load balancing.
This abstraction allows the backend Pods to scale or move across Nodes without requiring changes to how clients connect. Even as infrastructure scales or Pods are rescheduled, the Service keeps routing traffic reliably.
14. How do Kubernetes Services work?
Services are the main component of Kubernetes networking. They provide stable network identifiers that can be resolved through the cluster DNS system.
Requests handled by Services will be directed to any available Pod replica that meets the Service’s selection criteria, such as having a specific label attached.
Several types of Service are supported for different use cases: for example, ClusterIP is the primary service type used for cluster-internal networking, while LoadBalancer makes Pods accessible from outside the Kubernetes cluster.
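For example, a minimal ClusterIP Service sketch, assuming backend Pods labeled `app: demo` that listen on port 8080:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-service
spec:
  type: ClusterIP          # the default type; shown here for clarity
  selector:
    app: demo              # traffic is routed to any ready Pod with this label
  ports:
    - port: 80             # port the Service exposes inside the cluster
      targetPort: 8080     # port the container listens on
```

Changing `type` to `LoadBalancer` would additionally provision an external load balancer through your cloud provider integration.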
15. What is the difference between Kubernetes resource requests and limits?
Resource requests are used during Pod scheduling. Kubernetes ensures Pods are only scheduled to Nodes that can supply the requested amount of each resource, without impacting other workloads. Resource limits cap the maximum amount of a resource that a running Pod can use.
Breaching a CPU limit can cause Pods to be throttled, for example, while breaching a memory limit makes Pods eligible for termination.
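Requests and limits are set per container in the Pod spec; the values below are arbitrary illustrations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-demo
spec:
  containers:
    - name: app
      image: nginx:1.27
      resources:
        requests:
          cpu: 250m        # the scheduler only places the Pod where this is available
          memory: 128Mi
        limits:
          cpu: 500m        # CPU usage above this is throttled
          memory: 256Mi    # memory usage above this makes the Pod OOM-kill eligible
```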
16. What is the difference between Kubernetes ConfigMaps and Secrets?
Kubernetes ConfigMaps and Secrets both let you supply key-value configuration data to your Pods. ConfigMaps are designed to store plain-text values that require no special treatment. Secrets are a dedicated solution for sensitive values such as passwords, API keys, and certificates that must be kept secure. They can be encrypted when stored at rest, reducing the risk of exposure.
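A minimal sketch of both objects, with hypothetical keys and values:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: info          # plain-text configuration value
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:                # convenience field; values are stored base64-encoded
  API_KEY: not-a-real-key  # sensitive value; keep out of version control
```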
17. What is Horizontal Pod Autoscaler (HPA), why is it used, and how does it work?
Horizontal Pod Autoscaler (HPA) in Kubernetes automatically scales the number of pods in a deployment or replica set based on observed metrics, such as CPU or custom metrics. It ensures that applications can handle varying loads efficiently by increasing or decreasing pod replicas.
HPA is used to maintain application performance and optimize resource usage in dynamic environments. It prevents under-provisioning during traffic spikes and over-provisioning during low-demand periods, which is critical for cost control and system stability.
HPA works by periodically querying the Kubernetes Metrics API (typically every 15 seconds) to evaluate resource usage. It compares current usage against target thresholds and adjusts the number of pod replicas accordingly. For example, if the average CPU utilization exceeds a defined limit (e.g., 80%), HPA increases pod count proportionally. It uses a control loop and scaling algorithm to make smooth, proportional adjustments.
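As an illustration, a minimal `autoscaling/v2` manifest targeting a hypothetical `demo-deployment`, scaling between 2 and 10 replicas at 80% average CPU utilization:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```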
18. What is declarative configuration, and how is it used in Kubernetes?
Declarative configuration is a method of software operations where you define what you want to be deployed, rather than the process of how the deployment happens. This concept plays a key role in the design of Kubernetes.
You can use YAML manifest files to declaratively configure objects in your cluster, such as stating you want to deploy three replicas of a Pod using the `nginx:latest` image.
After you apply the manifest, Kubernetes will automatically create the correct number of replicas in the desired configuration and ensure they remain running.
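A manifest matching that description could look like this (the Deployment name is arbitrary):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3              # the desired state: three identical Pods
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
```

Running `kubectl apply -f` on this file makes Kubernetes converge the cluster toward three running replicas, without you scripting the individual steps.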
19. What is a Kubernetes DaemonSet, and when should one be used?
DaemonSets are specialist objects that replicate a set of identical Pods across every Node in your Kubernetes cluster. They ensure all your Nodes are running a particular workload.
This is useful for services such as monitoring agents and log collectors, where data must be gathered from each Node to make your cluster fully observable.
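As a sketch, a DaemonSet for a log-collection agent (the image shown is just an example) looks much like a Deployment without a replica count, because one Pod runs per Node:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-collector
spec:
  selector:
    matchLabels:
      app: log-collector
  template:
    metadata:
      labels:
        app: log-collector
    spec:
      containers:
        - name: agent
          image: fluent/fluent-bit:3.0   # example log-collection agent image
```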
20. What is the importance of Kubernetes network policies?
Kubernetes Network Policies control how traffic flows between cluster Pods. They enforce which Pods are allowed to communicate with each other. This lets you prevent Pods from attempting to interact with neighboring workloads.
Network Policies ensure that if one Pod is compromised, attackers can’t send malicious traffic to other sensitive services running in your cluster. This makes them a key Kubernetes security control, as well as a crucial component in Kubernetes multi-tenancy implementations.
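For illustration, a minimal policy sketch (with hypothetical `app: api` and `app: frontend` labels) that only admits ingress traffic to the API Pods from frontend Pods:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: api             # the Pods this policy protects
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only these Pods may connect
```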
21. What is a Persistent Volume (PV) in Kubernetes?
A Persistent Volume (PV) in Kubernetes is a storage resource provisioned by an administrator or dynamically created through a StorageClass, used to persist data beyond the lifecycle of a Pod.
22. What is the difference between Kubernetes Persistent Volumes and Persistent Volume Claims?
Kubernetes Persistent Volumes (PVs) provide persistent storage to cluster Pods. They store data outside of Pods so it persists after Pods are restarted or replaced. Persistent Volume Claims (PVCs) represent a request by a Pod to use storage provided by a PV.
Therefore, PVs are the actual storage resources, whereas PVCs are Pod requests to use storage. PVCs are backed by PVs that can provide the PVC’s requested storage capacity and access mode, such as `ReadWriteOnce` for read-write mounting by a single Node, or `ReadWriteMany` for simultaneous mounting by many Nodes.
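A minimal PVC sketch requesting 5Gi of single-Node storage (the StorageClass name depends on your cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: standard   # cluster-specific; often triggers dynamic provisioning
```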
This final set of Kubernetes interview questions addresses some more advanced concepts. You can expect these topics to come up at the end of your interview, especially if you’re applying for a more senior role in Kubernetes operations.
23. What are Kubernetes Custom Resource Definitions (CRDs) and when are they used?
Kubernetes Custom Resource Definitions (CRDs) are custom object types added to your cluster. You can interact with CRDs in the same way as built-in objects, via the Kubernetes API and Kubectl.
For instance, a `PostgresDatabase` CRD could hold config details for deploying a Postgres instance, or a `SecurityPolicy` CRD might contain information relevant to a specific security service. CRDs extend Kubernetes while respecting its standard architecture, making it easier to implement custom automation for advanced use cases.
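A trimmed-down sketch of such a definition, using a hypothetical `example.com` API group and a deliberately tiny schema:

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: postgresdatabases.example.com   # must be <plural>.<group>
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: postgresdatabases
    singular: postgresdatabase
    kind: PostgresDatabase
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                storageGB:
                  type: integer
```

Once this is applied, `kubectl get postgresdatabases` works just like any built-in resource type.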
24. What is a Kubernetes Operator?
Kubernetes Operators are app-specific Kubernetes extensions that automate the process of running a specific service in your cluster. They provide CRDs and custom controllers that allow you to easily deploy the target app without manually configuring objects such as Pods, Services, StatefulSets, and Persistent Volumes.
For example, Spacelift’s Worker Pool operator lets you deploy a custom Spacelift worker pool in your cluster by creating `WorkerPool` objects. `WorkerPool` is a CRD backed by a controller; when you create a new `WorkerPool` in your cluster, the operator automatically creates the necessary objects to deploy a new pool instance.
25. What are some Kubernetes security best practices?
Key Kubernetes security best practices include strict access control, network segmentation, and workload hardening.
- Use Role-Based Access Control (RBAC): Grant users and services the minimum necessary permissions to ensure secure access. Avoid using cluster-admin unless absolutely required.
- Enable Audit Logging: Configure the audit log to monitor and trace access patterns, especially changes to sensitive resources.
- Restrict API Server Access: Limit external access to the Kubernetes API server using firewalls or private networking.
- Use Network Policies: Define `NetworkPolicy` objects to control traffic flow between pods, enforcing least-privilege communication.
- Run Containers as Non-Root: Avoid running containers with root privileges. Set `runAsUser` and `readOnlyRootFilesystem`, and drop unnecessary Linux capabilities. Use correct security context settings to enforce this (see the sketch after this list).
- Keep Kubernetes and Dependencies Updated: Regularly patch the control plane, kubelets, and third-party tools to address known vulnerabilities.
- Scan Your Cluster Regularly: Use security tools to detect vulnerabilities in workloads and infrastructure. See Kubernetes security tools for options.
- Use Pod Security Standards: Apply `PodSecurity` admission or tools like OPA Gatekeeper to enforce pod-level security controls.
- Limit Secret Exposure: Store secrets encrypted and only mount them where necessary. Use tools like Vault or native Kubernetes encryption at rest.
- Prevent Misconfigurations with Admission Controls: Use Validating Admission Policies or webhooks to block insecure resource definitions at creation.
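As a sketch of the non-root and filesystem hardening settings mentioned above (the image name is a placeholder, and the UID is arbitrary):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-pod
spec:
  securityContext:
    runAsNonRoot: true             # kubelet refuses to start containers running as root
    runAsUser: 10001
  containers:
    - name: app
      image: registry.example.com/app:1.0   # placeholder image
      securityContext:
        readOnlyRootFilesystem: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]            # drop all Linux capabilities
```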
26. How does Kubernetes interact with GitOps?
Kubernetes and GitOps are often used together, although Kubernetes isn’t specifically a GitOps tool. Nonetheless, with declarative configuration at its core, Kubernetes allows you to easily configure your apps using YAML manifests stored in a Git repository.
After committing changes to your manifests, you can then use a CI/CD pipeline to apply your new revisions to your cluster.
Tools like Argo CD and Flux CD make this even simpler by running an agent inside Kubernetes. The agent continually reconciles your Git repository state to the objects in your cluster.
27. How would you implement canary and blue-green deployments in Kubernetes?
Kubernetes doesn’t natively support canary or blue-green deployments, so extra tooling is required.
Argo Rollouts is one option: it provides a Kubernetes controller and a custom Rollout object that lets you easily configure canary and blue-green releases for a set of Pods.
Flux’s Flagger component is an alternative solution. Both tools also support progressive delivery strategies, allowing new deployments to be automatically promoted between rollout stages.
28. How does Kubernetes load balancing work with different service types?
Kubernetes automatically load balances traffic between the Pods selected by a service. In the case of ClusterIP services, only the cluster-internal networking layer is involved. Traffic to the Service is directed to any of the available Pods, on any available Node.
In comparison, LoadBalancer services also involve an external load balancer. This service type provisions a load balancer in your cloud account, using your cluster’s active cloud provider integration.
Traffic to the load balancer’s public IP address is then automatically routed into the cluster’s networking layer. The traffic’s onward routing to a matching Pod is handled internally, in the same way as a ClusterIP service.
29. How would you deploy and scale a database service in Kubernetes?
Deploying a database in Kubernetes will require a StatefulSet object, one or more Services, and Persistent Volumes.
The StatefulSet is important because it ensures the Pods running the database replicas have stable identities and are created in order. For example, it ensures that replica-0 is created first, so it can assume the role of the database primary. The other replicas will then start in order, each with its own Persistent Volumes for storage. This ensures each replica maintains its own copy of the data.
Services should then be created to route traffic to the replicas. For instance, a read-write Service may direct traffic to the primary replica (replica-0), whereas a read-only Service could load balance between all the available replicas.
Dedicated Kubernetes Operators, available from database vendors, offer a simpler experience by fully automating the deployment process using Kubernetes CRDs.
30. What is RBAC in Kubernetes, and why is it used?
RBAC (Role-Based Access Control) in Kubernetes is a mechanism for managing permissions within a cluster based on user roles. It controls who can perform specific actions on Kubernetes resources.
RBAC uses four key Kubernetes objects: Role, ClusterRole, RoleBinding, and ClusterRoleBinding. A Role defines a set of permissions within a namespace, while a ClusterRole defines them cluster-wide. These are linked to users or service accounts using RoleBinding (namespaced) or ClusterRoleBinding (cluster-wide). This structure allows precise, least-privilege access control.
RBAC is essential for securing Kubernetes environments by ensuring users and workloads only have the permissions they need.
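A minimal sketch: a namespaced Role that can read Pods, bound to a hypothetical user named jane:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: staging
rules:
  - apiGroups: [""]                # "" means the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: staging
subjects:
  - kind: User
    name: jane                     # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```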
31. How can you prevent workload disruptions during Kubernetes deployment updates?
Workload disruptions during updates can be minimized by configuring the Deployment’s rolling update strategy together with Pod Disruption Budgets (PDBs). The strategy’s `maxUnavailable` and `maxSurge` settings control how many replicas can be taken down or added during a rollout, while PDBs specify how many Pod replicas can be evicted at once during voluntary disruptions.
Pod disruptions may also occur due to maintenance operations such as upgrading Kubernetes or replacing a Node. You can mitigate the impacts of these events by ensuring you manually drain affected Nodes first. This enables Kubernetes to reschedule Pods onto other available Nodes gracefully.
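A minimal PDB sketch, assuming Pods labeled `app: demo`, that keeps at least two replicas available through voluntary disruptions such as Node drains:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: demo-pdb
spec:
  minAvailable: 2        # evictions are blocked if they would drop below this
  selector:
    matchLabels:
      app: demo
```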
32. How do you monitor a Kubernetes cluster?
The most common approach involves using Prometheus for collecting metrics and Grafana for visualization. Prometheus scrapes metrics from Kubernetes components like kubelet, API server, and containerized applications via exporters (e.g., node-exporter, kube-state-metrics). Grafana connects to Prometheus to display dashboards for cluster health, resource usage, and workload performance.
For logging, Fluent Bit or Fluentd collects logs from nodes and pods, forwarding them to a backend like Elasticsearch or Loki. These logs are then visualized in tools such as Kibana or Grafana Loki dashboards.
Alerting is typically handled by Alertmanager, integrated with Prometheus, enabling notifications based on metric thresholds or failures.
33. Can you explain the role of etcd in Kubernetes?
etcd is a distributed, consistent database based on the Raft consensus algorithm. It enables high availability and fault tolerance by replicating data across multiple nodes. Kubernetes components such as the API server interact with etcd to read and persist cluster state changes. For example, when a user deploys a new pod, the API server stores the pod specification in etcd, making it accessible to other control plane components.
Because etcd contains critical cluster data, it must be secured, backed up regularly, and monitored for performance and health to maintain Kubernetes reliability.
Preparing for a Kubernetes interview involves more than simply memorizing `kubectl` commands. Employers want engineers who understand the “why” behind the YAML.
Start by brushing up on core Kubernetes concepts (Pods, Deployments, Services, ConfigMaps) and how they fit together to keep workloads running smoothly. Be ready to explain how Kubernetes schedules, scales, and recovers workloads in real-world conditions.
Before your interview, review the common Kubernetes interview questions above and practice answering them out loud. Don’t just repeat definitions. Walk through your reasoning and decision-making. For example, if asked how you’d handle a failing pod, explain how you’d debug it, check logs, and adjust the deployment spec.
Next, spend some hands-on time in a real or local cluster using Minikube or Kind. Recreate common issues: a bad config, a broken service, or a scaling failure. Interviewers want to see that you can stay calm, diagnose fast, and think like an operator.
Finally, be prepared to discuss Kubernetes within the broader DevOps context — how it ties into CI/CD, Helm, and infrastructure-as-code tools like Spacelift. The best candidates demonstrate not only an understanding of how Kubernetes works, but also why it matters in modern delivery pipelines.
If you need help managing your Kubernetes projects, consider Spacelift. It brings a GitOps flow, so your Kubernetes Deployments stay synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change.
With Spacelift, you get:
- Policies to control what kind of resources engineers can create, what parameters they can have, how many approvals you need for a run, what kind of task you execute, what happens when a pull request is open, and where to send your notifications
- Stack dependencies to build multi-infrastructure automation workflows, combining Terraform with Kubernetes, Ansible, and other infrastructure-as-code (IaC) tools such as OpenTofu, Pulumi, and CloudFormation
- Self-service infrastructure via Blueprints, enabling your developers to do what matters – developing application code while not sacrificing control
- Creature comforts such as contexts (reusable containers for your environment variables, files, and hooks), and the ability to run arbitrary code
- Drift detection and optional remediation
If you want to learn more about Spacelift, create a free account today or book a demo with one of our engineers.
Kubernetes is the leading container orchestration system, but with its power comes a lot of complexity to learn. If you’re applying for Kubernetes roles, then you should prepare yourself for a wide-ranging interview that’s likely to touch on many different themes and scenarios.
From describing basic tasks like accessing container logs and configuring services, to explaining advanced concepts like the use of operators, GitOps, and custom resources, the 33 revision topics we’ve provided above should set you on your way to Kubernetes success.
However, don’t stress if you’ve realized you’re missing some areas: we’ve got dozens of detailed Kubernetes guides and tutorials here on the Spacelift blog, ready to help you grow your knowledge.
Manage Kubernetes easier and faster
Spacelift allows you to automate, audit, secure, and continuously deliver your infrastructure. It helps overcome common state management issues and adds several must-have features for infrastructure management.