How to Delete Pods from a Kubernetes Node with Examples

In this article, we will run through how to delete a pod from a Kubernetes (K8s) node, with examples using a cluster running on AKS (Azure Kubernetes Service).

Deleting pods from a node is commonly required when you are troubleshooting and need to remove pods from a particular node, when you want to manually scale down your cluster, or when a node needs to be cleared of pods completely ahead of maintenance.

Delete All the Pods From a Node

To delete all the pods from a particular node, first retrieve the names of the nodes in the cluster, and then the names of the pods. You can use the -o wide option to show additional information, including the node each pod is running on.

kubectl get nodes -o wide
kubectl get pods -o wide
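
For reference, the trimmed node output looks something like this (the ages and versions here are illustrative):

NAME                                STATUS   ROLES   AGE   VERSION
aks-agentpool-39678053-vmss00000a   Ready    agent   40d   v1.24.9
aks-agentpool-39678053-vmss00000b   Ready    agent   40d   v1.24.9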

In the above examples, I have two nodes running in my AKS cluster with 11 pods, all running on one node. If I have a problem with that node, I may wish to delete all the pods from the node and have them run on the other node.

Once the pods have been confirmed as safe to remove, you can go ahead and remove them from the node using the kubectl drain command. The drain command evicts the pods from the node so that their controllers can re-create them on another available node.

kubectl drain aks-agentpool-39678053-vmss00000a

Here I see a couple of errors:

  • cannot delete DaemonSet-managed Pods (use --ignore-daemonsets to ignore)
  • cannot delete Pods with local storage (use --delete-emptydir-data to override)

Adding the suggested flags resolves both errors:

kubectl drain aks-agentpool-39678053-vmss00000a --ignore-daemonsets --delete-emptydir-data
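
The eviction logs are streamed to the console as the pods are moved across, looking something like this (the evicted pod names here are illustrative):

node/aks-agentpool-39678053-vmss00000a cordoned
evicting pod default/aks-helloworld-one-56c7b8d79d-m4bd9
evicting pod default/aks-helloworld-two-69ffd498d9-wl6dh
pod/aks-helloworld-two-69ffd498d9-wl6dh evicted
pod/aks-helloworld-one-56c7b8d79d-m4bd9 evicted
node/aks-agentpool-39678053-vmss00000a drained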

The drain command will first cordon the node. This ensures that no new pods will get scheduled to the node while you are preparing it for removal or maintenance.

You can also cordon the node manually using the kubectl cordon command if you wish:

kubectl cordon <node name>

Once complete, I can verify that my pods are now running on my other node called aks-agentpool-39678053-vmss00000b.

kubectl get pods -o wide
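
The trimmed output shows every pod with the second node in the NODE column (the pod name suffixes here are illustrative):

NAME                                  READY   STATUS    NODE
aks-helloworld-one-56c7b8d79d-xv2bn   1/1     Running   aks-agentpool-39678053-vmss00000b
aks-helloworld-two-69ffd498d9-k8wq4   1/1     Running   aks-agentpool-39678053-vmss00000b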

When you are finished with the troubleshooting or maintenance work on the node, you can use kubectl uncordon to make it available for scheduling pods again. Without doing this, you will see SchedulingDisabled under the status of the node, meaning no pods can be placed on it.

kubectl get nodes -o wide
kubectl uncordon aks-agentpool-39678053-vmss00000a

The status of the node now shows as Ready.

I can now drain the pods from node aks-agentpool-39678053-vmss00000b if required, and they will safely be scheduled back on aks-agentpool-39678053-vmss00000a.
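
For example:

kubectl drain aks-agentpool-39678053-vmss00000b --ignore-daemonsets --delete-emptydir-data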

See also How to Restart Kubernetes Pods With Kubectl.

Error ‘Cannot evict pod as it would violate the pod’s disruption budget’

When using the kubectl drain command, you may notice an error:

Cannot evict pod as it would violate the pod’s disruption budget.


The pod disruption budget is a way to ensure the availability of pods and prevent accidental removal. It is described as follows in the kubernetes.io documentation:

As an application owner, you can create a PodDisruptionBudget (PDB) for each application. A PDB limits the number of Pods of a replicated application that are down simultaneously from voluntary disruptions. For example, a quorum-based application would like to ensure that the number of replicas running is never brought below the number needed for a quorum. A web front end might want to ensure that the number of replicas serving load never falls below a certain percentage of the total.
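
As a sketch, a PDB that keeps at least one replica available could be created with the following command (the helloworld-pdb name and app=aks-helloworld-one selector are assumptions for illustration):

kubectl create poddisruptionbudget helloworld-pdb --selector=app=aks-helloworld-one --min-available=1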

The Pod disruption budget can be viewed using the following command:

kubectl get poddisruptionbudget -A

They can also be deleted:

kubectl delete poddisruptionbudget <pdb name>

You will notice that even though the error is shown, the eviction is retried automatically until a configurable timeout is reached. In my case, the pods were moved successfully after a short period: once the replacement pod came up on the other node, the number of available pods again satisfied the configured PDB (a minimum of 1 available), and the original pod could be evicted from the node I was draining.
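
The timeout can be set on the drain command itself, for example:

kubectl drain aks-agentpool-39678053-vmss00000a --ignore-daemonsets --delete-emptydir-data --timeout=5m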

Also, anything that can be run via kubectl can be run within a Spacelift stack. Spacelift helps you manage the complexities and compliance challenges of using Kubernetes. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change. It also has an extensive selection of policies, which lets you automate compliance checks and build complex multi-stack workflows. If you want to learn more about Spacelift, create a free account today or book a demo with one of our engineers.

Delete a Single Pod

You can simply delete pods using the kubectl delete pod command:

kubectl delete pod aks-helloworld-two-69ffd498d9-sfg7t

This will cause the pod's controller (a ReplicaSet or Deployment, for example) to re-create the pod with a different name, and the scheduler will place it on an available node.

kubectl get pods

If you want to guarantee that the pod doesn’t end up on the same node, cordon the node first before deleting the pod.
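
For example, assuming the pod is currently on the first node:

kubectl cordon aks-agentpool-39678053-vmss00000a
kubectl delete pod aks-helloworld-two-69ffd498d9-sfg7t
kubectl uncordon aks-agentpool-39678053-vmss00000a

Remember to uncordon the node afterward so that it can accept new pods again.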

Scale the Number of Pods

Scaling up the number of running pods before deleting one might be necessary if you need a minimum number running at all times for application availability purposes.

For example, three pods may be required at all times, so you could scale up to four running pods before deleting one. If your pods are controlled by a StatefulSet, this approach is awkward, because StatefulSet pods have sticky identities and scaling down always removes the highest-ordinal pod. However, if they are controlled by a ReplicaSet or Deployment, you can use the kubectl scale command to achieve this.

To view my current deployments:

kubectl get deployments

Here I have a deployment with three pods. I can scale it up to four replicas:

kubectl scale deployment aks-helloworld-one --replicas=4

Check the number of running pods:

kubectl get pods

I can now delete a pod and then scale back down to three replicas to maintain my requirement of three running pods.

kubectl delete pod aks-helloworld-one-56c7b8d79d-9ffq6
kubectl scale deployment aks-helloworld-one --replicas=3

Force Pod Deletion

Force deletions do not wait for confirmation from the kubelet that the pod has been terminated. This is particularly risky where the pods are part of a StatefulSet, which gives its pods sticky identities: a replacement pod with the same name may be created and run in parallel with the original, causing problems for the application. For this reason, this option is not recommended unless the graceful deletion using kubectl delete pods fails.

kubectl delete pods <pod name> --grace-period=0 --force

If the pod is stuck in the unknown state, run this command to remove it from the cluster:

kubectl patch pod <pod name> -p '{"metadata":{"finalizers":null}}'
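
If you are not sure which pods are stuck, you can list pods in the Unknown phase first:

kubectl get pods --field-selector=status.phase=Unknown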

Delete Completed Pods

Completed pods are pods that have a status of Succeeded or Failed. To delete these pods, you first need to identify them. Note that chained field selectors are combined with a logical AND, so a single selector cannot match both phases at once; instead, run one command per phase:

kubectl get pods --namespace <namespace_name> --field-selector=status.phase=Succeeded
kubectl get pods --namespace <namespace_name> --field-selector=status.phase=Failed

These commands show all the pods that have a status of Succeeded or Failed in a specific namespace. If you want to view the pods in all the namespaces, you can modify them to:

kubectl get pods --all-namespaces --field-selector=status.phase=Succeeded
kubectl get pods --all-namespaces --field-selector=status.phase=Failed
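
Alternatively, you can exclude the non-completed phases in a single ANDed selector (this sketch assumes the standard five pod phases of Pending, Running, Succeeded, Failed, and Unknown):

kubectl get pods --all-namespaces --field-selector=status.phase!=Running,status.phase!=Pending,status.phase!=Unknown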

To delete the completed pods in a particular namespace, you can run:

kubectl delete pods --namespace <namespace_name> --field-selector=status.phase=Succeeded
kubectl delete pods --namespace <namespace_name> --field-selector=status.phase=Failed

Similarly, to delete the completed pods in all of the namespaces you can use:

kubectl delete pods --all-namespaces --field-selector=status.phase=Succeeded
kubectl delete pods --all-namespaces --field-selector=status.phase=Failed

When you are deleting in bulk, you should proceed with caution, so before running the actual command, one suggestion would be to add the --dry-run=client option, optionally with -o name for shorter output, to preview which pods would be removed.
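
For example:

kubectl delete pods --all-namespaces --field-selector=status.phase=Succeeded --dry-run=client -o name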

Key Points

Pods can be deleted simply using the kubectl delete pod command. However, the challenge is usually to maintain application uptime and avoid service disruption. To do this, you can use the kubectl drain command to gracefully evict pods from a node so they are brought up on another node before the original node is taken out of service. You should also review the pod disruption budget configuration to avoid eviction errors, and consider scaling the number of pods appropriately before deleting them.

Cheers!
