
How to Restart Kubernetes Pods With Kubectl



A pod is the smallest deployable unit in Kubernetes (K8S). It is intended to run until it is replaced by a new deployment. This means there is no way to restart a pod in place; it must be replaced instead.

There is no kubectl restart [podname] command in K8S (with Docker you can use docker restart [container_id]), but there are several ways to achieve a pod ‘restart’ with kubectl.

What we will cover in this article:

  1. Why might you want to restart a Kubernetes pod?
  2. Available pod statuses
  3. How to restart a Kubernetes pod

Why might you want to restart a Kubernetes pod?

Here are some situations in which you may need to restart a pod:

  • Applying configuration changes → Updates to the pod’s configuration (ConfigMaps, Secrets, environment variables) may require the pod to be restarted manually for the changes to take effect.
  • Debugging applications → Sometimes, if your application is not running correctly or you are experiencing issues with it, restarting the underlying pods to reset their state and make troubleshooting easier is a good practice.
  • Pod stuck in a terminating state → In this case, a delete and a recreation would usually do the trick. However, there are some cases where a node is taken out of service and the pods cannot be evicted from it, so a restart will help address the issue.
  • Addressing Out Of Memory (OOM) errors → If a pod is terminated with an OOM error, you will need to restart it after adjusting its resource specifications (see the check after this list). This may be handled automatically if the pod’s restart policy allows it.
  • Forcing a new image pull → If you are using the latest tag (which is not a best practice), you need to restart the pod manually to force a new image pull and ensure it is running the latest version of the image. Likewise, if you have released a new image and updated the image field in the configuration, a restart is still required to pick it up.
  • Resource contention → If a pod is consuming excessive resources, causing performance issues, or affecting other workloads, restarting it may release those resources and mitigate the problem. This usually occurs when memory and CPU limits are not set.
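
To confirm that a pod was actually OOM-killed before restarting it, you can inspect the last termination state of its containers. This is a minimal check, with the pod name and namespace as placeholders; if it prints OOMKilled, raise the memory limits before restarting:

kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'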

Note: Manually deleting and restarting pods in Kubernetes can introduce risks. If not managed carefully, this action may disrupt running applications, especially if the pod is handling live traffic or has not been configured with appropriate replication or readiness probes.

Without proper checks, manual restarts can lead to temporary downtime, data loss, or state inconsistencies. They may also bypass automated orchestration logic, preventing Kubernetes from managing the pod lifecycle as intended.

For safer operations, it’s recommended to use rolling updates or Kubernetes-native tools that respect deployment strategies and maintain service availability.

Available pod statuses

A Kubernetes pod has five possible statuses: pending, running, succeeded, failed, and unknown.

  1. Pending: The pod has been accepted by the cluster, but at least one of its containers has not yet been created or started (for example, it is still being scheduled or its image is still downloading).
  2. Running: All containers have been created, and the pod has been bound to a node. At this point, the containers are running, or are being started or restarted.
  3. Succeeded: All containers in the pod have terminated successfully and will not be restarted.
  4. Failed: All containers have terminated, and at least one container has failed, i.e., it exited with a non-zero status or was terminated by the system.
  5. Unknown: The status of the pod cannot be obtained.
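
If you need to check a pod’s status from a script, you can read the phase field directly with a JSONPath query (the pod name and namespace are placeholders):

kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.status.phase}'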

If you notice a pod in an undesirable state, with the status showing an error, you might try a ‘restart’ as part of your troubleshooting to get things back to normal operations. You may also see the status CrashLoopBackOff, which means a container keeps crashing and K8S is backing off before automatically restarting it again.
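
Before forcing a restart, it is worth finding out why the pod is failing. Two standard commands show recent events and the logs of the previous, crashed container instance:

kubectl describe pod <pod_name> -n <namespace>
kubectl logs <pod_name> -n <namespace> --previous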

How to restart a Kubernetes pod

Kubernetes does not provide a direct kubectl restart pod command. However, you can achieve similar functionality using several methods with kubectl. Below are some common ways to restart a Kubernetes pod:

  1. Rolling restart the deployment (kubectl rollout restart)
  2. Scale deployment replicas (kubectl scale)
  3. Delete an individual pod (kubectl delete)
  4. Force replace a pod (kubectl replace)
  5. Update environment variables (kubectl set env)

Once new pods are recreated, they will have different names from the old ones. Use the kubectl get pods command to obtain a list of all your pods.
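
For example, after a restart you might see something like the following (the pod name and hash suffixes here are illustrative placeholders, not real output):

kubectl get pods -n <namespace>
NAME                    READY   STATUS    RESTARTS   AGE
myapp-7d9f8c6b5-x2k4q   1/1     Running   0          30s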

Method 1: Rolling restart the deployment

Quick command reference: kubectl rollout restart

This method is recommended for triggering a restart when you haven’t made changes to the deployment manifest but want the pods to refresh (e.g., to pick up new secrets, reinitialize a process, etc.). A rollout restart replaces pods gradually: new pods are scaled up and old ones are terminated according to the Deployment’s rolling update strategy, so the application stays available.

This method can be used as of Kubernetes v1.15.

kubectl rollout restart deployment <deployment_name> -n <namespace>

This command tells Kubernetes to restart the Deployment, which causes all the associated pods to be replaced one by one.
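
You can watch the progress of the restart and confirm that it has completed with:

kubectl rollout status deployment <deployment_name> -n <namespace>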

What is a rolling restart in Kubernetes?

A rolling restart in Kubernetes is a process where pods in a deployment are gradually terminated and replaced with new ones, ensuring that the application remains available throughout the update. This is done incrementally, typically one pod at a time, so that there is no downtime during the rollout. Rolling restarts are commonly used for applying configuration changes or updating container images without disrupting end users.
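
How quickly pods are replaced is governed by the Deployment’s rolling update strategy. As an illustrative sketch (the maxUnavailable and maxSurge values below are example choices, not required defaults), you could configure the Deployment so that no old pod is removed before its replacement is ready:

kubectl patch deployment <deployment_name> -n <namespace> \
  -p '{"spec":{"strategy":{"type":"RollingUpdate","rollingUpdate":{"maxUnavailable":0,"maxSurge":1}}}}'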

Method 2: Scale deployment replicas

Quick command reference: kubectl scale

Scaling a Deployment down to 0 replicas and then back up forces Kubernetes to terminate all existing pods and create fresh ones. This is essentially a “restart” because the new pods are instantiated from the Deployment’s pod template.

However, this method will introduce an outage and is not recommended. If downtime is not an issue, it can be used as a quicker alternative to the kubectl rollout restart method or to redeploying through a lengthy continuous integration/deployment pipeline.

If there is no YAML file associated with the deployment, you can set the number of replicas to 0.

kubectl scale deployment <deployment_name> -n <namespace> --replicas=0

This terminates the pods. Once scaling is complete, the replicas can be scaled back up as needed (to at least 1):

kubectl scale deployment <deployment_name> -n <namespace> --replicas=3

Pod status can be checked during the scaling using:

kubectl get pods -n <namespace>
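
A small sketch of the full scale-down/scale-up sequence, assuming you want to restore the original replica count rather than hard-coding one (REPLICAS is just a throwaway shell variable):

# Record the current replica count before scaling down
REPLICAS=$(kubectl get deployment <deployment_name> -n <namespace> -o jsonpath='{.spec.replicas}')
# Terminate all pods, then bring back the original count
kubectl scale deployment <deployment_name> -n <namespace> --replicas=0
kubectl scale deployment <deployment_name> -n <namespace> --replicas="$REPLICAS"
# Watch the replacement pods come up (Ctrl-C to stop)
kubectl get pods -n <namespace> -w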

Method 3: Delete an individual pod

Quick command reference: kubectl delete pod and kubectl delete replicaset

If the pod is managed by a Deployment, ReplicaSet, or StatefulSet, you can safely delete the pod with kubectl delete pod since Kubernetes will automatically recreate it.

Each pod can be deleted individually if required:

kubectl delete pod <pod_name> -n <namespace>

Doing this will cause the pod to be recreated because K8S is declarative: the controller notices the missing pod and creates a new one to match the desired state. However, when many pods are running, deleting them one at a time is not really practical.

Where many pods have the same label, you can use that label to select multiple pods at once:

kubectl delete pod -l app=myapp -n <namespace>

The ReplicaSet can be deleted instead if there are many pods. Note that if the ReplicaSet is owned by a Deployment, the Deployment will immediately create a replacement, effectively restarting all of its pods; if it is standalone, deleting it removes the pods permanently:

kubectl delete replicaset <name> -n <namespace>

Method 4: Force replace a Pod

Quick command reference: kubectl get pod | kubectl replace

The pod you want to replace can be retrieved using kubectl get pod to obtain the YAML of the currently running pod, which is then piped to the kubectl replace command with the --force flag specified in order to achieve a restart. This is useful if no YAML file is available and the pod is already running.

kubectl get pod <pod_name> -n <namespace> -o yaml | kubectl replace --force -f -

This method only works for manually created pods or those not controlled by higher-level objects like Deployments, StatefulSets, etc. If you try this on a pod that’s part of a Deployment, Kubernetes might immediately recreate a second pod (since the Deployment notices one went missing), leading to duplicates or conflicts.
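
Before force-replacing a pod, you can check whether it is managed by a controller by inspecting its owner references; an empty result means the pod is standalone and safe to replace this way:

kubectl get pod <pod_name> -n <namespace> -o jsonpath='{.metadata.ownerReferences[*].kind}'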

Method 5: Update environment variables

Quick command reference: kubectl set env

A simple and effective way to restart pods in Kubernetes is by using the kubectl set env command to update an environment variable in a Deployment. Kubernetes triggers a rolling restart whenever the pod template changes, and changing or adding an environment variable is enough to make that happen.

The example below sets the environment variable DEPLOY_DATE to the date specified, causing the pods to restart.

kubectl set env deployment <deployment_name> -n <namespace> DEPLOY_DATE="$(date)"

This method is safe, causes no downtime (thanks to the rolling update), and is perfect for triggering restarts after updating ConfigMaps and Secrets or refreshing the application state without changing the app code or deployment image. It’s also a popular approach in automation scripts and CI/CD pipelines.
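
To confirm that the variable was applied (and therefore that a rolling restart was triggered), kubectl set env supports a --list flag:

kubectl set env deployment <deployment_name> -n <namespace> --list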

Best practices for restarting pods in Kubernetes

Best practices for restarting pods in Kubernetes involve using declarative and automated approaches to ensure reliability and minimal disruption:

  • Use readiness and liveness probes: These health checks help Kubernetes detect when a pod is unhealthy and should be restarted, ensuring that only healthy containers receive traffic (a minimal manifest sketch follows this list).
  • Avoid manual deletion: Instead of manually deleting pods, update deployments or use rolling restarts (kubectl rollout restart deployment/<name>) to let Kubernetes handle restarts gracefully.
  • Implement rolling updates: Use rolling updates for deployments to ensure zero downtime by gradually replacing old pods with new ones.
  • Configure resource requests and limits: Proper resource allocation helps prevent pods from being evicted by the kubelet under node pressure or killed by node-level OOM (Out Of Memory) conditions.
  • Use CrashLoopBackOff as a signal: If a pod enters this state, investigate logs and errors before forcing restarts. Configuration or code may be causing the issue.
  • Avoid frequent or unnecessary restarts: Unplanned or frequent restarts can lead to performance issues, cascading failures, or configuration drift. Always aim to identify and fix root causes rather than relying on restarts as a workaround.
  • Log and monitor restarts: Track pod restarts through logs and metrics to understand restart reasons (e.g., crashes, probes failing, OOM errors). Restarting shouldn’t be the first solution to runtime problems; observability helps prevent recurring issues.
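
To make the probe and resource recommendations concrete, here is a minimal Deployment sketch that combines both. It is an illustrative example only: the myapp name, image tag, port, and /healthz endpoint are hypothetical placeholders, not values from this article.

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp                     # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: myapp:1.0        # hypothetical image tag (avoid 'latest')
          resources:
            requests:             # guaranteed minimum used for scheduling
              cpu: 250m
              memory: 128Mi
            limits:               # hard cap; exceeding memory triggers OOMKilled
              cpu: 500m
              memory: 256Mi
          readinessProbe:         # gate traffic until the app responds
            httpGet:
              path: /healthz      # assumes the app exposes this endpoint
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:          # restart the container if it stops responding
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 15
            periodSeconds: 20
EOF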

Managing Kubernetes with Spacelift

If you need assistance managing your Kubernetes projects, take a look at Spacelift. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes stacks, and pull requests show you a preview of what they’re planning to change.

You can also use Spacelift to mix and match Terraform, OpenTofu, Pulumi, AWS CloudFormation, and Kubernetes stacks and have them talk to one another.

To take this one step further, you could add custom policies to reinforce the security and reliability of your configurations and deployments. Spacelift provides different types of policies and workflows that are easily customizable to fit every use case. For instance, you could add plan policies to restrict or warn about security or compliance violations or approval policies to add an approval step during deployments. 

You can try Spacelift for free by creating a trial account or booking a demo with one of our engineers.


Supply chain management platform Logixboard has found Spacelift easy to install, configure, and maintain. Some of Logixboard’s stacks manage Kubernetes resources, such as CRDs, deployments, and ConfigMaps, via some Terraform providers. The company will be creating a lot more stacks as they increasingly utilize Kubernetes, and Spacelift will help them do that with confidence and ease.


Key points

Having a range of commands to use when you encounter issues with pods in K8S will enable you to restart them appropriately, depending on how you have deployed the pods, the necessity for application uptime, and the urgency of the restart. 

In general, the best approach is to use the kubectl rollout restart method described above, as it will avoid application downtime.

A restart will not resolve the problem that caused the pods to have issues in the first place, so further investigation into the root cause will be required.

