Kubernetes is a powerful platform for operating containers at scale. It keeps your containerized workloads running reliably by scaling Pod replicas and restarting them when they fail.
Not all apps need to run forever, however. Sometimes you’ll want to stop or pause a Kubernetes Pod to take an app offline or suspend a redundant environment temporarily.
Stopping a Pod might seem simple, but it can be surprisingly confusing. Kubectl doesn’t have a built-in “stop” command, so you’ll need to adapt other techniques. This article explains your options so you can easily stop Kubernetes Pods with Kubectl.
Here’s what we’ll cover:
- The problem with stopping Kubernetes Pods
- How to stop Kubernetes Pods (Jump here if you need to quickly stop a Pod)
- Stop Kubernetes Pods by scaling down deployments
- Stop Kubernetes Pods by scaling down StatefulSets
- Stop a Kubernetes Pod by deleting it
- Stop Kubernetes Pods created by app operators
Let’s get started.
Kubernetes Pods represent one or more containers running in your cluster. Kubernetes manages the containers so the Pod stays healthy.
Unfortunately, there’s no way to stop a Pod’s containers. The containers will keep running while the Pod exists in your cluster. Kubernetes doesn’t allow you to simply stop the containers and leave the Pod’s configuration intact.
This can seem odd if you’re familiar with other container platforms. For instance, Docker makes it easy to stop and start containers on demand:
# Start a new NGINX container
$ docker run -d --name nginx nginx:latest
0f26a68a4708903c704314aa78581d5b7f381df0a5fd20bbd3042777ac1f8447
# Confirm the container is running
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS          PORTS     NAMES
0f26a68a4708   nginx:latest   "/docker-entrypoint.…"   19 seconds ago   Up 18 seconds   80/tcp    nginx
# Stop the container
$ docker stop nginx
nginx
# Confirm the container has stopped
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS                      PORTS     NAMES
0f26a68a4708   nginx:latest   "/docker-entrypoint.…"   46 seconds ago   Exited (0) 13 seconds ago             nginx
# Restart the same container
$ docker start nginx
nginx
# Confirm the container is now running again
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED              STATUS         PORTS     NAMES
0f26a68a4708   nginx:latest   "/docker-entrypoint.…"   About a minute ago   Up 8 seconds   80/tcp    nginx
Kubernetes Pods are managed differently from Docker containers. (See: Docker vs. Kubernetes) Kubernetes expects Pods to run until their processes complete. As a result, Kubectl doesn’t provide a specific mechanism for stopping Pods in your cluster.
If Kubernetes Pods did have a “stopped” state, then Kubernetes features such as Services would have to track which Pods are actually running, adding more complexity. It would also affect the replication consistency of objects such as Deployments and ReplicaSets.
Fortunately, other Kubernetes mechanisms allow you to effectively stop Pods, even if that’s not quite what happens.
Kubectl doesn’t let you stop running Pods directly, but it’s still easy to pause and suspend workloads. Because Pods are usually stateless, they can be deleted when you need to stop them and recreated as new instances when you’re ready to restart.
The following main strategies can be used to stop a Pod:
- Scaling managed objects like Deployments and StatefulSets down to zero deletes their Pods but allows a quick scale-up later, using the same Pod configuration.
- Manually deleting a target Pod provides a solution for unmanaged Pods that aren’t part of a Deployment or StatefulSet, but this comes with some caveats.
- Many Operators that automate installations of popular apps include their own methods for temporarily stopping deployments.
Let’s explore each of these options in more detail.
Kubernetes Pods are easy to stop if they’re being managed by a Deployment object. Deployments declaratively configure a set of identical Pods with a specified number of replicas.
Scaling a Deployment down to zero will stop all of its Pods. However, the Deployment itself will remain in your cluster, allowing you to restart your Pods in the future by simply scaling the Deployment back up. Let’s see this in action.
First, create a simple Deployment object:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-deployment
  template:
    metadata:
      labels:
        app: demo-deployment
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
The Deployment will start three identical Pods that run the nginx:alpine image:
$ kubectl apply -f demo-deployment.yml
deployment.apps/demo-deployment created
$ kubectl get deployments
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
demo-deployment   3/3     3            3           33s
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
demo-deployment-57d9d48457-7cxxn   1/1     Running   0          8s
demo-deployment-57d9d48457-7qnsb   1/1     Running   0          8s
demo-deployment-57d9d48457-rhb8w   1/1     Running   0          8s
To stop these Pods, scale the Deployment down to zero. Either change the replicas: 3 line in your manifest to replicas: 0, then repeat the kubectl apply command, or use kubectl scale:
$ kubectl scale deployment/demo-deployment --replicas 0
deployment.apps/demo-deployment scaled
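If you prefer the declarative route, the only change to demo-deployment.yml is the replicas field; re-applying the edited manifest has the same effect as kubectl scale:
# In demo-deployment.yml, change the replica count:
#   replicas: 0
$ kubectl apply -f demo-deployment.yml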
The Pods will be removed, effectively stopping the service they were providing:
$ kubectl get deployments
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
demo-deployment   0/0     0            0           115s
$ kubectl get pods
No resources found in default namespace.
To restart the Pods, use the same process to scale the Deployment back up again. New Pods will be created, but the Deployment object ensures they’ll be configured identically to the originals. Any existing Persistent Volumes will also be reattached, preventing the loss of valuable saved data.
$ kubectl scale deployment/demo-deployment --replicas 3
deployment.apps/demo-deployment scaled
$ kubectl get deployments
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
demo-deployment   3/3     3            3           23m
$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
demo-deployment-57d9d48457-24z85   1/1     Running   0          27s
demo-deployment-57d9d48457-d9zt2   1/1     Running   0          27s
demo-deployment-57d9d48457-kncg2   1/1     Running   0          27s
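If you’re scripting this process, kubectl rollout status is a convenient way to block until the scaled-up Deployment reports all of its replicas ready:
# Wait for the Deployment to become fully available again
$ kubectl rollout status deployment/demo-deployment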
You’ve successfully “stopped” and “restarted” your Pods using the built-in capabilities of the Kubernetes Deployment object.
StatefulSets are an alternative to Deployments for stateful apps. They assign your Pods stable identifiers and individual Persistent Volume Claims, allowing you to reliably run replicated stateful services such as databases and file servers in Kubernetes.
Pods in a StatefulSet can be “stopped” by scaling the StatefulSet down to zero, similar to Deployments. Kubernetes will remove the Pods but retain the StatefulSet object, allowing you to scale back up again in the future. Persistent Volume Claims will also be kept, ensuring saved data can be reattached to new replicas.
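The commands below assume a StatefulSet named demo-statefulset already exists in your cluster; a minimal sketch of what such a manifest might look like (the headless Service it references is assumed to exist separately):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-statefulset
spec:
  serviceName: demo-statefulset # headless Service, assumed to exist
  replicas: 3
  selector:
    matchLabels:
      app: demo-statefulset
  template:
    metadata:
      labels:
        app: demo-statefulset
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi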
$ kubectl scale sts demo-statefulset --replicas 0
statefulset.apps/demo-statefulset scaled
When scaling down a StatefulSet, the Pods will be removed sequentially in reverse ordinal order. If you’ve got three replicas running, then replica 2 will be deleted first, then 1, and finally 0. This ensures secondary replicas can safely terminate before primary replicas.
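When you’re ready to restart the replicas, scale the StatefulSet back up; Pods are recreated in ascending ordinal order (0, then 1, then 2) and reattach their existing Persistent Volume Claims:
$ kubectl scale sts demo-statefulset --replicas 3
statefulset.apps/demo-statefulset scaled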
Deleting a Pod with the kubectl delete command is another way to stop it from running in your cluster:
$ kubectl delete pod demo-pod
pod "demo-pod" deleted
Kubernetes will remove all traces of the Pod from your cluster.
Pods are allowed to terminate gracefully when they’re deleted. This means your app might not stop immediately. You can instantly delete a Pod by using Kubectl’s --force and --grace-period flags:
$ kubectl delete pod demo-pod --force --grace-period=0
pod "demo-pod" deleted
This will immediately remove the Pod without offering it a termination grace period.
Once you’ve deleted your Pod, you can restart it in the future by recreating it from your manifest file:
$ kubectl apply -f pod.yaml
pod/demo-pod created
Stopping Pods by deleting them is only advisable when the Pod’s not part of a Deployment or StatefulSet. If the Pod is managed by one of those objects, its controller will simply create a replacement, so deleting it won’t actually stop anything. Deleting an unmanaged Pod removes it from your cluster entirely, which may be confusing to other users and could trigger automated actions in tools connected to your cluster.
Moreover, manually deleting a Pod when you only intend to stop it significantly raises the risk of unintentional data loss. If you accidentally delete your manifest file, or if the Pod was created imperatively using kubectl run, you’ll be unable to recreate the Pod’s configuration in the future.
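One way to reduce that risk is to export the Pod’s configuration before you delete it. The exported YAML will include server-generated fields (such as status, metadata.uid, metadata.resourceVersion, and spec.nodeName) that you should strip out before re-applying it:
# Save the Pod's current configuration to a file
$ kubectl get pod demo-pod -o yaml > demo-pod.yaml
# Delete the Pod once the configuration is saved
$ kubectl delete pod demo-pod
# Later, after cleaning up the server-generated fields, recreate it
$ kubectl apply -f demo-pod.yaml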
Stop all the Pods
To stop all the Pods in a namespace, delete them with:
$ kubectl delete pods --all --namespace=<your-namespace>
If you want to delete Pods across all namespaces, use --all-namespaces instead.
However, simply deleting Pods won’t prevent them from restarting, because Deployments or ReplicaSets will recreate them. To fully stop Pods and ensure they don’t restart, scale the related Deployments down to zero replicas:
$ kubectl scale deployment --all --replicas=0 -n <your-namespace>
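If the namespace also contains StatefulSets, scale those down to zero in the same way so their Pods aren’t recreated either:
$ kubectl scale statefulset --all --replicas=0 -n <your-namespace>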
Finally, Kubernetes Operators often provide their own way to stop the Pods they create. You can usually change a paused, suspended, or replicas line in an operator’s custom resource manifest files to temporarily stop deployments that it manages.
For example, many teams use Percona Operator to run MySQL-based databases in Kubernetes. By creating a PerconaXtraDBCluster object, you can easily deploy a new replicated database instance:
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBCluster
metadata:
  name: pxc
spec:
  pause: false
  pxc:
    image: percona/percona-xtradb-cluster:8.0.35
    size: 3
    volumeSpec:
      persistentVolumeClaim:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
$ kubectl apply -f pxc.yaml
perconaxtradbcluster.pxc.percona.com/pxc created
$ kubectl get pxc
NAME ENDPOINT STATUS PXC PROXYSQL HAPROXY AGE
pxc pxc-pxc.default ready 1 8m45s
$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
pxc-pxc-0                       1/1     Running   0          7m2s
pxc-pxc-1                       1/1     Running   0          5m30s
pxc-pxc-2                       1/1     Running   0          2m21s
pxc-operator-844b4d5cdd-fngn4   1/1     Running   0          11m
Setting the pause field in the manifest to true will automatically stop the database cluster and remove its Pods:
$ kubectl get pxc
NAME ENDPOINT STATUS PXC PROXYSQL HAPROXY AGE
pxc pxc-pxc.default paused 9m40s
$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
pxc-operator-844b4d5cdd-fngn4   1/1     Running   0          12m
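You can make that change by editing pxc.yaml and re-running kubectl apply, or by patching the custom resource directly; a quick sketch using the pxc resource name from above:
# Pause the cluster by setting spec.pause to true on the custom resource
$ kubectl patch pxc pxc --type merge -p '{"spec":{"pause":true}}'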
When an operator includes a mechanism like this, it’s best to use it instead of manually deleting Pods or scaling down Deployments and StatefulSets. These operations could lead to consistency errors and unexpected bugs in the operator’s behavior. Check the documentation for the operators you’re using to learn how to stop their Pods properly.
What is graceful shutdown?
Graceful shutdown in Kubernetes is the process of safely terminating a Pod while allowing running processes to complete and clean up resources before the Pod is removed. When a Pod is scheduled for termination, Kubernetes first sends a SIGTERM signal, allowing the application time to finish tasks and shut down properly.
By default, Kubernetes gives the Pod 30 seconds (configurable via the terminationGracePeriodSeconds setting) before forcefully killing it with a SIGKILL signal. This mechanism ensures minimal disruption, avoids data loss, and helps maintain application stability during scaling or updates.
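The grace period can be tuned per Pod; a minimal sketch showing where the setting lives in a Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  terminationGracePeriodSeconds: 60 # wait up to 60s after SIGTERM before sending SIGKILL
  containers:
  - name: nginx
    image: nginx:alpine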
If you need help managing your Kubernetes projects, consider Spacelift. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’ll change.
With Spacelift, you get:
- Policies to control what kind of resources engineers can create, what parameters they can have, how many approvals you need for a run, what kind of task you execute, what happens when a pull request is open, and where to send your notifications
- Stack dependencies to build multi-infrastructure automation workflows, combining Terraform with Kubernetes, Ansible, and other infrastructure-as-code (IaC) tools such as OpenTofu, Pulumi, and CloudFormation
- Self-service infrastructure via Blueprints, or Spacelift’s Kubernetes operator, enabling your developers to focus on what matters (developing application code) without sacrificing control
- Creature comforts such as contexts (reusable containers for your environment variables, files, and hooks), and the ability to run arbitrary code
- Drift detection and optional remediation
If you want to learn more about Spacelift, create a free account today or book a demo with one of our engineers.
We’ve looked at ways to stop Kubernetes Pods and ensure they’re ready for a quick restart in the future. Kubectl doesn’t include a built-in “stop” command equivalent to docker stop, a common frustration for new Kubernetes users. But as we’ve seen, you can achieve the same effect by scaling a Deployment down to zero or using the features provided by app-specific Kubernetes Operators.
Want to continue learning Kubernetes and Kubectl? Check out our guides to debugging common Pod errors, or save our Kubectl cheat sheet as a handy command reference. You can also give Spacelift a try to orchestrate your Kubernetes and cloud IaC management.
Manage Kubernetes easier and faster
Spacelift allows you to automate, audit, secure, and continuously deliver your infrastructure. It helps overcome common state management issues and adds several must-have features for infrastructure management.