In this post, we will delve into Kubernetes (K8s) deployment concepts and some common strategies, looking at the advantages and disadvantages of each. A suitable deployment strategy enables you to minimize downtime, enhance your customer experience, and increase reliability when releasing your application.
A Kubernetes Deployment is a declarative statement usually configured in a YAML file that defines the application lifecycle and how updates to that application should be applied.
When deploying your applications to a K8s cluster, your chosen deployment strategy will determine how those applications are updated from an older version to a newer one. Some strategies will involve downtime. Some will introduce testing concepts and enable user analysis. There are two basic, commonly used K8s deployment strategies we will look at in this post:
- Recreating
- Rolling
The following strategies are considered “Advanced deployment strategies” because the flow of traffic can be controlled in various ways:
- Blue/Green
- Canary
- A/B
- Ramped Slow Rollout
- Best-Effort Controlled Rollout
- Shadow Deployment
K8s uses a Rolling deployment strategy as the default, but there are certain use cases when this may not be appropriate. Let’s discuss each in more detail!
See how Spacelift can help you manage the complexities and compliance challenges of using Kubernetes with the newest Kubernetes integration. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change. It also has an extensive selection of policies, which lets you automate compliance checks and build complex multi-stack workflows.
A Recreate deployment terminates all the pods and replaces them with the new version. This can be useful in situations where an old and new version of the application cannot run at the same time. The amount of downtime incurred using this strategy will depend on how long the application takes to shut down and start back up. The application state is entirely renewed, since the pods are completely replaced.
An example `spec:` section in the manifest file could look like this:
```yaml
spec:
  replicas: 10
  strategy:
    type: Recreate
```
Rolling deployments are the K8s default and are designed to avoid cluster downtime. A rolling deployment replaces pods running the old version of the application with the new version, one batch at a time, without downtime.
To achieve this, Readiness probes are used:
- Readiness probes monitor when the application becomes available. If the probes fail, no traffic will be sent to the pod. These are used when an app needs to perform certain initialization steps before it becomes ready. An application may also become overloaded with traffic and cause the probe to fail, preventing more traffic from being sent to it and allowing it to recover.
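As a sketch, a readiness probe is declared on the container spec. The endpoint path, port, and timings below are illustrative values, not prescribed ones:

```yaml
spec:
  containers:
    - name: web-app
      image: web-app:v2.0.0     # illustrative image tag
      readinessProbe:
        httpGet:
          path: /healthz        # hypothetical health endpoint
          port: 8080
        initialDelaySeconds: 5  # wait before the first check
        periodSeconds: 10       # then check every 10 seconds
```

Until the probe succeeds, the pod is excluded from the Service's endpoints, so no traffic reaches it.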
Once the readiness probe detects the new version of the application is available, the old version of the application is removed. If there is a problem, the rollout can be stopped and rolled back to the previous version, avoiding downtime across the entire cluster. Because each pod is replaced one by one, deployments can take time for larger clusters. If a new deployment is triggered before another has finished, the version is updated to the version specified in the new deployment, and the previous deployment version is disregarded where it has not yet been applied.
A rolling deployment is triggered when something in the pod spec is changed, such as when the image, environment, or label of a pod is updated. A pod image can be updated using the `kubectl set image` command, for example `kubectl set image deployment/web-app web-app=web-app:v2.0.0`.
The `spec:` -> `strategy:` section of the manifest file can be used to refine the deployment by making use of two optional parameters: `maxSurge` and `maxUnavailable`. Both can be specified using a percentage or an absolute number. A percentage figure should be used when Horizontal Pod Autoscaling is used.
- `maxSurge` specifies the maximum number of pods the Deployment is allowed to create at one time.
- `maxUnavailable` specifies the maximum number of pods that are allowed to be unavailable during the rollout.
For example, the configuration below would specify a requirement for 10 replicas, with a maximum of 3 being created at any one time, allowing for 1 to be unavailable during the rollout:
```yaml
spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 1
```
Ramped Slow Rollout means the new version is rolled out gradually, creating new replicas while removing the old ones. It is another term for the Canary deployment discussed later in this article.
A blue-green deployment involves deploying the new application version (green) alongside the old one (blue). A load balancer in the form of the service selector object is used to direct the traffic to the new application (green) instead of the old one when it has been tested and verified. Blue/Green deployments can prove costly as twice the amount of application resources need to be stood up during the deployment period.
To enable this, we set up a service sitting in front of the deployments. For example, the service manifest for the blue deployment of an app called web-app at v1.0.0 could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-01
  labels:
    app: web-app
spec:
  selector:
    app: web-app
    version: v1.0.0
```
And the deployment for the blue web app:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-01
spec:
  selector:
    matchLabels:
      app: web-app
      version: "v1.0.0"
  template:
    metadata:
      labels:
        app: web-app
        version: "v1.0.0"
```
When we want to direct traffic to the new (green) version of the app, we update the manifest to point to the new version, v2.0.0.
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-app-01
  labels:
    app: web-app
spec:
  selector:
    app: web-app
    version: v2.0.0
```
The deployment for the green app:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-02
spec:
  selector:
    matchLabels:
      app: web-app
      version: "v2.0.0"
  template:
    metadata:
      labels:
        app: web-app
        version: "v2.0.0"
```
Blue/green deployments may also be referred to as ‘best-effort controlled rollouts’, since the aim is to update your application or microservices while minimizing downtime and keeping the application available as much as possible during the deployment process.
Canary or ‘Ramped slow rollout’ strategies are sometimes confused with the term ‘Shadow Deployment’, but the two differ in how user traffic is handled.
A Shadow Deployment is a strategy where a new version of an application is deployed alongside the existing production version, primarily for monitoring and testing purposes. User traffic is not actively routed to the new version in a shadow deployment.
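Kubernetes has no built-in shadow strategy, but a service mesh can mirror live traffic to the new version. As a sketch using Istio traffic mirroring (assuming a DestinationRule already defines `v1` and `v2` subsets for web-app):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-app
spec:
  hosts:
    - web-app
  http:
    - route:
        - destination:
            host: web-app
            subset: v1      # all user traffic still goes to v1
      mirror:
        host: web-app
        subset: v2          # v2 receives a copy of each request
      mirrorPercentage:
        value: 100.0        # mirror every request
```

Responses from the mirrored version are discarded, so users are never affected by the shadow deployment.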
A Canary deployment can be used to let a subset of the users test a new version of the application or when you are not fully confident about the new version’s functionality. This involves deploying a new version of the application alongside the old one, with the old version of the application serving most users and the newer version serving a small pool of test users. The new deployment is rolled out to more users if it is successful.
For example, in a K8s cluster with 100 running pods, 95 could be running v1.0.0 of the application, with 5 running the new v2.0.0 of the application. 95% of the users will be routed to the old version, and 5% will be routed to the new version. For this, we use 2 deployments side-by-side that can be scaled separately.
The spec section of the old application manifest would look like the following:
```yaml
spec:
  replicas: 95
```
And the new application manifest:
```yaml
spec:
  replicas: 5
```
In the example above, it might be impractical and costly to run 100 pods. A better way to achieve this is to use a load balancer such as NGINX, HAProxy, or Traefik, or a service mesh like Istio, HashiCorp Consul, or Linkerd.
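As a sketch, a 95/5 canary split with an Istio VirtualService could look like the below (again assuming a DestinationRule defines `v1` and `v2` subsets for web-app):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-app
spec:
  hosts:
    - web-app
  http:
    - route:
        - destination:
            host: web-app
            subset: v1
          weight: 95        # 95% of traffic to the old version
        - destination:
            host: web-app
            subset: v2
          weight: 5         # 5% of traffic to the canary
```

This decouples the traffic split from replica counts, so each deployment only needs to run as many pods as its share of traffic actually requires.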
For painless and efficient maintenance of your K8s clusters, follow these 15 Kubernetes Best Practices.
Similar to a Canary deployment, an A/B deployment lets you target a given subsection of users based on some target parameters (usually HTTP headers or a cookie), as well as distribute traffic amongst versions based on weight. This technique is widely used to test the conversion of a given feature, and then the version that converts the most is rolled out.
This approach is usually taken based on data collected on user behavior and can be used to make better business decisions. Users are usually uninformed of the new features during the A/B testing period, so true testing can be done, and experiences between the users using the old version and those using the new version can be compared. Rollouts can be slower using A/B deployments due to the additional testing period and analysis of the user experience.
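As a sketch, an Istio VirtualService can route users to a version based on an HTTP header. The header name and subsets below are hypothetical:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-app
spec:
  hosts:
    - web-app
  http:
    - match:
        - headers:
            x-test-group:   # hypothetical header set by the frontend
              exact: "b"
      route:
        - destination:
            host: web-app
            subset: v2      # users in group B see the new version
    - route:                # everyone else falls through to v1
        - destination:
            host: web-app
            subset: v1
```

Because the match is deterministic per user, the same user always sees the same version, which keeps the comparison between groups clean.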
A/B deployments can be automated using Istio and Flagger; check out the tutorial here for more information on how to set it up.
In this article, we discussed eight common K8s deployment strategies. Understanding how these can be used, the Kubernetes tools that enable each of them, and the advantages and disadvantages of each is key when deciding how to deploy or upgrade your application to a newer version. Choosing the right strategy for your business needs can help reduce downtime, enable testing, and improve the customer feedback loop, enabling your team to develop a better product over time.
Cheers!
The Most Flexible CI/CD Automation Tool
Spacelift is an alternative to using homegrown solutions on top of a generic CI. It helps overcome common state management issues and adds several must-have capabilities for infrastructure management.