Argo Rollouts – What Is It, How It Works & Tutorial

Argo Rollouts is a solution for performing progressive delivery of deployments to Kubernetes clusters. It helps you improve deployment reliability and performance using blue-green and canary rollouts.

This article will explain more about Argo Rollouts, how it works, and how to get started using it in your own cluster. We’ll finish by sharing a simple example of how to launch a canary rollout for a Kubernetes deployment.

What we will cover:

  1. What is Argo Rollouts?
  2. How does Argo Rollouts work — deployment strategies
  3. Argo Rollouts use cases
  4. Demo: How to use Argo Rollouts

What is Argo Rollouts?

Argo Rollouts is a Kubernetes tool that implements advanced rollout strategies for deployments in your cluster. This means using techniques such as blue-green deployments and canary deployments to gradually move traffic to a new app release instead of having all requests immediately switch over. It enables you to limit the damage caused by broken deployments because they’ll initially serve only a subset of users.

The tool is implemented as a Kubernetes controller and a collection of Custom Resource Definitions (CRDs). The main CRD is Rollout — it acts as a replacement for the Kubernetes Deployment object and allows you to define deployments that use the advanced update strategies that Argo provides. Without creating a Rollout, you can only use the rolling update and complete recreation deployment strategies that are included with Kubernetes.

When you add a Rollout object to your cluster, the Argo controller detects its presence and then creates, replaces, and removes Pods as required. You can then manage the rollout using Argo’s Kubectl plugin, such as by exposing the new deployment to more users or initiating a rollback. These actions can also be automated based on data supplied by external sources — for example, HTTP request metrics collated by an Ingress controller or analysis of network activity exposed by your service mesh.
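
For instance, metric-driven automation is configured through an AnalysisTemplate resource. The following is a minimal sketch, assuming an in-cluster Prometheus instance at the address shown and a hypothetical HTTP success-rate query; your metric names, address, and thresholds would differ:

apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  metrics:
    - name: success-rate
      interval: 1m                        # re-evaluate the query every minute
      successCondition: result[0] >= 0.95
      failureLimit: 3                     # abort the rollout after three failed measurements
      provider:
        prometheus:
          # Hypothetical in-cluster Prometheus address and query
          address: http://prometheus.monitoring.svc.cluster.local:9090
          query: |
            sum(rate(http_requests_total{status!~"5.*"}[5m]))
            /
            sum(rate(http_requests_total[5m]))

A Rollout can then reference this template from an analysis step or a background analysis to gate or abort its progression.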

Although the Kubernetes Deployment object provides useful controls for simple scenarios, it’s not robust enough to support real-world rollouts at scale. Argo Rollouts adds the missing features that let you precisely manage rollout progression and automate more parts of your deployment workflow.

What is the difference between Argo CD and Argo Rollouts?

Argo Rollouts is often used in conjunction with Argo CD, the Argo project’s continuous delivery (CD) tool. Argo CD implements declarative GitOps-driven CD for Kubernetes, while Rollouts offers a controller and CRDs that let you robustly manage blue-green and canary deployments. You can use Argo Rollouts without Argo CD, or vice versa, but combining them produces a fully automated end-to-end workflow for safely deploying changes to your apps.
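
As a rough sketch of how the two combine, an Argo CD Application can point at a Git repository containing your Rollout manifests; Argo CD syncs the manifests into the cluster, and the Rollouts controller then executes the progressive delivery strategy they define. The repository URL and path below are hypothetical:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    # Hypothetical Git repository containing Rollout, Service, and Ingress manifests
    repoURL: https://github.com/example-org/demo-app-manifests
    targetRevision: main
    path: manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true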

How does Argo Rollouts work?

In this section, we will explain the core concepts of Argo Rollouts, the differences between the main supported rollout strategies, and the Argo Rollouts deployment workflow.

Understanding app rollout strategies

There are four main strategies used to roll out application changes. The one you select defines what happens when you launch a new version of an app into your Kubernetes cluster:

  • Blue-Green — Blue-green deployments, available in Argo Rollouts, start the new version’s Pods but don’t direct any traffic to them. The old version (blue) remains live and continues to serve your production users. Developers can manually test against the new release (green) to verify it’s functioning correctly.
  • Canary — Canary deployments start the new version and use it to handle a portion of live traffic. You can gradually increase the amount of traffic that’s served by the new release, allowing any problems to be detected and resolved before too many users experience them.
  • Rolling Update — A rolling update starts the new deployment’s Pods, then gradually scales down the old deployment until only the new one is left running. (Note: This is the default behavior of regular Kubernetes Deployments.)
  • Recreate — This strategy removes the old deployment from your cluster, then launches the new release and immediately exposes it to traffic. This can be advantageous when you’re introducing backward-incompatible changes that require a clean break to function correctly, but the gap between the old deployment stopping and the new one starting means some downtime will occur. (Note: Recreate is supported by regular Kubernetes Deployments.)

Because the Rolling Update and Recreate deployment strategies are already available in Kubernetes, Argo Rollouts is mainly used when you need Blue-Green or Canary deployments.
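
The tutorial later in this article demonstrates a canary Rollout. For comparison, a blue-green Rollout is configured by pointing the blueGreen strategy at two Kubernetes Services: one that serves live traffic and one used to preview the new version. Below is a minimal sketch; the demo-app-active and demo-app-preview Services are assumed to exist separately:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-bluegreen
spec:
  replicas: 3
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: nginx
          image: nginx:latest
  strategy:
    blueGreen:
      activeService: demo-app-active    # Service receiving production (blue) traffic
      previewService: demo-app-preview  # Service for testing the new (green) version
      autoPromotionEnabled: false       # wait for a manual promote before switching traffic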

Argo Rollouts deployment workflow

The core Argo Rollouts workflow is as follows:

  1. Deploy your new app release.
  2. Test the new release.
    For blue-green deployments, developers perform this testing manually, whereas canary deployments are tested by a small percentage of real users. As you gain confidence in the canary, you can increase the proportion of traffic that’s directed to it.
  3. Once you’re sure the deployment has been successful, promote it to a full rollout.
    Argo will then remove the old deployment and ensure all traffic is directed to the new one. At this point, you can begin iterating on your next change, ready to repeat the cycle.

One of the key benefits of Argo Rollouts is that these steps can be automated, so you don’t have to keep checking your deployments before they proceed. For example, you could configure Argo to automatically increase the percentage of traffic that targets your canary deployment every 10 minutes, or specify that a rollout should be aborted if there’s a spike in HTTP error codes. This facilitates greater DevOps efficiency without compromising deployment reliability.
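
As a rough illustration of what such automation looks like in a Rollout’s canary strategy, the fragment below pauses for 10 minutes between traffic increases and runs an analysis step referencing a hypothetical AnalysisTemplate (such as the success-rate template sketched earlier); if the analysis fails, the rollout is aborted:

  strategy:
    canary:
      steps:
        - setWeight: 20
        - analysis:
            templates:
              - templateName: success-rate  # hypothetical AnalysisTemplate; a failed run aborts the rollout
        - setWeight: 40
        - pause:
            duration: 10m                   # wait 10 minutes before the next traffic increase
        - setWeight: 60
        - pause:
            duration: 10m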

Argo Rollouts use cases

As we’ve outlined above, Argo Rollouts enables more advanced Kubernetes deployment techniques. Here are some key use cases:

  • Expose new app releases to a limited group of users via blue-green or canary deployments.
  • Take control over rollout speed and progression, for example by expanding access to an additional 20% of traffic each hour, but only if no errors have occurred.
  • Automate rollbacks in the event of failures using metrics generated by external systems (such as request latency or failure data sourced from a Prometheus instance).
  • Use GitOps and declarative configuration to define your deployment rollouts and easily apply changes using IaC methods.

These benefits illustrate how Argo Rollouts fills in the blanks left by the built-in Kubernetes Deployment object.

Demo: How to use Argo Rollouts

Let’s walk through a tutorial on how to use Argo Rollouts to implement a canary deployment workflow in your Kubernetes cluster.

Before continuing, ensure Kubectl is connected to a cluster. You’ll also need Helm to make it easier to install Argo Rollouts.
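
A quick way to confirm both prerequisites are in place:

$ kubectl cluster-info    # should print your cluster's control plane address
$ helm version            # should print the Helm client version (v3.x)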

1. Install Argo Rollouts

First, register Argo’s Helm chart repository in your local Helm configuration:

$ helm repo add argo https://argoproj.github.io/argo-helm

Next, use Helm to install Argo Rollouts from its official chart:

$ helm install argo-rollouts argo/argo-rollouts \
  -n argo-rollouts \
  --create-namespace
NAME: argo-rollouts
LAST DEPLOYED: Tue Mar 19 14:37:31 2024
NAMESPACE: argo-rollouts
STATUS: deployed
REVISION: 1
TEST SUITE: None

Wait a few moments while Kubernetes pulls the required container images and starts your Pods.
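
You can check on the controller’s progress by watching the Pods in the argo-rollouts namespace:

$ kubectl get pods -n argo-rollouts --watch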

In the meantime, run the following commands to add the Argo Rollouts Kubectl plugin to your local system:

# If not on Linux, you can find alternative platform download links via the Argo Rollouts GitHub releases page:
# https://github.com/argoproj/argo-rollouts/releases
$ curl -LO https://github.com/argoproj/argo-rollouts/releases/latest/download/kubectl-argo-rollouts-linux-amd64

$ chmod +x kubectl-argo-rollouts-linux-amd64

$ sudo mv kubectl-argo-rollouts-linux-amd64 /usr/local/bin/kubectl-argo-rollouts

You should now be able to run kubectl argo rollouts commands successfully:

$ kubectl argo rollouts version
kubectl-argo-rollouts: v1.6.6+737ca89

Now, you’re ready to begin creating rollouts in your cluster!

2. Create a Rollout object

You need a Rollout object to configure the rollout strategy that will be used for your deployment. The Rollout should include either a template that configures the Pods to deploy or a reference to an existing Kubernetes Deployment. In this example, we’re defining the Pods with a template inside the Rollout object:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-rollout
spec:
  replicas: 10
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: nginx
          image: nginx:latest
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause: {}
        - setWeight: 40
        - pause:
            duration: 1m
        - setWeight: 60
        - pause:
            duration: 1m
        - setWeight: 80
        - pause:
            duration: 1m

This Rollout object specifies that 10 replicas of a Pod running the nginx:latest container image will be deployed. The canary strategy defines eight steps that alternate between four traffic weights and pauses: when a new version is rolled out, 20% of the Pods are replaced with the new deployment, and the rollout then pauses until it’s manually resumed. Once resumed, the deployment expands to replace another 20% of the Pods each minute until an 80% rollout is reached.

At this point, you should promote the rollout so it becomes the new live one — we’ll see how to do this in the following steps.
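
If you’d rather keep an existing Deployment object, the Rollout can reference it through spec.workloadRef instead of embedding a Pod template. This is a minimal sketch that assumes a Deployment named demo-app already exists in the cluster:

apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: demo-rollout
spec:
  replicas: 10
  selector:
    matchLabels:
      app: demo-app
  workloadRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app        # existing Deployment whose Pod template the Rollout adopts
  strategy:
    canary:
      steps:
        - setWeight: 20
        - pause: {}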

3. Deploy the Rollout

Save the manifest shown above, then use Kubectl to create the Rollout object in your cluster:

$ kubectl apply -f rollout.yml
rollout.argoproj.io/demo-rollout created

Now you can use the kubectl get rollout command to monitor the rollout’s progress:

$ kubectl get rollout
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
demo-rollout   10        10        10           10          3m41s

You can get more detailed information about the rollout by using the commands provided by the Argo Rollouts Kubectl plugin:

$ kubectl argo rollouts get rollout demo-rollout
Name:            demo-rollout
Namespace:       default
Status:          ✔ Healthy
Strategy:        Canary
  Step:          8/8
  SetWeight:     100
  ActualWeight:  100
Images:          nginx:latest (stable)
Replicas:
  Desired:       10
  Current:       10
  Updated:       10
  Ready:         10
  Available:     10

NAME                                   KIND        STATUS     AGE  INFO
⟳ demo-rollout                         Rollout     ✔ Healthy  21s  
└──# revision:1                                                    
   └──⧉ demo-rollout-ddb657d           ReplicaSet  ✔ Healthy  21s  stable
      ├──□ demo-rollout-ddb657d-4jd9c  Pod         ✔ Running  21s  ready:1/1
      ├──□ demo-rollout-ddb657d-7blsc  Pod         ✔ Running  21s  ready:1/1
      ├──□ demo-rollout-ddb657d-fx9jv  Pod         ✔ Running  21s  ready:1/1
      ...

Because this is the deployment’s initial rollout, the canary stage is immediately progressed through all the steps defined in the manifest. This results in the 10 Pod replicas becoming available. 

Now, let’s see what happens when an update is applied.

4. Update the Rollout

Changing the container image that your Rollout deploys is an easy way to simulate a change to your application. Open the manifest you created above, then change the image value of the nginx container (spec.template.spec.containers[0].image) to nginx:1.24. This will cause an older NGINX version to be deployed.

spec:
  containers:
    - name: nginx
      image: nginx:1.24
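
As an aside, the Rollouts Kubectl plugin can also update the image directly, without editing the manifest; we’ll stick with the manifest edit in this tutorial so the file remains the source of truth:

$ kubectl argo rollouts set image demo-rollout nginx=nginx:1.24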

Next, use Kubectl to apply the change to your cluster and initiate a new rollout:

$ kubectl apply -f rollout.yml
rollout.argoproj.io/demo-rollout configured

You can now repeat the kubectl argo rollouts get rollout command to monitor the rollout’s progress:

$ kubectl argo rollouts get rollout demo-rollout
Name:            demo-rollout
Namespace:       default
Status:          ॥ Paused
Message:         CanaryPauseStep
Strategy:        Canary
  Step:          1/8
  SetWeight:     20
  ActualWeight:  20
Images:          nginx:1.24 (canary)
                 nginx:latest (stable)
Replicas:
  Desired:       10
  Current:       10
  Updated:       2
  Ready:         10
  Available:     10

NAME                                      KIND        STATUS     AGE  INFO
⟳ demo-rollout                            Rollout     ॥ Paused   13m  
├──# revision:2                                                       
│  └──⧉ demo-rollout-55b9b67888           ReplicaSet  ✔ Healthy  10s  canary
│     ├──□ demo-rollout-55b9b67888-2ctw8  Pod         ✔ Running  9s   ready:1/1
│     └──□ demo-rollout-55b9b67888-b9svl  Pod         ✔ Running  9s   ready:1/1
└──# revision:1                                                       
   └──⧉ demo-rollout-ddb657d              ReplicaSet  ✔ Healthy  13m  stable
      ├──□ demo-rollout-ddb657d-4jd9c     Pod         ✔ Running  13m  ready:1/1
      ├──□ demo-rollout-ddb657d-7blsc     Pod         ✔ Running  13m  ready:1/1
      ├──□ demo-rollout-ddb657d-fx9jv     Pod         ✔ Running  13m  ready:1/1
      ...

You can see that two new Pods have been created in the canary stage, while eight old Pods have been retained. This is because our Rollout manifest specifies new deployments will only replace 20% of the Pods to begin with.

The rollout is Paused due to the manual trigger step we included. To resume the rollout, use the promote command:

$ kubectl argo rollouts promote demo-rollout
rollout 'demo-rollout' promoted

You’ll see that four of your Pods (40% of the total replica count) now run the canary deployment:

$ kubectl argo rollouts get rollout demo-rollout
Name:            demo-rollout
Namespace:       default
Status:          ॥ Paused
Message:         CanaryPauseStep
Strategy:        Canary
  Step:          3/8
  SetWeight:     40
  ActualWeight:  40
Images:          nginx:1.24 (canary)
                 nginx:latest (stable)
Replicas:
  Desired:       10
  Current:       10
  Updated:       4
  Ready:         10
  Available:     10

NAME                                      KIND        STATUS     AGE    INFO
⟳ demo-rollout                            Rollout     ॥ Paused   19m    
├──# revision:2                                                         
│  └──⧉ demo-rollout-55b9b67888           ReplicaSet  ✔ Healthy  6m41s  canary
│     ├──□ demo-rollout-55b9b67888-2ctw8  Pod         ✔ Running  6m40s  ready:1/1
│     ├──□ demo-rollout-55b9b67888-b9svl  Pod         ✔ Running  6m40s  ready:1/1
│     ├──□ demo-rollout-55b9b67888-kf5kb  Pod         ✔ Running  25s    ready:1/1
│     └──□ demo-rollout-55b9b67888-pczmk  Pod         ✔ Running  25s    ready:1/1
└──# revision:1                                                         
   └──⧉ demo-rollout-ddb657d              ReplicaSet  ✔ Healthy  19m    stable
      ├──□ demo-rollout-ddb657d-4jd9c     Pod         ✔ Running  19m    ready:1/1
      ├──□ demo-rollout-ddb657d-rhr6w     Pod         ✔ Running  19m    ready:1/1
      ├──□ demo-rollout-ddb657d-wkx5v     Pod         ✔ Running  19m    ready:1/1
      ...

Wait a couple of minutes, then repeat the command—you should see the rollout has automatically progressed up to 80%, as configured by the steps in your manifest file.
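
Instead of re-running the command manually, you can also follow the rollout live with the --watch flag:

$ kubectl argo rollouts get rollout demo-rollout --watch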

5. Promote the Rollout

Because no further steps are defined after the 80% stage, the rollout will halt indefinitely once it’s reached. To complete the rollout, run the promote command again—this will replace all the remaining old Pods with new ones that run the updated deployment.

$ kubectl argo rollouts promote demo-rollout
rollout 'demo-rollout' promoted

Now, only the latest rollout revision will be active:

Name:            demo-rollout
Namespace:       default
Status:          ✔ Healthy
Strategy:        Canary
  Step:          8/8
  SetWeight:     100
  ActualWeight:  100
Images:          nginx:1.24 (stable)
Replicas:
  Desired:       10
  Current:       10
  Updated:       10
  Ready:         10
  Available:     10

NAME                                      KIND        STATUS        AGE    INFO
⟳ demo-rollout                            Rollout     ✔ Healthy     26m    
├──# revision:2                                                            
│  └──⧉ demo-rollout-55b9b67888           ReplicaSet  ✔ Healthy     13m    stable
│     ├──□ demo-rollout-55b9b67888-2ctw8  Pod         ✔ Running     13m    ready:1/1
│     ├──□ demo-rollout-55b9b67888-b9svl  Pod         ✔ Running     13m    ready:1/1

We’ve successfully used Argo Rollouts to implement a canary deployment workflow in Kubernetes! However, we’ve only covered the basics — there are plenty more features to discover in the documentation, including automatic rollbacks, advanced traffic routing configuration, and the ability to launch experiments that involve multiple app versions.
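
For example, in this demo the canary weight was approximated by scaling Pod counts. With a traffic-routing provider such as the NGINX Ingress controller, the weight can instead be applied at the request level. Here is a rough sketch of the extra configuration, assuming demo-app-stable and demo-app-canary Services and a demo-app-ingress Ingress already exist:

  strategy:
    canary:
      canaryService: demo-app-canary      # Service selecting only canary Pods
      stableService: demo-app-stable      # Service selecting only stable Pods
      trafficRouting:
        nginx:
          stableIngress: demo-app-ingress # existing Ingress managed by the NGINX Ingress controller
      steps:
        - setWeight: 10
        - pause: {}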

IaC management for Kubernetes

Although Argo Rollouts provides more control over how app revisions reach your cluster, it doesn’t help you apply changes to the cluster itself. Try a dedicated CI/CD platform like Spacelift to simplify IaC management for Kubernetes and stay on top of your infrastructure. This gives you the most powerful Kubernetes experience when operating your clusters at scale.

Spacelift brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change. It also has an extensive selection of policies, which lets you automate compliance checks and build complex multi-stack workflows.

If you want to learn more, create a free account today or book a demo with one of our engineers.

Key points

Argo Rollouts provides a Kubernetes controller and CRDs that implement support for additional rollout strategies beyond those enabled by plain Deployments and ReplicaSets. It lets you progressively deliver changes to users via canary or blue-green deployments that are only promoted once you’ve verified there are no issues. You can connect metrics sources to automate progression and rollback events, improving deployment performance and safety.

