
Kubernetes Readiness Probe – Guide & Examples

In this article, we will take a look at readiness probes in Kubernetes (K8s): what they are, when and why you would use them, and how they differ from startup and liveness probes. We will then look at common problems with readiness probes and their solutions, and finally share some best practices for configuring them. It’s a readiness probe 101!

We will cover:

  1. What is a readiness probe in Kubernetes?
  2. When to use readiness probes?
  3. How do readiness probes work?
  4. How to configure readiness probes?
  5. How to disable the readiness probe in Kubernetes?
  6. Kubernetes readiness probes failures and how to fix them
  7. Best practices for using readiness probes

What is a readiness probe in Kubernetes?

Consider you have an application running in K8s that takes a while to start up, because it needs to load data, access large configuration files, or access other services before it is ready. You don’t want to start serving requests to that service before it is fully ready to respond correctly, as sending traffic to it prematurely could result in errors or degraded performance, which is where Readiness probes come in.

A readiness probe is a Kubernetes health check in which you set conditions that Kubernetes evaluates to determine whether a container is ready to receive traffic. The condition is usually a check that a specific TCP port is open or that an HTTP endpoint responds successfully. Correctly configured readiness probes are an essential part of ensuring the availability and stability of applications running in your cluster.

If the readiness probe succeeds, K8s considers the container ready and directs traffic to it. If any container in a pod reports ‘not ready’ via its readiness probe, the pod is removed from Service endpoints and stops receiving traffic.

What is the difference between startup and readiness probes?

Like readiness probes, startup probes check the state of a container, but they are designed for legacy applications that take a long time to complete their first initialization.

Startup probes are only used once during the initialization of the app, whereas readiness probes periodically check the app on a defined interval, continuously verifying the container’s health and readiness to serve traffic throughout its lifecycle. Startup probes can be put in place as a mechanism to delay readiness probe checks.
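This delaying mechanism can be sketched as a minimal pod spec. The image name and /healthz endpoint below are illustrative; until the startup probe succeeds, Kubernetes does not run the readiness (or liveness) probe at all:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-start-pod
spec:
  containers:
  - name: legacy-app
    image: legacy-app-image        # illustrative image name
    startupProbe:
      httpGet:
        path: /healthz             # illustrative endpoint
        port: 8080
      failureThreshold: 30         # allow up to 30 * 10s = 300s for first start
      periodSeconds: 10
    readinessProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10            # runs only after the startup probe has succeeded
```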

What is the difference between readiness and liveness probes?

Readiness probes and liveness probes are both mechanisms used to ensure the reliability and availability of containers in a pod, and both check the container periodically after it starts.

Liveness probes are used to determine whether a container is still running and functioning correctly. They check if the container is alive and responsive and are used to detect and recover from situations where a container becomes unresponsive or gets stuck in an error state. They can help ensure that your application remains available by automatically restarting failed containers.

Readiness probes are used to determine whether a container is ready to accept incoming traffic. They check if the container is in a state where it can safely handle requests.
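To make the distinction concrete, here is a minimal sketch of a container spec that defines both probes, assuming illustrative /livez and /readyz endpoints. A liveness failure restarts the container, whereas a readiness failure only removes the pod from Service endpoints:

```yaml
spec:
  containers:
  - name: example-container
    image: example-image
    livenessProbe:
      httpGet:
        path: /livez     # illustrative: cheap "process is alive" check
        port: 8080
      periodSeconds: 10  # failure here triggers a container restart
    readinessProbe:
      httpGet:
        path: /readyz    # illustrative: "dependencies are ready" check
        port: 8080
      periodSeconds: 5   # failure here only pauses traffic to the pod
```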

When to use readiness probes?

Readiness probes are useful in any scenario where you want to ensure that a container is fully prepared to handle incoming network traffic before it starts receiving requests.

They are primarily used whenever your application or container has initialization tasks that take some time to complete before it can handle requests, such as loading configuration, establishing database connections, connecting to message brokers and other microservices, or warming up caches.

Readiness probes can also help orchestrate the sequence in which components of your stateful applications are made available, where you may need to ensure that certain components or services are fully operational before they can interact with other parts of the application.

They can also play a role in ensuring the smooth scaling of pods and conducting rolling updates. By using readiness probes, you can prevent new pods from receiving traffic until they are ready, and you can avoid sending traffic to pods that might be experiencing issues due to an update.

You should also consider resource constraints in your cluster where a container might need extra time to handle requests under high loads or after recovering from resource exhaustion. Readiness probes can be used to delay routing traffic until the container is sufficiently recovered.

How do readiness probes work?

Readiness probes are executed by the kubelet running on each node, the component responsible for monitoring and managing the containers on that node.

When the readiness probe fails for a container, Kubernetes removes the pod from the matching Service’s list of endpoints, so incoming traffic is no longer routed to it. The container’s readiness is then periodically rechecked based on the periodSeconds setting.

If the container becomes ready again, the pod is added back to the Service endpoints, and traffic is directed to it once more.

Common types of readiness probes include:

  1. HTTP Probe: This sends an HTTP GET request to a specified endpoint in the container and considers the check successful if it receives a response with a status code from 200 to 399.
  2. TCP Probe: This checks if a specific port is open and listening on the container.
  3. Command Probe: This executes a custom command within the container and considers the container ready if the command returns a zero exit code.

You define a readiness probe in the YAML specification of a K8s pod or container. This configuration includes the probe type (HTTP, TCP, or Command), the probe’s parameters, and timing settings, such as the initialDelaySeconds and periodSeconds. The initialDelaySeconds specifies how long to wait after the container starts before the probe is first executed, and the periodSeconds defines the interval between subsequent probe checks.
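Beyond initialDelaySeconds and periodSeconds, a readiness probe also accepts timeoutSeconds, successThreshold, and failureThreshold. A minimal sketch of the full timing block (the endpoint and values are illustrative):

```yaml
readinessProbe:
  httpGet:
    path: /testpath
    port: 8080
  initialDelaySeconds: 15  # wait 15s after the container starts
  periodSeconds: 10        # then probe every 10s
  timeoutSeconds: 1        # each probe must respond within 1s
  successThreshold: 1      # one success marks the container ready again
  failureThreshold: 3      # three consecutive failures mark it not ready
```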

How to configure readiness probes?

Let’s take a look at an example YAML configuration for each type of readiness probe.

HTTP readiness probe example

In this example we have a container running a web server, and the readiness probe is configured to check if the web server is responding with an HTTP status code of 200 on the “/testpath” endpoint.

  • httpGet specifies an HTTP check.
  • path is set to “/testpath,” which is the endpoint where the readiness check will be performed.
  • port is set to 8080, which corresponds to the container’s port.
  • initialDelaySeconds specifies that the probe should start 15 seconds after the container starts.
  • periodSeconds specifies that the probe will be repeated every 10 seconds after the initial delay.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: example-image
    ports:
    - containerPort: 8080
    readinessProbe:
      httpGet:
        path: /testpath
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
```

TCP readiness probe example

With this configuration, Kubernetes will periodically check if the container is able to accept TCP connections on port 8080.

  • tcpSocket specifies a TCP check.
  • port is set to 8080, matching the container’s port.
  • initialDelaySeconds specifies that the probe should start 15 seconds after the container starts.
  • periodSeconds specifies that the probe will be repeated every 10 seconds after the initial delay.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example-container
    image: example-image
    ports:
    - containerPort: 8080
    readinessProbe:
      tcpSocket:
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
```

Command readiness probe example

In this example, the readiness probe is configured to execute a custom script inside the container. The script is responsible for performing the readiness check: it should return a zero exit code if the check succeeds and a non-zero exit code if it fails.

  • exec specifies a Command check.
  • command is the command (and its arguments) to run inside the container. In this example, we invoke a shell to run a script named “check-script.sh,” which must exist inside the container image.
  • initialDelaySeconds specifies that the probe should start 20 seconds after the container starts.
  • periodSeconds specifies that the probe will be repeated every 15 seconds after the initial delay.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod
spec:
  containers:
  - name: my-app-container
    image: my-app-image
    ports:
    - containerPort: 80
    readinessProbe:
      exec:
        command:
        - /bin/sh
        - -c
        - check-script.sh
      initialDelaySeconds: 20
      periodSeconds: 15
```

How to disable the readiness probe in Kubernetes?

Unless you explicitly specify a readinessProbe in your YAML configuration, Kubernetes will not perform any readiness checks. To disable an existing check, simply remove the readinessProbe field from the container spec.

Kubernetes readiness probes failures and how to fix them

Some common problems with readiness probes include the following.

Container takes too long to start

  • Issue: If your container’s initialization tasks take longer than the initialDelaySeconds of the readiness probe, the probe may fail.
  • Solution: Adjust the initialDelaySeconds to a value that allows the container sufficient time to start and complete its initialization. Additionally, optimize your container’s startup process to reduce the time it takes to become ready.
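For example, if initialization can take up to a minute, a probe along these lines (the values are illustrative) delays the first check and tolerates a few slow cycles before marking the container not ready:

```yaml
readinessProbe:
  httpGet:
    path: /testpath
    port: 8080
  initialDelaySeconds: 60  # give slow initialization a full minute
  periodSeconds: 10
  failureThreshold: 6      # tolerate up to ~60s of further failures
```

For very long or unpredictable start times, a startup probe is often a cleaner fix than a large initialDelaySeconds.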

Service or endpoint is not ready

  • Issue: If your container relies on external services or dependencies (e.g., a database) that are not ready when the readiness probe runs, it can result in a failure. In some cases, race conditions can occur if your application’s initialization relies on external factors. For example, your container may be ready, but external components are not yet in sync.
  • Solution: Ensure that the external services or dependencies are ready before the container starts. You can use tools like Helm Hooks or init containers to coordinate the readiness of these components with your application. Implement proper synchronization mechanisms in your application to handle race conditions. This may involve using locks, retry mechanisms, or coordination with external components.
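One way to implement this coordination is an init container that blocks pod startup until a dependency accepts connections. The Service name, port, and image names below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-deps
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    # Poll the (illustrative) database Service until it accepts TCP connections
    command: ['sh', '-c', 'until nc -z db-service 5432; do echo waiting for db; sleep 2; done']
  containers:
  - name: app
    image: my-app-image           # illustrative image name
    readinessProbe:
      httpGet:
        path: /readyz             # illustrative endpoint
        port: 8080
```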

Incorrect configuration of the readiness probe

  • Issue: A misconfigured readiness probe, such as an incorrect path or port, can cause probe failures.
  • Solution: Double-check the readiness probe configuration in your pod’s YAML file. Ensure that the path, port, and other parameters are correctly specified.

Application bugs or issues

  • Issue: If your application has bugs or issues that prevent it from becoming ready, such as unhandled exceptions, misconfigurations, or issues with external dependencies, it can result in readiness probe failures.
  • Solution: Debug and resolve application issues. Review application logs and error messages to identify the specific problems that prevent the application from becoming ready. Fix any bugs or misconfigurations in your application code or deployment.

Resource constraints

  • Issue: If your container is running with resource constraints (CPU or memory limits), it might not have the resources it needs to become ready, especially under heavy loads.
  • Solution: Adjust the resource limits to provide the container with the necessary resources. You may also need to optimize your application to use resources efficiently.
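A sketch of a container spec with explicit requests and limits alongside its readiness probe (the values are illustrative starting points, not recommendations):

```yaml
spec:
  containers:
  - name: example-container
    image: example-image
    resources:
      requests:
        cpu: 250m          # the scheduler reserves this much for the container
        memory: 256Mi
      limits:
        cpu: 500m          # hard ceilings; limits set too low can starve startup
        memory: 512Mi
    readinessProbe:
      httpGet:
        path: /testpath
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 10
```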

Competing or conflicting liveness & readiness probes

  • Issue: If you have misconfigured liveness and readiness probes, they might interfere with each other, causing unexpected behavior.
  • Solution: Ensure that your probes are configured correctly and serve their intended purposes. Make sure that the settings of both probes do not conflict with each other.
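One common pattern is to make the readiness probe react faster than the liveness probe, so a struggling container is taken out of rotation well before it is restarted. The endpoints and values below are illustrative:

```yaml
readinessProbe:
  httpGet:
    path: /readyz        # illustrative endpoint
    port: 8080
  periodSeconds: 5
  failureThreshold: 3    # out of rotation after ~15s of failures
livenessProbe:
  httpGet:
    path: /livez         # illustrative endpoint
    port: 8080
  periodSeconds: 10
  failureThreshold: 6    # restarted only after ~60s, giving readiness time to react
```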

Cluster issues

  • Issue: Sometimes, Kubernetes cluster issues, such as kubelet or networking problems, can result in probe failures.
  • Solution: Monitor your cluster for any issues or anomalies and address them according to Kubernetes best practices. Ensure that kubelet and other components are running smoothly.

Best practices for using readiness probes

Let’s take a look at some best practices.

1. Define readiness probes for all containers in your pods

As a best practice, define readiness probes for all containers in your pods. This ensures that K8s can manage the readiness of each container individually, even in multi-container pods.

2. Choose the right probe type (HTTP, TCP, or Command)

When defining your readiness probes, be sure to choose the right probe type (HTTP, TCP, or Command) based on the nature of your application and what you want to check. HTTP probes are often used for web services, while TCP probes are more suitable for simple connectivity checks.

3. Remember about proper configuration

Once you have decided on the type of probe, correct configuration is key to the smooth operation of your cluster.

Be sure to configure the initialDelaySeconds appropriately to allow your containers enough time to start and perform any necessary initialization tasks before the readiness probe begins. The value should be greater than or equal to the time your container needs to become ready.

Set the periodSeconds (the interval at which probes run) to an appropriate value. It should be frequent enough to detect issues promptly, but not so frequent that it overloads your application or the cluster with probe requests.

4. Create lightweight dedicated endpoints

Within your application, in the case of HTTP probes, it can be a good idea to create dedicated endpoints (e.g., “/readiness”) that are lightweight and specifically designed for readiness checks. These endpoints should return a simple “200 OK” response if the container is ready.

Your application should also allow for graceful transitions caused by the readiness probes. For example, stop accepting new requests during the readiness check and complete in-flight requests before marking the container as ready or not ready.
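Pointed at such an endpoint, the probe itself stays small. This sketch assumes an illustrative /readiness path served on the container’s port:

```yaml
readinessProbe:
  httpGet:
    path: /readiness   # dedicated, lightweight endpoint returning 200 OK when ready
    port: 8080
  periodSeconds: 10
```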

5. Perform regular reviews

You should regularly review, test, and optimize your readiness probe configuration in a non-production environment as your application evolves. Be prepared to adjust parameters and checks to reflect changing requirements, and don’t forget to set up monitoring and alerting for readiness probe failures.

Key points

By using readiness probes, you ensure that your application or microservice deployment is robust and that traffic is routed only to containers that are fully prepared to handle requests.

Using these alongside startup and liveness probes where appropriate will help maintain the stability and availability of your application in a K8s cluster.

If you need any assistance with managing your Kubernetes projects, take a look at Spacelift. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change. It also has an extensive selection of policies, which lets you automate compliance checks and build complex multi-stack workflows. You can try it for free by creating a trial account.
