Kubernetes is the most popular container orchestrator and has become a go-to technology for DevOps teams. It simplifies container deployment and management tasks so you can efficiently run apps in production, without worrying too much about infrastructure requirements.
Because Kubernetes has such broad functionality, it can be confusing to work out which situations it applies to. In this article, we’ll explore 12 main use cases so you can evaluate whether Kubernetes is suitable for your team or organization.
Kubernetes (often abbreviated to K8s) is a distributed computing system for containerized applications. Kubernetes installations are termed clusters; they comprise multiple compute Nodes (physical or virtual machines) that provide strong redundancy and scalability characteristics.
Kubernetes ultimately runs containers using the same technologies shared by developer-oriented platforms like Docker. However, Kubernetes also includes extensive storage, networking, access control, and cloud vendor integration capabilities that make it ideally suited to operating cloud-native apps in production.
With Kubernetes, you can easily scale hundreds or thousands of replicas of your apps across your Nodes using declarative config files. Your cluster’s control plane components do the hard work of starting containers and ensuring they stay running. These fundamental capabilities, together with the range of supporting features available, make Kubernetes applicable to virtually all cloud computing scenarios where performance, reliability, and scalability are important.
Here are the key ways in which Kubernetes can be useful to DevOps teams. While this is far from an exhaustive list, it’s indicative of the types of workloads and processes that Kubernetes helps to simplify.
What is Kubernetes best used for?
- Deploying microservices
- Running apps at scale
- Creating your own serverless/PaaS platforms
- Making apps portable across clouds
- Executing CI/CD pipelines and DevOps processes
- Running AI, ML, and Big Data workloads
- Hosting developer environments
- Configuring automated workflows and scheduled jobs
- Simplifying cloud networking
- Running multi-tenant apps and services
- Simplifying hybrid/multicloud deployments
- Improving app resiliency and redundancy
Microservices are a natural fit for Kubernetes clusters. Microservices architectures deploy your app as several independent components that are networked together using technologies such as service meshes.
With Kubernetes, you can launch your microservices into your cluster, then scale them independently across your Nodes. High-traffic services such as your authorization layer can be easily configured with more capacity than lesser-used ones. You can interact with and monitor your microservices using the Kubernetes API.
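As a sketch of what this looks like in practice, a hypothetical high-traffic authorization microservice could be given more capacity than lesser-used services simply by setting a higher replica count in its Deployment (the names and image reference here are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service        # hypothetical high-traffic microservice
spec:
  replicas: 5               # scaled independently of other services
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
        - name: auth
          image: example.com/auth-service:1.0  # illustrative image reference
          ports:
            - containerPort: 8080
```

Each microservice gets its own Deployment like this, so each can be scaled, updated, and monitored independently.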
Kubernetes simplifies scaling your apps to accommodate growth. You can add Nodes to your cluster to increase available compute capacity, then raise your Kubernetes Deployment’s replica count to roll out new instances of your app onto the extra Nodes, scaling it horizontally.
It’s possible to fully automate this process when you’re running Kubernetes in the cloud. Cluster autoscaling via cloud integrations provisions new Nodes in your infrastructure provider account when your cluster needs additional capacity. At the workload level, the Horizontal Pod Autoscaler continually adjusts your Deployment replica counts to match current usage.
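As an example of Pod-level autoscaling, a HorizontalPodAutoscaler resource like the following keeps replica counts in step with observed CPU usage (the target Deployment name is illustrative; this uses the stable `autoscaling/v2` API):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                  # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add Pods when average CPU exceeds 70%
```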
Kubernetes is an ideal foundation for developing internal Platform-as-a-Service (PaaS) and serverless solutions. Platform engineering teams can use Kubernetes to develop their own higher-level abstractions that allow developers to rapidly deploy new apps, without learning the intricacies of Kubernetes themselves.
As an example, platform engineers could write an API or CLI tool that lets developers request deployment of a container image they’ve built. That tool could then create templated resources in a Kubernetes cluster that start the actual deployment, without making devs directly responsible for any of the infrastructure.
Kubernetes increases the multi-cloud portability of your applications. It abstracts away most of the differences between clouds, allowing you to reliably run containerized apps, complete with networking and storage resources, in any environment. Kubernetes deployments should perform identically in any cluster they’re added to—from a local environment on your workstation, to clusters deployed from various cloud providers.
Need to move from AWS to Azure, or vice versa? If you’re using Kubernetes, then this migration should be much simpler than when using legacy technologies. You can start a new Kubernetes cluster in your Azure account, then redeploy your apps to it. This takes the pain out of accurately replicating manually provisioned infrastructure after switching providers.
Kubernetes is a good option for executing your CI/CD pipeline jobs and other DevOps processes. Because Kubernetes is so scalable, it’s resilient no matter how many jobs are running. If it’s the weekend and few developers are around, you can scale down and save costs; conversely, on busy days, your cluster can autoscale up to serve thousands of jobs concurrently, without impacting performance.
Kubernetes also supports strong CI/CD security practices through the use of agents that run in your cluster and pull changes from your source repositories. This model minimizes unnecessary exposure of CI/CD environments, compared with push-based workflows where changes are sent from your CI/CD service to job runners.
Why use Spacelift with Kubernetes?
Spacelift helps you manage the complexities and compliance challenges of using Kubernetes. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change. It also has an extensive selection of policies, which lets you automate compliance checks and build complex multi-stack workflows.
You can also use Spacelift to mix and match Terraform, Pulumi, CloudFormation, and Kubernetes Stacks and have them talk to one another. For example, you can set up Terraform Stacks to provision the required infrastructure (like an ECS/EKS cluster with all its dependencies) and then deploy your application onto it via a Kubernetes Stack.
Kubernetes is also a good fit for artificial intelligence (AI), machine learning (ML), and data analysis scenarios. These tasks involve processing huge amounts of data within computationally intensive pipelines. It’s important to gain visibility into what’s being processed and the outputs being produced.
Kubernetes clusters support all of these requirements. The ease of scalability ensures stable performance for demanding workloads, while minimizing associated costs. Kubernetes also helps you automate the processing of newly ingested data, such as by using scheduled jobs to periodically import and retrain your models.
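Scheduled retraining of a model, for instance, can be expressed as a Kubernetes Job that requests GPU capacity (the image is illustrative; `nvidia.com/gpu` is the resource name exposed by the common NVIDIA device plugin, which must be installed on the Node):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: retrain-model
spec:
  backoffLimit: 2              # retry the training run up to twice on failure
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: trainer
          image: example.com/model-trainer:latest  # illustrative image
          resources:
            limits:
              nvidia.com/gpu: 1   # requires the NVIDIA device plugin on the Node
```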
Kubernetes is a good fit for on-demand developer environments that allow you to build and test new changes in realistic configurations without requiring dedicated infrastructure to be provisioned. Using Kubernetes, multiple developers are able to work within one cluster, creating and destroying deployments as they work on each change.
This model often proves more efficient than traditional alternatives, such as having every developer work locally on their own machine. Using hosted environments in a Kubernetes cluster provides more opportunity to standardize developer tools and configurations while also simplifying collaboration around work-in-progress changes deployed to staging areas.
The container orchestration and scheduling capabilities that are core to Kubernetes render it suitable for many workflow automation tasks. For example, you can use Jobs and CronJobs to execute steps within your workflows, either on-demand or on a regularly recurring schedule. Log output from the containers that run your jobs can then be retrieved using the Kubernetes APIs.
As your Pods and containers are self-contained ephemeral environments, they’re ideal for workflow-driven use cases. You don’t have to maintain dedicated infrastructure to run your jobs and it’s easy to run steps in parallel or based on complex dependency graphs. Separate tools such as Argo Workflows can transform Kubernetes into a purpose-built workflow execution engine.
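As an illustration, a CronJob that runs a nightly data-import step could look like this (the schedule and image are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-import
spec:
  schedule: "0 2 * * *"        # every day at 02:00, in standard cron syntax
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: import
              image: example.com/data-importer:latest  # illustrative image
```

Each scheduled run creates a new Job, and the logs of its Pods remain retrievable through the Kubernetes API after the run completes.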
Kubernetes can help remove the complexity from cloud networking. Networking within your cluster is managed by Kubernetes, allowing you to easily connect different services together without having to configure host-level networking rules. Kubernetes networking is standardized so it’ll work across cloud providers, removing the need to learn per-cloud networking features.
Kubernetes also simplifies how you publicly expose services as HTTP routes. Ingress resources define routes to the services in your cluster and can automatically provision required load balancers in your cloud account. This is another way in which Kubernetes lets you focus on networking outcomes, instead of handcrafting how they’re achieved.
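A minimal Ingress sketch that routes a public hostname to an in-cluster Service (the hostname and service name are hypothetical) could look like:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
    - host: app.example.com      # hypothetical public hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # in-cluster Service receiving the traffic
                port:
                  number: 80
```

Depending on the Ingress controller installed in your cluster, applying this resource can trigger automatic provisioning of a cloud load balancer.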
Kubernetes is an efficient way to run multi-tenant systems. Whether you want to deploy multiple apps and environments, or need to segregate resources by team, Kubernetes features such as namespaces and role-based access control (RBAC) can be used to divide clusters into logical slices for each of your tenants.
Containerization also helps ensure tenants stay separated from each other. You’ll still need to follow Kubernetes and container security best practices, but these are often more easily and reliably enforced compared to DIY multi-tenancy on bare metal servers.
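A sketch of per-tenant isolation using a Namespace and an RBAC RoleBinding (the tenant and group names are made up) might look like this:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a               # one logical slice per tenant
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: tenant-a-editors
  namespace: tenant-a
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: edit                   # built-in ClusterRole, scoped here to one namespace
subjects:
  - kind: Group
    name: tenant-a-team        # hypothetical group from your identity provider
    apiGroup: rbac.authorization.k8s.io
```

Binding the built-in `edit` ClusterRole within a single namespace grants the tenant’s team broad permissions over their own slice without any access to other tenants’ resources.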
Earlier on, we discussed how Kubernetes can help you migrate apps between clouds. However, clusters also facilitate cross-cloud deployments, where different services within your system run in various cloud and on-premises environments.
Once you’ve set up a cluster, you can connect all your compute resources to use them as Nodes—whether they’re in the same cloud or hosted on another provider. Nodes don’t need to be immediately adjacent to your cluster control plane, letting you combine resources across datacenters and even use your existing on-premises equipment as part of a hybrid cluster. This provides more flexibility to benefit from the unique features of each infrastructure option while exposing a single app deployment target.
Using Kubernetes is a surefire way to enhance your app’s resiliency and redundancy. When you declaratively define your Deployments, Kubernetes continually works to keep the specified number of replicas ready. If one replica fails, another is immediately started to replace it.
This resilience also extends to outages affecting your compute Nodes. When a Node goes offline, Kubernetes will reschedule its workloads to the other Nodes in your cluster. This all happens automatically so your app benefits from continual reliability and high availability.
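To keep a minimum number of replicas available even during voluntary disruptions such as Node drains and cluster upgrades, you can also add a PodDisruptionBudget (the label selector here is illustrative):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2              # never take the app below two ready replicas
  selector:
    matchLabels:
      app: web                 # hypothetical app label on the target Pods
```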
This article has explored 12 of the main use cases for Kubernetes, the most popular container orchestration system. From serving your development environments to running massive microservices architectures, much of the success of Kubernetes has been down to its versatility and the ease with which it supports all kinds of cloud workloads required by DevOps teams.
Manage Kubernetes Easier and Faster
Spacelift allows you to automate, audit, secure, and continuously deliver your infrastructure. It helps overcome common state management issues and adds several must-have features for infrastructure management.