Kubernetes is the leading container orchestrator for cloud native systems, but it’s often challenging to get your own cluster up and running. Several Kubernetes distributions are available, each with its own deployment methods and supporting features. Choosing between them can be hard.
In this article, we’ll show four different ways to launch a cluster. Self-hosting Kubernetes helps you understand how the system works and can be more cost-effective than using a managed cloud service such as Amazon EKS. It’s not too difficult either, as this guide should convince you.
We will cover Minikube, MicroK8s, K3s, and Kubeadm.
Minikube is a tool that’s designed to simplify the Kubernetes deployment experience on your local machine. It provisions an entire Kubernetes environment inside a Docker container or a virtual machine.
You need either Docker or a VM provider installed before you can use Minikube. It supports QEMU, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, and VMware; the options available on your host will be automatically detected and selected. Docker is the preferred choice.
Step 1: Install Minikube
To install Minikube, first download the latest stable binary:
$ curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
Next, move the binary into a location that’s in your path:
$ sudo install minikube-linux-amd64 /usr/local/bin/minikube
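Before starting a cluster, you can confirm the binary is correctly installed and on your path (the version shown will depend on the release you downloaded):

```shell
# Verify the installation; prints the installed Minikube version
minikube version
```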
These instructions apply to Linux systems with an x86/64 architecture. You can find the steps for other platforms within the Minikube documentation.
Step 2: Start your cluster
After completing the installation, run minikube start to bring up your cluster:
$ minikube start
It can take several minutes before your cluster is ready to use. Progress will be shown in your terminal.
If you don’t have Kubectl already installed, you can use the version that’s bundled with Minikube:
$ minikube kubectl -- get nodes
NAME       STATUS   ROLES           AGE   VERSION
minikube   Ready    control-plane   10s   v1.26.1
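If you use the bundled Kubectl regularly, a shell alias saves typing. This is an optional convenience; the alias name is your choice:

```shell
# Alias the bundled Kubectl so plain "kubectl" commands work in this shell
alias kubectl="minikube kubectl --"
kubectl get pods -A
```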
Step 3: Enable Addons
Minikube bundles several optional addons for additional cluster functionality. These include ingress, which enables a default Ingress controller; dashboard, which deploys the Kubernetes Dashboard; and registry, which hosts a container image registry within your cluster.
To view all the available addons, run the minikube addons list command:
$ minikube addons list
Addons are activated with minikube addons enable:
$ minikube addons enable ingress
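After enabling the addon, you can check that the Ingress controller started successfully. The addon deploys ingress-nginx, which typically runs in an ingress-nginx namespace:

```shell
# Confirm the Ingress controller Pods are running
minikube kubectl -- get pods -n ingress-nginx
```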
Step 4: Stop your cluster
One of the advantages of Minikube is the ease with which you can stop and delete your cluster:
# Stop your cluster
$ minikube stop

# Stop your cluster and delete all its data
$ minikube delete
This makes Minikube a great choice for quick tests and experiments where you want to clean up your environment immediately afterwards. Running minikube delete, followed by minikube start, will restore your cluster to a completely clean slate.
When to Use Minikube?
Minikube is a great option when you’re using Kubernetes locally on your development machine. It’s lightweight, supports several virtualization solutions, and makes it easy to enable popular addons such as Ingress, Metrics Server, and Dashboard.
Minikube is less suitable for production workloads because it doesn’t support multi-Node workloads across multiple physical hosts. You can create a multi-Node cluster, but this only produces virtual Nodes on your existing host.
MicroK8s is a lightweight all-in-one Kubernetes distribution that’s maintained by Canonical, the maker of the Ubuntu operating system.
MicroK8s is based on unmodified upstream Kubernetes releases. It’s suitable for production use as it supports multi-Node clusters, comes with sane defaults for commonly used options, and bundles popular addons to simplify your setup experience.
Step 1: Install MicroK8s
Installers for Windows and macOS are available from the MicroK8s website.
On Linux, MicroK8s is only distributed using Canonical’s Snap packaging format.
Run the following command to install MicroK8s and start your cluster:
$ sudo snap install microk8s --classic
Afterwards, you should add your user account to the microk8s group so you can run microk8s commands without encountering permissions errors:
$ sudo usermod -a -G microk8s $USER
$ newgrp microk8s
Step 2: Interact with your cluster
MicroK8s bundles a version of Kubectl which you can access using the microk8s kubectl command:
$ microk8s kubectl get nodes
NAME       STATUS   ROLES    AGE    VERSION
ubuntu22   Ready    <none>   103m   v1.27.2
Now you can deploy resources into your cluster using familiar Kubectl commands.
To use an existing Kubectl installation, you should run the microk8s config command to export your cluster’s connection details to a Kubeconfig file:
$ microk8s config > ~/microk8s.kubeconfig
$ KUBECONFIG=~/microk8s.kubeconfig kubectl get pods
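To avoid setting KUBECONFIG on every command, you can merge the exported file into your default config. This is a standard Kubectl technique rather than anything MicroK8s-specific; back up your existing config first:

```shell
# Merge the MicroK8s config into your default Kubeconfig
cp ~/.kube/config ~/.kube/config.bak
KUBECONFIG=~/.kube/config:~/microk8s.kubeconfig kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config
```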
Step 3: Enable Addons
MicroK8s has several optional addons which enable features such as Ingress, HostPath storage, RBAC, and the Kubernetes Dashboard, in addition to popular community software including Cert-Manager, Minio, and Prometheus.
To view available and enabled addons, run the microk8s status command:
$ microk8s status
microk8s is running
high-availability: no
  datastore master nodes: 127.0.0.1:19001
  datastore standby nodes: none
addons:
  enabled:
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
  disabled:
    cert-manager         # (core) Cloud native certificate management
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
Addons are enabled with the microk8s enable command:
$ microk8s enable dashboard
Enabling an addon will usually output information that helps you to get started using it.
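For the Dashboard addon specifically, MicroK8s includes a helper that sets up a proxy and prints a login token, which is usually the quickest way in:

```shell
# Start a proxy to the Dashboard and print an access token
microk8s dashboard-proxy
```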
Step 4: Add More Nodes
MicroK8s is ideal for both local and production environments.
To add another Node to your cluster, first install MicroK8s on the new machine, then run microk8s add-node on your existing host. This will output the command to run on the new Node to join it to your cluster.
$ microk8s add-node
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.122.210:25000/b346782cc8956830924c04f2cf1b1745/dadf654db615

Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.122.210:25000/b346782cc8956830924c04f2cf1b1745/dadf654db615 --worker

If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.122.210:25000/b346782cc8956830924c04f2cf1b1745/dadf654db615
microk8s join 192.168.123.1:25000/b346782cc8956830924c04f2cf1b1745/dadf654db615
microk8s join 172.17.0.1:25000/b346782cc8956830924c04f2cf1b1745/dadf654db615
When to Use MicroK8s?
MicroK8s is a good choice when you want to standardize on one Kubernetes distribution across all your infrastructure, from developer workstations to production servers. It supports simple multi-Node deployments that require minimal manual configuration.
One downside of MicroK8s is its dependence on the Snap packaging format, which is the only way it’s distributed on Linux. This rules it out on Linux distributions without Snap support, or where an administrator has disabled Snaps.
K3s is a tiny Kubernetes distribution that ships as a single binary under 70MB. Created by SUSE Rancher, it’s now a CNCF sandbox project.
K3s is extraordinarily easy to get started with: download and run the binary to launch your Kubernetes cluster, without any external dependencies. It has minimal hardware requirements (1 CPU core, 512MB of RAM) and supports modern Linux systems using the x86_64, ARM, and S390X architectures.
Step 1: Install K3s
Running the official installation script is the quickest way to start K3s. This will download the binary and register a system service so K3s automatically starts when your host reboots:
$ curl -sfL https://get.k3s.io | sh -
Wait a few seconds for K3s to start, then use the k3s kubectl command to interact with your cluster via the bundled version of Kubectl:
$ sudo k3s kubectl get nodes
NAME       STATUS   ROLES                  AGE   VERSION
ubuntu22   Ready    control-plane,master   5s    v1.27.3+k3s1
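If you prefer to use your own Kubectl installation, K3s writes its Kubeconfig to /etc/rancher/k3s/k3s.yaml. One way to use it without running everything as root is to copy it for your user:

```shell
# Use an existing Kubectl installation with your K3s cluster
sudo cp /etc/rancher/k3s/k3s.yaml ~/k3s.kubeconfig
sudo chown $USER ~/k3s.kubeconfig
KUBECONFIG=~/k3s.kubeconfig kubectl get nodes
```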
Step 2: Add More Nodes
K3s is production-ready and capable of supporting multi-Node clusters.
To add another Node, you can simply repeat the installation script on your new host. You’ll need to set the K3S_URL and K3S_TOKEN environment variables to connect the Node to your existing cluster:
$ curl -sfL https://get.k3s.io | K3S_URL=https://your-first-host:6443 K3S_TOKEN=your-token sh -
You can find your cluster’s token by reading the file at /var/lib/rancher/k3s/server/node-token on your main Node.
When to Use K3s?
K3s is a multi-purpose distribution that can be used in any environment.
Its tiny size and self-contained binary make it ideal for development use, but it’s also great for resource-constrained production infrastructure. As a result, K3s is particularly well suited to IoT and edge computing workloads that couldn’t support larger Kubernetes distributions.
Kubeadm is a tool developed as part of the upstream Kubernetes project. It’s used to provision clusters running pure Kubernetes.
Creating a cluster with Kubeadm is a more involved procedure than with Minikube, MicroK8s, or K3s. You need to manually install a container runtime on each of your Nodes, set up the cluster control plane, install a networking plugin, and connect your worker Nodes to the control plane. Only once these tasks are complete can you install any additional software you require.
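As a rough sketch, the happy path looks like the following. This assumes a container runtime such as containerd is already installed on every Node; Flannel and the pod CIDR below are example choices, and the join token and hash placeholders are printed for you by kubeadm init:

```shell
# On the control plane Node: initialize the cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

# Copy the admin Kubeconfig so Kubectl works for your user
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install a networking plugin (Flannel shown as an example)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# On each worker Node: join using the command printed by "kubeadm init"
sudo kubeadm join <control-plane-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
```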
When to Use Kubeadm?
Kubeadm is the best solution when you want to use upstream Kubernetes as-is and are comfortable with more hands-on system administration. It allows you to customize additional components in your environment, such as the underlying container runtime.
When you don’t need the conveniently bundled addons of third-party distributions, and portability isn’t a factor, using Kubeadm will expose you to more of how Kubernetes works, which can be a valuable learning experience.
However, Kubeadm is intentionally limited in scope and designed to be used as a foundation for higher-level tools like Minikube and MicroK8s, so opting for these will always offer a simpler management experience.
Once you’ve picked one of these distributions and installed your cluster, you can interact with your environment using any of the tools available in the Kubernetes ecosystem. Here are three popular choices.
1. Kubectl CLI
Kubectl is the standard CLI for interacting with the Kubernetes API. You can use it with any of the cluster installation methods mentioned in this article. It’s configured using Kubeconfig files which define how to connect to your cluster.
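A single Kubeconfig file can hold connection details for several clusters at once; Kubectl contexts let you switch between them:

```shell
# List the contexts defined in your Kubeconfig
kubectl config get-contexts

# Switch to a different cluster (the context name here is an example)
kubectl config use-context minikube
```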
Kubectl is available for standalone download and comes bundled with Minikube, MicroK8s, and K3s, as shown above. If you need some tips on using Kubectl, then check out our handy Kubectl cheat sheet.
2. Kubernetes Dashboard
The Kubernetes Dashboard is an official web interface developed as part of the Kubernetes project. It provides a visual overview of the resources in your cluster, allowing you to easily manage deployments and troubleshoot problems.
The official Helm chart is the easiest way to install the dashboard. First, make sure you have Helm installed on your system, then run the following commands. The output will show how to access the dashboard in your browser.
$ helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
$ helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard
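The exact access instructions vary by chart version, so follow the output of helm install. A common approach is to port-forward the Dashboard’s Service and generate a login token; the Service and ServiceAccount names below are examples that may differ in your release:

```shell
# Forward a local port to the Dashboard Service (name may vary by chart version)
kubectl port-forward svc/kubernetes-dashboard 8443:443

# Generate a login token for a ServiceAccount (requires Kubernetes v1.24+)
kubectl create token default
```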
If you’re using Minikube or MicroK8s, you should skip these manual steps and enable the bundled Dashboard addon instead. This makes it even easier to get started.
Lens is an alternative Kubernetes dashboard solution. It’s a powerful desktop app that’s designed as a complete management and visualization platform.
Lens lets you connect to multiple clusters, then inspect the workloads running within them. It uses your existing Kubectl config files so there’s no complicated manual setup procedure.
To get started with Lens, you should download the latest package from the website and then install it on your system.
We’ve looked at four tools you can use to deploy your own Kubernetes cluster: Minikube, MicroK8s, K3s, and Kubeadm. The first three options are the most popular choices for local use as they’re lightweight distributions with a simple setup experience. Choosing Kubeadm gives you upstream Kubernetes without any modifications but is significantly more complex to install and maintain.
Looking for an easy way to automate Kubernetes deployments to your cloud infrastructure? Check out Spacelift, an IaC management platform that lets you consistently provision Kubernetes clusters using CI/CD pipelines driven by the pull requests in your source repositories.
Spacelift helps you manage the complexities and compliance challenges of using Kubernetes. It brings with it a GitOps flow, so your Kubernetes Deployments are synced with your Kubernetes Stacks, and pull requests show you a preview of what they’re planning to change. If you aren’t already using Spacelift, sign up for a free trial.
Manage Kubernetes Easier and Faster
Spacelift allows you to automate, audit, secure, and continuously deliver your infrastructure. It helps overcome common state management issues and adds several must-have features for infrastructure management.