Kubernetes orchestrates containerized app deployments in production environments, but you’re responsible for setting up a robust pipeline that delivers code from your repositories to your cluster.
GitHub Actions is a popular CI/CD service that comes integrated with GitHub projects. It’s an easy way to start releasing to Kubernetes by combining actions that build your container image and then deploy to Kubernetes.
This article provides a guide to developing a simple GitHub Actions workflow that deploys an app to your cluster using a custom Helm chart.
GitHub Actions is a CI/CD (Continuous Integration and Continuous Deployment/Delivery) service provided by GitHub that allows you to automate your software development workflows directly within your GitHub repository. With GitHub Actions, you can create workflows that build, test, and deploy your code automatically when specific events occur, such as pushing changes to a repository or creating a pull request.
GitHub Actions is one of the easiest ways to deploy your apps to Kubernetes while retaining the flexibility to precisely customize your deployments. It lets you write Kubernetes manifests and Helm charts, commit them to your project, and then automatically start a workflow that applies the changes to your cluster.
Because GitHub Actions is included with GitHub projects, no additional tools are required. Furthermore, the Actions marketplace includes prebuilt components for common tasks such as Docker image builds and Kubernetes toolchain setup, simplifying your configuration experience. You just need to create an Actions workflow that uses these components to deploy the changes you make to your repository.
Here are the key benefits of using GitHub Actions for Kubernetes:
- Centralized management: GitHub Actions provides a unified platform to manage workflows, reducing complexity and enhancing visibility for Kubernetes deployments.
- Automation and CI/CD integration: It integrates seamlessly with CI/CD pipelines, automating build, test, and deployment processes for faster and more reliable releases.
- Version control: It leverages Git’s version history to track and manage changes to deployment workflows, ensuring consistency and auditability.
- Scalability and parallelization: It supports running multiple jobs concurrently, enabling efficient scaling and faster execution of deployment tasks.
- Security and secrets management: It provides secure storage and access to secrets, protecting sensitive data in deployment workflows.
Let’s get started using GitHub Actions with Kubernetes.
To follow this guide, you’ll need an empty GitHub repository and access to a Kubeconfig file that contains the connection details for an existing Kubernetes cluster.
Need a new cluster? Try our guide to running Kubernetes on Amazon EKS.
Here are the six steps we’ll follow to set up GitHub Actions with Kubernetes:
- Write an app
- Create a Dockerfile
- Create a Helm chart
- Prepare a GitHub project
- Create a GitHub Actions workflow
- Test the app deployment
If you’d prefer to jump to the end result, you can clone our sample repository to test the workflow straightaway.
Step 1. Write your app
For this tutorial, we’re keeping things simple. Our app is a Node.js script that uses the Express web server to respond with a “hello world” message to each request it receives.
Copy the following source code and save it to main.js in your repository:
const express = require("express");
const app = express();
app.get("*", (req, res) => res.send("<h1>Hello World!</h1>"));
app.listen(80, () => console.log("App is listening"));
Add a simple package.json file that declares the Express dependency, enabling it to be installed from the npm registry during your app’s Docker build:
{
  "dependencies": {
    "express": "^4.21.0"
  }
}
It’s a good idea to also add a .gitignore file that lists the /node_modules directory. This will ensure you don’t accidentally commit dependency files to your repository.
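A minimal .gitignore for this project only needs to exclude the dependency directory:

# Dependencies are restored by npm install during the Docker build
/node_modules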
Step 2. Create your Dockerfile
Next, create the Dockerfile that will build your app’s container image:
FROM node:20
WORKDIR /app
COPY *.json .
RUN npm install
COPY *.js .
CMD ["node", "main.js"]
This Dockerfile uses the node:20 base image, copies in your package.json file and runs npm install to fetch the app’s dependencies, adds your source code, and configures the container to start your main.js file upon container creation.
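If you want to check the image before wiring up any automation, you can build and run it locally. This is an optional sanity check, assuming Docker is installed; the demo-app tag and the 8080 host port are arbitrary choices for the test:

$ docker build -t demo-app .
$ docker run --rm -p 8080:80 demo-app

Visiting localhost:8080 in a browser or with curl should return the Hello World message.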
Step 3. Create your Helm chart
Now you can create your app’s Helm chart, ready to deploy to Kubernetes. For this simple web server project, a chart that provisions a Kubernetes Deployment and Service will suffice.
Add the following YAML files to a new helm directory in your repository.
Chart.yaml
apiVersion: v2
name: demo-app
version: 1.1.0
This file contains your Helm chart’s metadata.
values.yaml
containerPort: 80
dockerConfigJson:
  secretName: dockerconfigjson
This file sets default values for chart config variables.
templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Namespace }}
  namespace: {{ .Release.Namespace }}
spec:
  replicas: 3
  selector:
    matchLabels:
      app: {{ .Release.Namespace }}
  template:
    metadata:
      labels:
        app: {{ .Release.Namespace }}
    spec:
      containers:
        - name: app
          image: {{ .Values.image | required "image is required" }}
          ports:
            - containerPort: {{ .Values.containerPort }}
      imagePullSecrets:
        - name: {{ .Values.dockerConfigJson.secretName }}
The Deployment object provides declarative state management for the Kubernetes Pods that will run your containerized app. This manifest specifies that three Pod replicas will run the image specified via a variable when the chart is installed.
The container port that the app listens on is also exposed — so it can be accessed by the Service that will be defined next — and an image pull secret reference is required so Kubernetes can fetch your container image from your private GitHub Container Registry instance. This will be explained in the following step.
templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Namespace }}
  namespace: {{ .Release.Namespace }}
spec:
  type: ClusterIP
  selector:
    app: {{ .Release.Namespace }}
  ports:
    - port: {{ .Values.containerPort }}
The Service routes network traffic to your application’s Pods, targeting the port configured earlier. This simple ClusterIP Service is only accessible within the cluster, but you can access it for testing by using Kubectl port-forwarding. In production, you’ll need to use a LoadBalancer Service or set up Ingress so your deployment can be accessed externally.
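If you do need external access later, an Ingress template along the following lines could be added to the chart (as templates/ingress.yaml, for example). Treat it as a sketch: it assumes an NGINX ingress controller is installed in the cluster, and demo.example.com is a placeholder hostname you’d replace with your own:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Release.Namespace }}
  namespace: {{ .Release.Namespace }}
spec:
  ingressClassName: nginx          # assumes the NGINX ingress controller
  rules:
    - host: demo.example.com       # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Release.Namespace }}
                port:
                  number: {{ .Values.containerPort }}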
templates/dockerconfigjson.yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: {{ .Values.dockerConfigJson.secretName }}
  namespace: {{ .Release.Namespace }}
data:
  .dockerconfigjson: {{ .Values.dockerConfigJson.data | b64enc }}
This file defines the Kubernetes Secret that stores the content of the Docker config.json file used to authenticate to your image registry. Your GitHub Actions workflow will populate the data when deploying your Helm chart.
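The dockerConfigJson.data value is expected to be a plain JSON document in Docker’s config.json format; the b64enc function encodes it before it’s stored in the Secret. Once all four files are in place, you can optionally sanity-check the chart by rendering it locally. The values file below is a throwaway used only for this check, with placeholder credentials:

# test-values.yaml (temporary, for local rendering only)
image: ghcr.io/your-user/your-repo:test
dockerConfigJson:
  secretName: dockerconfigjson
  data: '{"auths":{"ghcr.io":{"username":"your-user","password":"your-token"}}}'

$ helm template demo-app ./helm -f test-values.yaml

If the templates render without errors, the chart is ready for the workflow to install.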
Step 4. Prepare your GitHub project
Before creating your GitHub Actions workflow, a project settings change must be applied, and a few secret values must be created.
Enable read-write workflows access
First, click the Settings tab at the top of the screen, then navigate to the Actions > General section in the left sidebar.
Scroll down to the Workflow permissions section at the bottom of the screen and change the radio button selection to the Read and write permissions option.
This change will allow your workflow to push new Docker images into your project’s GitHub Container Registry instance, without requiring you to set up a separate access token.
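If you’d rather not change the repository-wide default, an alternative is to request the extra scope only in the workflow file you’ll create in Step 5. A permissions block at the top level of the workflow should grant the temporary GITHUB_TOKEN what the build job needs:

permissions:
  contents: read
  packages: write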
Create a registry read token for Kubernetes
Next, create a new GitHub Personal Access Token by heading to the Settings > Developer settings > Personal access tokens > Tokens (classic) page from your profile menu. Give your token a name and assign it the read:packages scope. This token will be provided to your Kubernetes cluster so it can pull your image from the GitHub Container Registry.
Scroll down the page, click Generate token, and copy the token value displayed. Next, return to your project’s settings page, navigate to Security > Secrets and variables, and click New repository secret.
Name your secret REGISTRY_TOKEN and paste in the Personal Access Token that you generated above.
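Optionally, you can confirm the token works before handing it to the cluster by authenticating against the registry locally. Here, ghp_xxxxxxxx and your-user are placeholders for your token and GitHub username:

$ echo "ghp_xxxxxxxx" | docker login ghcr.io -u your-user --password-stdin
Login Succeeded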
Add your Kubeconfig file
Complete the setup process by adding another GitHub Actions repository secret for your Kubeconfig file. Name the secret KUBECONFIG and paste in your Kubeconfig contents. Your workflow will use these credentials to connect to your Kubernetes cluster and complete your deployment.
Note: Avoid configuring your Kubeconfig file with credentials that belong to a cluster-admin user, as this would allow a compromised workflow to perform any action on your cluster. It’s best practice to instead use a ServiceAccount that’s assigned only the RBAC permissions relevant to your workflow.
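To illustrate that note, the manifests below sketch a namespace-scoped deployer identity you could generate a Kubeconfig for. They’re not part of the tutorial’s chart, and the names (github-deployer, my-app) and permission list are assumptions to adapt to your own workflow:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: github-deployer        # hypothetical ServiceAccount name
  namespace: my-app            # the namespace the workflow deploys into
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: github-deployer
  namespace: my-app
rules:
  # Only the object types the Helm chart and Helm's release storage need
  - apiGroups: ["", "apps"]
    resources: ["deployments", "services", "secrets"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: github-deployer
  namespace: my-app
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: github-deployer
subjects:
  - kind: ServiceAccount
    name: github-deployer
    namespace: my-app

Because the workflow below uses --create-namespace, the namespace would either need to exist already or the identity would need additional cluster-scoped rights to create it.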
Step 5. Create your GitHub Actions workflow
It’s time for the real work: Let’s create a GitHub Actions workflow to build your container image and then use Helm to deploy it to Kubernetes.
GitHub Actions workflows are defined using config files placed within your project’s .github/workflows directory. Create a new file inside this directory called kubernetes.yml, then add the following content:
on:
  push:

jobs:
  build:
    runs-on: ubuntu-22.04
    if: github.ref_name == github.event.repository.default_branch
    steps:
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push the Docker image
        uses: docker/build-push-action@v3
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:${{ github.sha }}
  deploy:
    runs-on: ubuntu-22.04
    needs: build
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Install Helm
        uses: azure/setup-helm@v4
      - name: Configure Kubeconfig
        uses: azure/k8s-set-context@v4
        with:
          method: kubeconfig
          kubeconfig: ${{ secrets.KUBECONFIG }}
      - name: Deploy the Helm chart
        run: |
          helm upgrade \
            ${{ github.event.repository.name }} \
            helm \
            --install \
            --create-namespace \
            --namespace ${{ github.event.repository.name }} \
            --set image=ghcr.io/${{ github.repository }}:${{ github.sha }} \
            --set dockerConfigJson.data="\{\"auths\":\{\"ghcr.io\":\{\"username\":\"${{ github.actor }}\"\,\"password\":\"${{ secrets.REGISTRY_TOKEN }}\"\}\}\}"
This workflow contains two jobs, each with several steps. Let’s break down what’s happening. We’ll focus on explaining how each job works, not the fundamentals of how workflow definitions are structured. You can learn the concepts behind GitHub Actions configuration in our overview tutorial.
- First, the build job uses the docker/login-action and docker/build-push-action marketplace actions to build your Docker image from your Dockerfile, then publish it to your repository’s GitHub Container Registry. The GITHUB_TOKEN secret is provided by the job context; it’s a temporary token that the job can use to access the registry.
- Next, the deploy job uses Helm to deploy your app into your Kubernetes cluster.
- The actions/checkout action checks out the current Git commit, allowing the Helm chart in your repository’s helm directory to be referenced. (This step wasn’t necessary in the build job because docker/build-push-action builds directly from the repository’s Git context by default, so no checkout is required.)
- The azure/setup-helm step makes the helm command available in the job context.
- Next, the azure/k8s-set-context action configures a Kubeconfig context in the job environment, using the credentials you set via your project’s KUBECONFIG secret earlier on.
- Finally, the helm upgrade command updates the app release in your cluster, or creates it if it doesn’t exist already. The namespace and release name are both set to your repository’s name. The image referenced by the chart is set to the tag that has just been built and pushed to your repository’s registry, and a Docker config.json object is constructed from the registry token secret created earlier. Helm uses this to populate the dockerconfigjson Secret that provides the credentials for authenticating to your registry.
You’re now ready to test your workflow! Commit all your files, then push them to your repository’s main branch.
The on.push trigger starts the workflow whenever you push new commits, and the if condition on the build job ensures the jobs only run for commits on your repository’s default branch. This means your app will be deployed when you update main (or master, depending on your default branch choice), but not when you’re working within a feature branch.
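If you’d prefer the workflow not to start at all on other branches, rather than running and skipping its jobs, you could replace the trigger with a branch filter. This variant assumes your default branch is named main:

on:
  push:
    branches:
      - main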
Navigate to GitHub’s Actions tab and you’ll see a new workflow run beginning. After a minute or two, it should show as successfully completed.
Click the run to view the breakdown of its jobs.
Click a specific job to view that job’s logs and debug any failures that occurred. If you do find your workflow fails, it will probably be due to incorrect configuration of the secrets created in Step 4 above. Reviewing that part of this guide should help you diagnose the problem.
Step 6. Test your app deployment
Once your workflow’s finished running, your app will be live in your Kubernetes cluster.
Use Kubectl to verify that the Deployment’s three Pods are ready:
$ kubectl get deployment -n spacelift-k8s-github-actions-demo
NAME READY UP-TO-DATE AVAILABLE AGE
spacelift-k8s-github-actions-demo 3/3 3 3 5m
Next, use Kubectl’s port-forwarding feature to open a connection to the Service:
$ kubectl port-forward svc/spacelift-k8s-github-actions-demo 8080:80 -n spacelift-k8s-github-actions-demo
Forwarding from 127.0.0.1:8080 -> 80
Forwarding from [::1]:8080 -> 80
This example makes the Service’s port 80 accessible at localhost:8080. Send a request to that address to prove your app has been successfully deployed:
$ curl localhost:8080
<h1>Hello World!</h1>
Note: You’ll need to replace spacelift-k8s-github-actions-demo in the examples above with the name of your own GitHub repository.
You’ve now used GitHub Actions to build a container image and deploy it to Kubernetes with a custom Helm chart.
We’ve assembled a functioning GitHub Actions workflow to deploy to Kubernetes, but you can do more to improve this pipeline. The following next steps will further optimize your delivery process by improving speed, efficiency, and security:
- Scan your container images for vulnerabilities before you deploy, for example by using the Trivy action (see the sketch after this list).
- Run the Kube-Linter action to spot misconfigurations in your Kubernetes manifest files.
- Configure canary or blue-green deployment strategies to safely verify new releases before they’re made available to all users.
- Use IaC tools to implement infrastructure management tasks, including automated cluster provisioning and configuration.
- Support Day 2 operations — by instrumenting your apps for observability and configuring cluster auto-scaling to maintain consistent performance, for example.
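As a concrete example of the first suggestion, a scan step along these lines could be added to the build job after the image is pushed. It assumes the aquasecurity/trivy-action marketplace action and is a starting point rather than a finished configuration:

- name: Scan the image with Trivy
  uses: aquasecurity/trivy-action@master
  with:
    image-ref: ghcr.io/${{ github.repository }}:${{ github.sha }}
    format: table
    severity: CRITICAL,HIGH
    exit-code: "1"    # fail the job if serious vulnerabilities are found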
When planning your Kubernetes deployment strategy, it’s worth exploring other options besides GitHub Actions. Using a pull-based GitOps tool like Argo CD or Flux CD can be easier to scale as you create more environments, deployments, and projects. These tools automate the delivery process by using in-cluster controllers to continually reconcile your Kubernetes state against that declared by the config files in your repository.
Compared to building a custom and production-grade infrastructure management pipeline with a CI/CD tool like GitHub Actions, adopting a collaborative infrastructure delivery tool like Spacelift feels a bit like cheating.
Many of the custom tools and features your team would need to build and integrate into a CI/CD pipeline already exist within Spacelift’s ecosystem, making the whole infrastructure delivery journey much easier and smoother. It provides a flexible and robust workflow and a native GitOps experience. It detects configuration drift and reconciles it automatically if desired. Spacelift runners are Docker containers that allow any type of customizability.
Security, guardrails, and policies are vital parts of Spacelift’s offering for governing infrastructure changes and ensuring compliance. Spacelift’s built-in functionality for developing custom modules allows teams to adopt testing early in each module’s development lifecycle. Trigger policies can handle dependencies between projects and deployments.
We’ve created a GitHub Actions workflow that deploys an app to a Kubernetes cluster. GitHub Actions simplifies the Kubernetes deployment workflow by letting you utilize prebuilt Marketplace components to build your Docker image, push it to a registry, and then install your Helm chart.
GitHub Actions is a compelling option for deploying your apps, but specialist CI/CD platforms can be a better fit for managing your infrastructure. Give Spacelift a try to automate infrastructure provisioning, configuration, and governance directly from your GitHub pull requests. It works with familiar IaC tools, including Terraform, OpenTofu, Ansible, and more.
To learn more about Spacelift, create a free account today or book a demo with one of our engineers.
Solve your infrastructure challenges
Spacelift is a flexible orchestration solution for IaC development. It delivers enhanced collaboration, automation, and controls to simplify and accelerate the provisioning of cloud-based infrastructures.