In this post, we will cover the basics of CI/CD pipelines and the benefits of using them. We will then look at each stage of the pipeline and what makes up a good pipeline, along with some examples relating to Terraform.
A CI/CD pipeline is used to automate software or infrastructure-as-code delivery, from source code to production. It can be thought of as a series of steps that need to be taken for code to be released.
CI stands for Continuous Integration, and CD stands for Continuous Delivery or Deployment. ‘Pipeline’ refers to the automation of the delivery workflow, consisting of build, test, delivery, and deployment stages. Although each of these steps can be executed manually, automating and orchestrating the stages maximizes the benefits of using CI/CD pipelines by minimizing human error and bringing consistency to each release.
CI/CD pipelines are themselves commonly configured in code, sometimes referred to as ‘pipelines-as-code’.
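As an illustration, a minimal pipeline-as-code definition might look like the following. This sketch uses GitHub Actions syntax as one example; the job names and `make` commands are placeholders, not part of any specific project:

```yaml
# Hypothetical GitHub Actions workflow, triggered on every push to main
name: ci
on:
  push:
    branches: [main]
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build
        run: make build   # replace with your project's build command
      - name: Test
        run: make test    # replace with your project's test command
```

Because the definition lives in the repository alongside the code, changes to the pipeline itself are versioned and reviewed like any other change.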
Typically, a build server (or build agent) is used to execute CI/CD runs. These take one of two forms: self-hosted virtual machines in the cloud, which can be fully configured but must also be maintained, or virtual machines provided as part of the platform you are using, which are typically less flexible when it comes to adding software and plugins.
Containers can also be used to provide consistent build environments, further removing the reliance on maintaining a build server. Each step of the CI/CD pipeline can run in its own fully customized container. This also lets pipelines take advantage of the benefits container orchestration affords, such as resilience and scaling where required.
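For example, in GitLab CI syntax each job can specify its own container image, so every step runs in an environment tailored to it. The image names and commands below are illustrative only:

```yaml
# Illustrative GitLab CI config: each job runs in its own container image
build:
  image: node:20                  # build step runs in a Node.js container
  script:
    - npm ci && npm run build

security-scan:
  image: aquasec/trivy:latest     # scan step runs in a dedicated scanner image
  script:
    - trivy fs .
```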
Continuous Integration (CI)
CI covers the build and test stages of the pipeline. Each change in code should trigger an automated build and test, allowing the developer of the code to get quick feedback.
Continuous Delivery / Deployment (CD)
The CD part of a CI/CD pipeline refers to Delivery and Deployment (CI/CDD, anyone?!). CD takes place after the code successfully passes the testing stage of the pipeline. Continuous delivery refers to the automatic release of the build artifact to a repository after the CI stage; continuous deployment refers to the automatic deployment of that delivered artifact to an environment.
Starting with writing the source code, and ending up in production, these phases make up the development workflow and form the lifecycle of the CI/CD pipelines. Pipeline runs are usually triggered automatically by a change in the code, but can also be run on a schedule, run manually by a user, or triggered after another pipeline has run.
The four parts of the CI/CD pipeline are:
- Build stage
- Test stage
- Deliver stage
- Deploy stage
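The four stages above can be sketched as a pipeline-as-code skeleton. This GitLab-CI-style fragment is only a shape, with placeholder `make` commands standing in for real build, test, packaging, and deployment steps:

```yaml
# Skeleton of the four pipeline stages; commands are placeholders
stages: [build, test, deliver, deploy]

build:
  stage: build
  script: [make build]

test:
  stage: test
  script: [make test]

deliver:
  stage: deliver
  script: [make package]   # publish the artifact to a repository

deploy:
  stage: deploy
  script: [make deploy]    # release the artifact to an environment
```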
The build stage covers the writing of the code itself. This is typically done by multiple people in a team and, for larger projects, multiple teams. Code is held in a version control system (VCS), and a Git-based workflow is commonly used to control code being added to the repository (also referred to as GitOps). When used in a pipeline, tools that regulate and standardize developer environments are particularly useful for eliminating differences between authors' setups. With cloud-native software, this will usually take the form of a Docker container.
As greater resiliency is required and more varied infrastructure is introduced, testing gives greater confidence that the code will perform as expected. Testing of your code can be automated; performed manually, it is normally a repetitive, complex, and sometimes tedious process. A mistake that a lot of teams make is to skip the test stage or underuse it, even though the benefits of properly testing code before delivery and deployment can be huge, contributing to a high-quality product.
Testing can be broken up into multiple types; using a combination of these types with a range of different tools gives the highest code coverage and results in a higher-quality product. Where lots of different tests are used, they can be parallelized to reduce the pipeline run time.
Smoke testing — Perform quick sanity checks on the code.
Integration Testing — Integration tests validate that different components of the system work together, and that newly introduced code does not break the existing code.
Unit Testing — Unit tests are intended to test a particular function, several functions together, or part of the code. Smaller parts of the infrastructure can be isolated, and tests can be run in parallel to shorten the feedback cycle.
Compliance Testing — Compliance testing is used to ensure the configuration follows the policies you’ve defined for the project.
End-to-end Testing (E2E) — E2E tests validate everything works together before deploying to production. It is the complete test of the whole process.
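In most CI systems, jobs in the same stage run in parallel by default, so the different test types can be fanned out to shorten the feedback cycle. A GitLab-CI-style sketch, with placeholder test commands:

```yaml
# Jobs in the same stage run in parallel; commands are placeholders
stages: [test]

unit-tests:
  stage: test
  script: [make test-unit]

integration-tests:
  stage: test
  script: [make test-integration]

compliance-tests:
  stage: test
  script: [make test-compliance]
```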
After the code has been tested, it is packaged up as an artifact and committed to a repository.
The deploy stage allows the orchestration of the artifact release. Usually, teams will deploy to multiple environments, including environments for internal use, such as development and staging, and production for end-user consumption. Using this model, teams can automatically deploy to a staging environment when a pipeline is triggered. Once the staging environment has been reviewed and approved, the code can be merged into the main branch and then automatically deployed to production.
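A rough sketch of this staging-then-production flow, in GitHub Actions syntax. The environment names, deploy script, and approval rules here are assumptions; GitHub's `environment:` key lets protection rules (such as required reviewers) gate the production job:

```yaml
# Hypothetical deploy jobs: staging deploys automatically; production
# uses a protected environment that can require manual approval
jobs:
  deploy-staging:
    runs-on: ubuntu-latest
    environment: staging
    steps:
      - run: ./deploy.sh staging       # placeholder deploy script

  deploy-production:
    needs: deploy-staging
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    environment: production            # approval rules configured on the environment
    steps:
      - run: ./deploy.sh production
```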
Below you can find a diagram of the CI/CD pipeline.
Adopting the use of CI/CD pipelines brings many benefits, including:
- Reduced costs — Reducing the time it takes for the app or infrastructure to be coded and deployed means fewer engineering hours are taken up.
- Reduced time to deployment — Time to deploy is reduced through automation. The entire process, from coding to deployment, is streamlined, reducing the total process length and making it more efficient. Deployments can be made more often, and the rate of release can give companies a significant competitive advantage. Deployments can also be rolled back easily should there be a problem, letting developers stay focused on the code.
- Enabling continuous feedback — Enables teams to improve their code and workflow based on the feedback received. The testing stage highlights problems or faults and immediately gives feedback to allow code to be improved. Detecting errors earlier in the coding process means that these are easier and quicker to fix. Notifications can be set up to alert team members to failures or successes at each stage.
- Improving team collaboration — The team has visibility into problems, can view feedback, and make changes where appropriate.
- Audit trails — Each stage of the pipeline generates logs and can enable accountability and traceability.
A good pipeline should be reliable, accurate, and fast.
A pipeline should run reliably each time, without unexpected or intermittent errors. Broken pipelines cause frustration and wasted time. When a pipeline is broken or throwing errors, the owner should fix it as soon as possible before continuing work on the product; this saves other users from hitting the same issues and helps the overall team. It is for this reason that many organizations have a dedicated 'DevOps' team that owns the pipelines and monitors their success.
A reliable pipeline will produce the same result, given the same input.
Self-hosted build agents can often be the cause of unreliable pipeline runs, due to the maintenance they require (underlying infrastructure, patching, package management, security, etc.). However, they are sometimes necessary, as they offer greater flexibility than platform-provided build agents.
Ironing out errors in repetitive tasks is where CI/CD pipelines can really shine to improve the overall quality of a product.
Optimizing a pipeline to run as quickly as possible ensures that the developer gets feedback on the success or failure of the run quickly, making them more efficient and reducing the chance of distractions and 'context switching' (jumping to another task, a meeting, or browsing the web).
Fast pipeline runs also enable more frequent deployments. For example, if a pipeline takes 1 hour to run and the working day is 8 hours long, then an absolute maximum of 8 deployments can be made per day. Reduce the pipeline run time to 30 minutes, and 16 deployments can be made.
A common problem with pipeline runs is that they end up in a queued state, waiting for a build agent to process a previous run before the next can start. For this reason, multiple agents should be provisioned so different pipelines can be run in parallel. A serverless model or container orchestration solution is particularly useful here in order to scale the build agent capacity dynamically when demand is high.
Good pipelines are also quick to create. They should be written in code and kept in a VCS alongside the product code where appropriate. Base templates can be written by providing a common structure, which other templates can reference, speeding up the time required to create a new pipeline.
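As one example of templating, GitHub Actions supports reusable workflows that other pipelines can call, so a base template provides the common structure. The organization, repository, and input names below are hypothetical:

```yaml
# Caller workflow: reuses a shared base template via workflow_call
jobs:
  ci:
    # hypothetical shared-templates repository and workflow file
    uses: my-org/pipeline-templates/.github/workflows/base-ci.yml@main
    with:
      run-tests: true
```

The shared workflow itself declares an `on: workflow_call` trigger and its accepted inputs, so teams only fill in project-specific values.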
See our list of the most useful CI/CD tools that define the state of today’s software automation.
And take a look at how Spacelift provides a platform to manage IaC in an automated, easy, controlled, and secure manner, going above and beyond the support offered by a plain backend system. Spacelift is a sophisticated CI/CD platform for IaC that enables GitOps on infrastructure provisioning workflows. Check it out for free by creating a trial account.
Write and store the Terraform code in a VCS.
Testing tools used with Terraform CLI-driven pipeline runs include tfsec and Checkov. Many infrastructure-as-code-specific CI/CD platforms enable policy-as-code, such as Terraform Cloud or, better still, Spacelift, which uses the open-source Open Policy Agent (OPA), meaning the same policy language can be used across multiple projects using different tooling.
Prepare the Terraform execution environment (this could include retrieving secrets from a key vault or setting environment variables).
Run terraform plan (copying the output file to a repository, ready to be used by the apply command in the deploy stage).
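Put together, the CI part of a Terraform pipeline might look roughly like this GitHub-Actions-style fragment. The secret name, scanner choice, and artifact handling are assumptions, not a prescribed setup:

```yaml
# Illustrative Terraform CI job; secret and backend details are assumptions
jobs:
  terraform-ci:
    runs-on: ubuntu-latest
    env:
      ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}  # example secret from the platform's store
    steps:
      - uses: actions/checkout@v4
      - uses: hashicorp/setup-terraform@v3
      - run: terraform init
      - run: tfsec .                        # static security scan of the configuration
      - run: terraform plan -out=tfplan     # save the plan for the deploy stage
      - uses: actions/upload-artifact@v4    # hand the plan file to the apply step
        with:
          name: tfplan
          path: tfplan
```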
See also best practices and techniques for managing CI/CD pipelines with Kubernetes.
Developers write Kubernetes configuration files and push them to a Git repository in a VCS. YAML files could be added to define other components of the pipeline, such as ConfigMaps for configuration data or Secrets for sensitive information.
The code is built, and tests are run on it. Kube-bench is a security tool that checks Kubernetes configurations against best practices and industry standards, including CIS benchmarks. Helm unit testing is a testing tool for Helm charts.
Once the tests pass, the CI server creates a Docker container image of the application and pushes it to a container registry such as Docker Hub or Amazon ECR.
Once the container image is available in the registry, Kubernetes can be used to deploy the application. The Docker image can be tagged as appropriate.
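A minimal sketch of this build-push-deploy sequence in GitHub Actions syntax. The registry, image, and deployment names are hypothetical, and registry login plus kubeconfig setup are omitted for brevity:

```yaml
# Illustrative build-push-deploy job; names are placeholders, and
# registry authentication / cluster credentials are assumed to be configured
jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: docker build -t my-registry/my-app:${{ github.sha }} .
      - run: docker push my-registry/my-app:${{ github.sha }}
      - run: kubectl set image deployment/my-app app=my-registry/my-app:${{ github.sha }}
```

Tagging the image with the commit SHA, as here, ties each deployment back to the exact code revision that produced it.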
For more details, check out – Kubernetes CI/CD Pipelines – 7 Best Practices and Tools.
Developers write .NET code and push it to a Git repository in a VCS. Each push triggers CI to build the code using a build tool such as MSBuild or the dotnet CLI.
The code is built, and tests are run on it. Unit testing can be done using a framework such as NUnit or xUnit.
Once the tests pass, the CI server generates an artifact, such as a NuGet package or a self-contained executable that can be deployed.
The artifact is deployed.
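These steps might be sketched as the following GitHub-Actions-style fragment; the SDK version and `dotnet pack` output path are assumptions for illustration:

```yaml
# Illustrative .NET CI job producing a NuGet package artifact
jobs:
  dotnet-ci:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'          # assumed SDK version
      - run: dotnet build --configuration Release
      - run: dotnet test --no-build --configuration Release
      - run: dotnet pack --no-build --configuration Release -o out   # NuGet package artifact
```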
CI/CD pipelines are an important concept for the modern engineer to understand fully, especially in the current cloud-native world. CI/CD pipelines enhance the software delivery process by automating key stages such as building, testing, delivery, and deployment. Adopting a pipeline-based workflow helps you ship more quickly by passing all code through a consistent set of steps.
Flexible CI/CD Automation Tool
Spacelift is an alternative to using homegrown solutions on top of a generic CI solution. It allows you to automate, audit, secure, and continuously deliver your infrastructure.