How to Implement GitLab CI/CD Pipeline with Terraform

Various platforms are available to implement CI/CD automation for Terraform IaC workflows. In this post, we will explore and implement a CI/CD pipeline for Terraform using GitLab, a tool that provides remote Git repositories along with integrated CI/CD automation capabilities.

We will refer to an example Terraform configuration that creates an EC2 instance on AWS. Below is a summary of the steps we will take to implement CI/CD on GitLab.

  1. Set up GitLab project repository
  2. Create the Terraform configuration files
  3. Configure GitLab pipeline using .gitlab-ci.yml
  4. Add AWS credentials in GitLab
  5. Set up the remote backend
  6. Configure the backend for local development
  7. Implement pipeline conditions and the destroy pipeline

1. Set up a GitLab project repository

As a prerequisite, we need a GitLab account. Create one here and log in.

From the homepage, click on the “New Project” button, as shown below.

Create a Gitlab Project Repository - prerequisite

On the following page, choose to create a blank project, which takes us to the next page (screenshot below). When you provide a name for the project, it is automatically translated into a project slug in the URL.

Create a Gitlab Project Repository create project

We can keep other settings as they are.

Note: Making the repository public would ease access for Git operations (clone, pull, push).

Click on "Create project."

Once the project is created, navigate to the repository and click "Clone." GitLab offers several options, as shown below. Clone this empty repository onto the local machine using any of the suggested methods.

Create a Gitlab Project Repository clone
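For example, cloning over SSH looks like this (the group and project names below are placeholders; use the URL shown in your own project):

git clone git@gitlab.com:<your-group>/<your-project>.git
cd <your-project>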

2. Create the Terraform configuration files

In this step, we will create the Terraform configuration in the repository we just cloned. As mentioned earlier, we will create an EC2 instance in AWS using Terraform and GitLab pipelines.

To begin, we will create the files below:

  1. main.tf
  2. variables.tf
  3. provider.tf
  4. output.tf

The configuration below displays the contents of the main.tf file.

resource "aws_instance" "my_vm" {
  ami           = var.ami //Ubuntu AMI
  instance_type = var.instance_type

  tags = {
    Name = var.name_tag,
  }
}

The resource block defined above creates (manages) an instance of type "t2.micro" using an Ubuntu AMI. It also assigns a Name tag to the instance with the value "My EC2 Instance".

The values are set as defaults in the variables.tf file below.

variable "ami" {
  type        = string
  description = "Ubuntu AMI ID in eu-central-1 Region"
  default     = "ami-065deacbcaac64cf2"
}

variable "instance_type" {
  type        = string
  description = "Instance type"
  default     = "t2.micro"
}

variable "name_tag" {
  type        = string
  description = "Name of the EC2 instance"
  default     = "My EC2 Instance"
}

Next, we create the provider configuration in a separate file named provider.tf, shown below.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.18.0"
    }
  }

  backend "http" {
  }
}

provider "aws" {
  region = "eu-central-1"
}

Note that we have declared the backend “http” block, but it is empty. We will discuss this later, but for now, we will keep it as it is.

Also, for the pipeline to run, the AWS provider block needs the region attribute assigned explicitly, since no region is otherwise configured on the runners. Optionally, we may create a file (output.tf) to define output values, as below.

output "public_ip" {
  value       = aws_instance.my_vm.public_ip
  description = "Public IP Address of EC2 instance"
}

output "instance_id" {
  value       = aws_instance.my_vm.id
  description = "Instance ID"
}

3. Configure GitLab pipeline using .gitlab-ci.yml

At this point, our Terraform configuration is ready, although we have not tested it. Before pushing this code to our GitLab repository, we should create the pipeline YAML file in the same repository. If you are new to GitLab CI/CD and pipeline configuration, refer to the documentation for all the syntax and conceptual references.

Create a file named ".gitlab-ci.yml" in the project root directory with the following contents. It defines a basic Terraform pipeline on the GitLab CI/CD platform and is based on a template GitLab provides. If you want to customize this pipeline or create it from scratch, please refer to the documentation link above.

For now, this is enough for our use case.

include:
  - template: Terraform/Base.gitlab-ci.yml
  - template: Jobs/SAST-IaC.gitlab-ci.yml

stages:
  - validate
  - test
  - build
  - deploy
  - cleanup

fmt:
  extends: .terraform:fmt
  needs: []

validate:
  extends: .terraform:validate
  needs: []

build:
  extends: .terraform:build

deploy:
  extends: .terraform:deploy
  dependencies:
    - build
  environment:
    name: $TF_STATE_NAME

We have included a couple of templates at the beginning. In GitLab, it is possible to reuse YAML templates stored locally, remotely, or in a different project, which improves readability and promotes code reuse. In the file above, we include two of them: the Terraform Base template and the SAST IaC template.
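For illustration, an include can also point to a local file, a remote URL, or a file in another project. The paths and project names below are placeholders:

include:
  - local: '/pipelines/terraform-common.yml'
  - remote: 'https://example.com/templates/terraform.yml'
  - project: 'my-group/pipeline-templates'
    file: '/templates/terraform.yml'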

Next, we declared the stage names in sequence and defined the jobs that run in them. The jobs map closely to the Terraform operations we would perform in a manual workflow. To summarize:

  1. fmt – checks the formatting of the Terraform config.
  2. validate – validates the code.
  3. build – creates the execution plan (terraform plan) on the runner.
  4. deploy – executes the terraform apply command.
  5. cleanup – destroys the resources. We will get back to this later in this post.

Each job uses the "extends" keyword with a .terraform:* value, which references hidden job definitions from the templates included at the top.
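For a rough idea of what such a hidden job looks like, below is a simplified sketch (not the exact template contents) of the validate job, assuming GitLab's terraform-images helper image and its gitlab-terraform wrapper script:

# Simplified, illustrative sketch of a hidden job from the Base template
.terraform:validate:
  stage: validate
  image: registry.gitlab.com/gitlab-org/terraform-images/stable:latest
  script:
    - gitlab-terraform validate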

With these five files created, commit and push the code to the GitLab project repository.
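Assuming the default branch is named main, the commit and push look like this:

git add main.tf variables.tf provider.tf output.tf .gitlab-ci.yml
git commit -m "Add Terraform configuration and CI/CD pipeline"
git push origin main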

4. Add AWS credentials in GitLab

When the code is pushed to the GitLab project, a pipeline is automatically created and triggered based on the .gitlab-ci.yml file. However, the first run will fail.

Navigate to: “Project > CI/CD > Pipelines”, and click on the run. Since it is the first run, there should be just one failed entry, as seen below.

Setup AWS Credentials in GitLab failed entry

A broader view:

Setup AWS Credentials in GitLab failed entry broader view

Click on the failed job to see the logs and observe the error message. If you have followed all the steps so far, the error message below is expected; if you see a different error, something else is wrong in your setup.

Setup AWS Credentials in GitLab error message

It is clear that Terraform was initialized successfully. However, no valid credentials are configured for the Terraform AWS provider, which makes sense since we have not configured AWS credentials yet.

To address this issue, navigate to "Settings > CI/CD > Variables" and click Expand. Add the AWS Access Key ID and Secret Access Key here. Since these are project-specific CI/CD settings, this information is made available to the runners via environment variables.

Gitlab ci cd variables
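For reference, the two variables to add correspond to the environment variables the AWS provider reads. The values below are placeholders; marking them as Masked keeps them out of job logs:

AWS_ACCESS_KEY_ID     = <your access key ID>
AWS_SECRET_ACCESS_KEY = <your secret access key>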

Re-run the pipeline now, and make sure it succeeds.

Gitlab ci cd pipeline success

If the pipeline run is successful, log in to the AWS console and confirm that the EC2 instance has been created.

ec2 instance

5. Set up the remote backend

As discussed earlier, we left the backend "http" block empty. Even so, when we pushed the code to our GitLab repository for the first time, the pipeline was triggered automatically and Terraform was initialized successfully.

GitLab automatically configures the remote “http” backend. The Terraform config is version-controlled in GitLab repositories, the pipelines are run on GitLab runners, and the backend is also managed by GitLab.

To access the remote backend, navigate to “Infrastructure > Terraform.” Here we find the “default” state being managed, as shown in the screenshot below. The state JSON file can also be downloaded and locked manually from here.

gitlab ci cd infrastructure terraform

6. Configure the backend for local development

We have successfully set up:

  1. The configuration that creates an EC2 instance.
  2. CI/CD pipeline that automates the provisioning.
  3. Remote state backend.

Updating the configuration and committing changes through the web browser works well, since everything is managed by GitLab.

However, we also have the Terraform configuration files on the local system. Can we develop and test locally? Not yet, for a couple of reasons:

  1. The Terraform project is not initialized locally. 
  2. Initialization requires us to connect to the remote backend.

For the local copy, the empty backend "http" block will not work. To confirm this, try running the terraform init command in the project's root directory; it will complain about the backend configuration and authentication.

To make it work, we need to provide the following attribute values in the backend "http" block in the provider.tf file (see the GitLab documentation for more details):

  1. address: to access the state information
  2. lock_address: to lock the state file
  3. unlock_address: to unlock the state file

To get this information for our GitLab project, navigate to the Terraform state (same screen as above) and click on "Copy Terraform init command." It should display the command as shown below.

Terraform init
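For reference, the copied command generally looks like the sketch below; the project ID, username, and state name ("default" here) will be specific to your project:

terraform init \
  -backend-config="address=https://gitlab.com/api/v4/projects/<ProjectID>/terraform/state/default" \
  -backend-config="lock_address=https://gitlab.com/api/v4/projects/<ProjectID>/terraform/state/default/lock" \
  -backend-config="unlock_address=https://gitlab.com/api/v4/projects/<ProjectID>/terraform/state/default/lock" \
  -backend-config="username=<Your Username>" \
  -backend-config="password=$GITLAB_ACCESS_TOKEN" \
  -backend-config="lock_method=POST" \
  -backend-config="unlock_method=DELETE" \
  -backend-config="retry_wait_min=5"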

Update the address, lock_address, and unlock_address attributes in the backend “http” block in provider.tf from the information provided above.

The updated provider config should look like this:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.18.0"
    }
  }

  backend "http" {
    address        = "https://gitlab.com/api/v4/projects/<ProjectID>/terraform/state/default"
    lock_address   = "https://gitlab.com/api/v4/projects/<ProjectID>/terraform/state/default/lock"
    unlock_address = "https://gitlab.com/api/v4/projects/<ProjectID>/terraform/state/default/lock"
  }
}

provider "aws" {
  region = "eu-central-1"
}

To initialize the Terraform project locally, run the remainder of the init command in the project's root directory. Use the -reconfigure flag: simply running terraform init would fail with an authentication error, and we also do not want to migrate the state from GitLab to the local machine.

More details about migrating the state with GitLab are found here. The init command to run looks like this:

terraform init -reconfigure \
  -backend-config=username=<Your Username> \
  -backend-config=password=$GITLAB_ACCESS_TOKEN \
  -backend-config=lock_method=POST \
  -backend-config=unlock_method=DELETE \
  -backend-config=retry_wait_min=5
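Here, GITLAB_ACCESS_TOKEN is assumed to be a GitLab personal access token with the api scope, exported as an environment variable beforehand:

# GITLAB_ACCESS_TOKEN is assumed to hold a personal access token with the api scope
export GITLAB_ACCESS_TOKEN=<your personal access token>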

Once the project is initialized locally, you can test it by running the terraform plan, apply, and destroy commands. A good test would be to destroy, from the local machine, the EC2 instance created by the GitLab pipeline.
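A minimal local smoke test could look like this (note that the destroy step removes the instance the pipeline created, so run it only if that is the intent):

terraform plan
terraform destroy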

7. Implement pipeline conditions and the destroy pipeline

GitLab supports only a single pipeline definition per project/repository. Given the dependency on the state file, it becomes tricky to manage provisioning and de-provisioning activities in the same pipeline. However, GitLab's pipeline syntax and template libraries enable us to create complex and flexible pipelines capable of covering multiple scenarios.

The current pipeline can provision new infrastructure and apply changes to it. However, destroying that same infrastructure has to be managed from elsewhere (the local machine).

To tackle this, we can rely on the commit message provided when committing changes to the repository, since that is the last piece of information under the user's control before the pipeline takes over. The idea is to look for a keyword, e.g. "destroy", and based on it, selectively run the apply or destroy stages.

To implement these conditional runs, we use the rules construct from GitLab's YAML syntax. Below is the updated .gitlab-ci.yml file.

include:
  - template: Terraform/Base.gitlab-ci.yml
  - template: Jobs/SAST-IaC.gitlab-ci.yml

stages:
  - validate
  - test
  - build
  - deploy
  - cleanup

fmt:
  extends: .terraform:fmt
  needs: []

validate:
  extends: .terraform:validate
  needs: []

build:
  extends: .terraform:build

deploy:
  extends: .terraform:deploy
  rules:
    - if: $CI_COMMIT_TITLE != "destroy"
      when: on_success
  dependencies:
    - build
  environment:
    name: $TF_STATE_NAME

cleanup:
  extends: .terraform:destroy
  environment:
    name: $TF_STATE_NAME
  rules:
    - if: $CI_COMMIT_TITLE == "destroy"
      when: on_success

The rules in the deploy job specify an "if" condition that checks whether the commit title (the first line of the commit message) is not "destroy." Thus, if the commit title is anything other than "destroy," the deploy job is executed and the cleanup (destroy) job is skipped; if it equals "destroy," the opposite happens.
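Note that $CI_COMMIT_TITLE is compared for exact equality here. If you would rather trigger cleanup whenever the word "destroy" appears anywhere in the commit message, a regex match on $CI_COMMIT_MESSAGE is one possible variation (a sketch, not part of the pipeline above):

cleanup:
  extends: .terraform:destroy
  environment:
    name: $TF_STATE_NAME
  rules:
    - if: $CI_COMMIT_MESSAGE =~ /destroy/
      when: on_success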

To run the destroy pipeline, commit with the message "destroy."
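If there is nothing else to commit, an empty commit is one way to trigger it (assuming the default branch is main):

git commit --allow-empty -m "destroy"
git push origin main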

In the screenshot below, notice how the last stage selected for execution is “cleanup” and not “deploy.”

gitlab ci cd destroy

Key Points

Implementing the Terraform workflow using GitLab CI/CD works well and follows a distinct pattern. The vast library of pipeline constructs and the ability to nest and reuse templates offer great flexibility. While one pipeline per repository may sound limiting and comes with a small learning curve, GitLab's ability to manage the config, state, remote backend, and automation in one place can outweigh those limitations.

Implementing CI/CD for Terraform projects on platforms like GitLab, which were traditionally built for application-layer components, can be a complex task. Spacelift is built specifically for IaC automation workflows. Once integrated with the IaC Git repository, infrastructure sets are managed as stacks.

The stacks are associated with contexts that provide all the required environment variables for execution. Spacelift also manages Terraform state files efficiently, which reduces the stress associated with managing them in a separate backend. Along with this, Spacelift offers many useful features that are critical to IaC projects today. Sign up for free or book a demo with one of our engineers.

Note: New versions of Terraform will be placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source fork of Terraform that will expand on Terraform's existing concepts and offerings. It is a viable alternative to HashiCorp's Terraform, forked from Terraform version 1.5.6. OpenTofu retains all the features and functionality that made Terraform popular among developers, while also introducing improvements and enhancements. OpenTofu is the future of the Terraform ecosystem, and having a truly open-source project to support all your IaC needs is the main priority.

Automate Terraform Deployments with Spacelift

Automate your infrastructure provisioning, and build more complex workflows based on Terraform using policy as code, programmatic configuration, context sharing, drift detection, resource visualization, and many more.
