
Terraform vs. Jenkins: Key Differences Explained



Automation is a core principle in DevOps, helping improve consistency and efficiency in deployment processes. Terraform and Jenkins are widely used to support these goals.

Terraform is focused on infrastructure as code (IaC), enabling the provisioning and management of infrastructure across multiple platforms. On the other hand, Jenkins centers on continuous integration and continuous delivery (CI/CD), which automate the building, testing, and deployment of applications.

Individually, both tools offer strong automation capabilities. Combined, they support fully automated workflows that handle infrastructure provisioning and application deployment with minimal manual input.

This article outlines their core functionalities and demonstrates how they can be integrated to build an end-to-end CI/CD pipeline.

  1. What is Terraform?
  2. What is Jenkins?
  3. What are the differences between Terraform and Jenkins?
  4. When to use Terraform vs. Jenkins?
  5. How to use Terraform with Jenkins

What is Terraform?

Terraform is an IaC tool that automates cloud infrastructure setup using HCL (HashiCorp Configuration Language). It works across providers like AWS, Azure, and Google Cloud, offering a unified way to manage infrastructure.


By using state files, Terraform keeps your infrastructure aligned with your code, preventing configuration drift. Its modular approach encourages code reuse and scalable deployments.

Hosting Terraform code in a repo supports version control and teamwork. It also integrates well with CI/CD pipelines, making infrastructure changes more consistent and automated.

Here are the key concepts that make Terraform effective at provisioning infrastructure.

Key concepts in Terraform

  • Providers: Allow Terraform to interact with cloud services such as Azure, AWS, Google Cloud, Spacelift, VMware, and more through the use of plugins
  • Resources: The main infrastructure objects you want to deploy, such as a Virtual Machine or virtual network
  • State: The state file (terraform.tfstate) keeps track of your infrastructure as you continue to deploy resources.
  • Variables: Parameterize specific values across your code, promoting a dynamic environment
  • Outputs: Specific values created from resources in your code that can be called later

Core steps of the Terraform workflow

  • terraform init: Initializes the Terraform working directory and downloads the necessary provider plugins. This step must run before any of the others.
  • terraform plan: Compares the current state of your infrastructure with the desired state defined in your configuration files and shows exactly what changes would take place, giving you the ability to review them and catch anything unexpected. It is not required, but it is highly recommended.
  • terraform apply: Executes the changes listed in the plan to bring your infrastructure to the desired state and updates the state file to reflect them. This step is required to actually deploy the resources defined in your Terraform code.
  • terraform plan -destroy: Previews the list of resources that would be destroyed.
  • terraform destroy: Deletes the resources managed by your Terraform code.
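In practice, these commands are run in sequence from the directory containing your configuration files. A minimal sketch, assuming the Terraform CLI is installed and the directory holds valid configuration:

```shell
# Initialize the working directory and download provider plugins
terraform init

# Review the proposed changes and save the plan to a file
terraform plan -out=tfplan

# Apply exactly the saved plan (what you reviewed is what runs)
terraform apply tfplan
```

Saving the plan with -out and applying that file ensures the changes you reviewed are exactly the changes executed, which also makes the sequence safe to run non-interactively in CI/CD.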

Now that we understand how some of these key concepts operate, let’s examine how a standard workflow looks in Terraform and why this tool plays an important role in automating your infrastructure.

Example: Deploying a resource in Azure with Terraform

To deploy resources into your Azure tenant, you will need to have the following installed: 

  • Terraform
  • Azure CLI (Log in to the Azure tenant with ‘az login’) 

The following is a basic file structure of the different files involved in creating a successful deployment in Terraform. 

projects/
│── provider.tf                  # cloud provider configuration
│── main.tf                      # stores core resource code
│── variables.tf                 # variable declarations
│── outputs.tf                   # output values of core resources
│── terraform.tfvars             # default variable values (optional)
│── .gitignore                   # ignore sensitive Terraform files

Now, we will review these files and the configurations included in them to perform a successful deployment of an Azure Virtual Machine. 

provider.tf

The provider file stores the configuration for the cloud service plugins you will be using, such as azurerm, aws, google, spacelift, etc. Terraform uses these plugins to communicate with each cloud provider's API, installing them during initialization and using them across your code during deployment:

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 4.0"
    }
  }
}

provider "azurerm" {
  features {}
}

main.tf

This is the main code for all the resources you want to deploy. Here, we will deploy an Azure resource group, Virtual Network, subnet, NIC, and virtual machine. 

resource "azurerm_resource_group" "rg" {
  name     = var.resource_group_name
  location = var.location
}

resource "azurerm_virtual_network" "vnet" {
  name                = "vm-vnet"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  address_space       = ["10.0.0.0/16"]
}

resource "azurerm_subnet" "subnet" {
  name                 = "vm-subnet"
  resource_group_name  = azurerm_resource_group.rg.name
  virtual_network_name = azurerm_virtual_network.vnet.name
  address_prefixes     = ["10.0.1.0/24"]
}

resource "azurerm_network_interface" "nic" {
  name                = "vm-nic"
  location            = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name

  ip_configuration {
    name                          = "vm-ipconfig"
    subnet_id                     = azurerm_subnet.subnet.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_windows_virtual_machine" "vm" {
  name                  = var.vm_name
  resource_group_name   = azurerm_resource_group.rg.name
  location              = azurerm_resource_group.rg.location
  size                  = "Standard_B2ms"
  admin_username        = var.admin_username
  admin_password        = var.admin_password
  network_interface_ids = [azurerm_network_interface.nic.id]

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Standard_LRS"
  }

  source_image_reference {
    publisher = "MicrosoftWindowsServer"
    offer     = "WindowsServer"
    sku       = "2019-datacenter"
    version   = "latest"
  }
}

variables.tf

As you noticed, we use variables across our main Terraform resource code. In the variable file, we specify the default values and the type that will be used for the variables. 

variable "location" {
  default = "East US"
}

variable "resource_group_name" {
  description = "Resource Group Name"
  type        = string
}

variable "vm_name" {
  description = "The name of the virtual machine"
  type        = string
}

variable "admin_username" {
  description = "Admin username for the VM"
  type        = string
  default     = "adminuser"
}

variable "admin_password" {
  description = "Admin password for the VM"
  type        = string
  sensitive   = true
}

outputs.tf

Outputting the values of our resources is important for creating a dynamic workflow. This approach exposes specific values from our main resources so we can reference them later and feed them into other resources in Terraform. 

This is also useful when using modules in Terraform, as you can reference specific configuration outputs in another module, which can improve the flexibility of your deployments and prevent you from hard-coding any values: 

output "vm_name" {
  value = azurerm_windows_virtual_machine.vm.name
}

output "vm_private_ip" {
  value = azurerm_network_interface.nic.private_ip_address
}
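When the VM configuration above is packaged as a reusable module, its outputs become the interface other code consumes. A hypothetical sketch (the module path and names are illustrative, not part of the files above):

```hcl
module "vm" {
  # Hypothetical module wrapping the resources defined earlier
  source              = "./modules/vm"
  resource_group_name = "dev-rg"
  vm_name             = "dev-vm"
}

# Reference the module's outputs instead of hard-coding values
output "deployed_vm" {
  value = "${module.vm.vm_name} (${module.vm.vm_private_ip})"
}
```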

terraform.tfvars

The tfvars file is mainly used to override the default values in the variables file or to supply the real values the variables need. Sensitive values can also be provided here, in which case the file should be excluded from version control (note the .gitignore entry in the file structure above). 

resource_group_name = "dev-rg"
vm_name             = "dev-vm"
admin_username      = "adminuser"
admin_password      = "P@ssw0rd!"
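A file named terraform.tfvars (or any file ending in .auto.tfvars) is loaded automatically; variable files with other names, or one-off values, must be passed explicitly. For example (prod.tfvars is a hypothetical file name):

```shell
# terraform.tfvars is picked up automatically
terraform plan

# A differently named variable file must be passed explicitly
terraform plan -var-file="prod.tfvars"

# Individual values can also be set on the command line
terraform plan -var="vm_name=test-vm"
```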

Once we have all these files in place, we are now ready to run our Terraform commands to deploy the resources. We can launch a command line window, change into the working directory with all the configuration files, and run the following Terraform commands in order: 

  • terraform init – Displays a message about initializing the backend and installing providers
  • terraform plan – Outputs a detailed plan of the resources that will be created
  • terraform apply – Generates a new plan and asks for approval, then deploys the changes and reports each step as it completes

This was a standard approach to deploying a resource in Terraform and automating the infrastructure piece in your workflows. 

Next, we will discuss Jenkins and its approach to automation. 

What is Jenkins?

Built in Java, Jenkins is an open-source automation tool designed to simplify application deployment using CI/CD workflows. It automates building, testing, and deployment across platforms like AWS, Azure, Google Cloud, and Kubernetes.


Jenkins pipelines are written in Groovy and define each stage of development, enabling continuous integration and delivery with minimal manual work. Its plugin-based architecture makes it highly adaptable and easy to integrate with external tools.

By storing pipeline configs in version control systems like Git, Jenkins supports collaboration and consistency in CI/CD practices. Below are key concepts that power Jenkins’ seamless automation:

Key concepts in Jenkins

  • Pipeline:  This is a workflow that incorporates various stages/jobs to build, test, and deliver/deploy an application using CI/CD principles. In Jenkins, the pipeline is stored in a file named “Jenkinsfile.” The file is stored in the root directory of the repository, which Jenkins reads to execute the stages. 
  • Stages: This first layer of tasks in your pipeline breaks up the pipeline tasks into multiple parts, allowing your builds and deployments to be clearer and more concise. Stages can include a single job or multiple ones and can be triggered manually or automatically through the pipeline. 
  • Jobs: These are a set of actions run in a specific order to fulfill your pipeline run. This run can include steps to perform builds, tests, run scripts/commands, and a full deployment. 
  • Steps: A single command that performs a single action, steps tell Jenkins what to do and serve as the building block for the Pipeline.
  • Triggers: These mechanisms initiate the pipeline. They can be anything from Git commits, dependencies on specific stages, pull requests, scheduled intervals, or external webhooks. Triggers enable seamless automation across your workflows. 
  • Builds/Tests: Application code is compiled, tested, and packaged into an artifact file. This stage can perform various tests, such as smoke tests, unit tests, or source code analysis, and can be triggered by commits/changes in the Git repository. 
  • Plugins: These are extensions that provide extra functionality and allow seamless integration with other services. Common plugins include Git, Terraform, Kubernetes, and more.
  • Artifacts: These are files created during the build process. Jenkins allows you to publish or archive artifacts on the Jenkins server locally or utilize an external platform such as Nexus to manage your artifact files. 
  • Agents: Agents are separate servers/containers primarily focused on running your pipeline and tasks, so that the Jenkins local server is not overloaded by executing your build jobs.
  • Security: Jenkins uses RBAC (role-based access control) to manage access. Access can also be managed through AD (Active Directory), LDAP, and SSO (SAML and OAuth).
  • Credential management: Jenkins stores sensitive information such as SSH keys, passwords, and tokens, and can also integrate with a cloud provider's key store.
  • Backup/disaster recovery: Jenkins supports backing up Jobs, Configurations, and other plugin information.
  • Monitoring/Logging: Jenkins integrates with other monitoring and logging tools for efficient tracking and analysis.

How to install Jenkins

Jenkins requires a server/container to host the tool. Hosting options include Linux, Kubernetes, Docker, Windows, and more. Follow the steps in the official Jenkins installation documentation to install and configure Jenkins properly.

Below is an example of how we can perform the CI (continuous integration) piece of the CI/CD workflow by doing a build/test of our application code. 

Build and test

In a software deployment process, it is important to build our artifact file and test it thoroughly with various modes. We will review some of the concepts we covered to demonstrate how a Jenkins pipeline operates. 

In this example, we will use Maven to build our artifact file from our source code, which is hosted in a sample Git repository. We will then perform a unit test on our project to ensure the code is properly validated and compiled. 

Prerequisites

In a production environment, we will need to leverage a build agent, which is a separate server/container with all the tools installed for each stage in the pipeline to run successfully. These might include Python, Docker, Maven, Git, etc. 

However, in the following examples, we will be using our local Jenkins server as our build agent. Therefore, we will need to make sure all of the required applications/plugins are installed on the host before running our pipeline.

Applications

In this example, we will use an Ubuntu Linux machine to host Jenkins, so we will install the necessary applications with the APT package manager to create a build/test pipeline with Maven.

We will need to install the following applications on the Jenkins server: 

  • OpenJDK 21
  • Maven
sudo apt update
sudo apt install openjdk-21-jdk -y
sudo apt install maven -y
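After installing, it is worth confirming the tools are available on the PATH before running the pipeline:

```shell
java -version    # should report OpenJDK 21
mvn -version     # should report Maven and the JDK it is using
```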

Plugins

In order to install the plugins, follow the steps below:

  1. Log in to the Jenkins server with your Username/Password
  2. Go to Manage Jenkins > Plugins 
  3. Search for and Install the following Plugins:
    • Git
    • NodeJS
    • Pipeline Utility Steps
    • Pipeline Maven Integration

Pipeline setup

Once we have all the prerequisites installed, we can begin creating our pipeline:

  1. From the Jenkins Dashboard, click New Item, select Pipeline, and name the pipeline.
  2. Scroll down to the Pipeline section, copy/paste the following into the Pipeline Script, and click Save:
pipeline {
    agent any

    stages {
        stage('Fetch Code') {
            steps {
                git branch: 'artifact-jenkins-spacelift', url: 'https://github.com/faisalhashem/projects.git'
            }
        }
        stage('Build'){
            steps {
                sh 'mvn install'
            }
        }
        stage('Test'){
            steps {
                sh 'mvn test'
            }
        }
    }
}

Stage Breakdown: 

  • Fetch Code: Steps to retrieve the source code for the sample application (source code is from Visual Path)
  • Build: Create the Artifact file from the source code retrieved from the previous stage.
  • Test: Perform Unit tests on the source code to validate functionality.
  3. Click Build Now in the sidebar to trigger the pipeline.
  4. Once the pipeline runs successfully, navigate to the build (click #1 under the Builds section), go to Workspaces, click the directory link, and browse to the target folder. You should now see your artifact file (vprofile-v2.war). To view the unit tests Maven performed, open the Console Output section within the build pane. 

Overall, this demonstrates how we can use Jenkins to automate workflows in the software development lifecycle and connect seamlessly with external tools to build a complete CI/CD pipeline.

What are the differences between Terraform and Jenkins?

Terraform and Jenkins serve fundamentally different purposes in DevOps workflows. Terraform handles the infrastructure lifecycle (create, update, destroy), while Jenkins handles the software lifecycle (build, test, release). However, they are often used together in DevOps pipelines.

Terraform vs Jenkins table comparison

The table below summarizes the main differences between Terraform and Jenkins:

| Feature | Terraform | Jenkins |
|---|---|---|
| Functionality | Infrastructure as code (IaC) – provisions and manages infrastructure resources | Continuous integration and continuous delivery – automates workflows to build, test, and deploy apps |
| Language | HCL (HashiCorp Configuration Language) | Groovy |
| Open source | Yes | Yes |
| CI/CD capability | No | Yes |
| Infrastructure provisioning | Yes | No |
| State management | Manages resources through the state file | Does not require a state file |
| Configuration management | No | Yes – limited |
| Execution type | Declarative | Imperative |
| Plugin/integration support | Integrates with various cloud providers | Extends with numerous plugins for automation |
| Use case | Manage infrastructure resources | Build, test, and deploy application source code |
| Community support | Large | Large |
| Dependency management | Yes, for infrastructure resources only | Yes, for build/test/deployment of source code dependencies |
| Multicloud support | Yes (Azure, AWS, GCP, etc.) | Yes, with plugins |
| Script-based execution | Yes | Yes |
| Extensibility | Supports modules for reusability | Supports plugins for additional features |

Terraform vs. Jenkins vs. Ansible

Terraform, Jenkins, and Ansible serve different purposes in DevOps pipelines: Terraform is for infrastructure provisioning, Jenkins for CI/CD automation, and Ansible for configuration management and orchestration.

In many DevOps pipelines, all three are used together:

  • Jenkins triggers pipelines.
  • Terraform provisions infrastructure.
  • Ansible configures and manages the provisioned servers.

This modularity allows teams to separate concerns while achieving full automation across infrastructure and application layers.

When to use Terraform vs. Jenkins?

Terraform is best used for provisioning and managing infrastructure as code across cloud providers. It handles the lifecycle of cloud resources declaratively, using state management to track changes and dependencies. On the other hand, Jenkins is ideal for orchestrating CI/CD pipelines, automating tasks like building, testing, and deploying applications.

However, a common pattern is using Jenkins pipelines to trigger Terraform runs, for example, to provision infrastructure as part of a deployment process.

These tools are complementary: Jenkins automates pipelines, while Terraform manages infrastructure state and change.

DevOps accelerator Cloud Posse invested two years of effort trying to adopt tools like Atlantis, Jenkins, and Terraform Cloud, but those tools came nowhere near delivering what was needed. Then a customer suggested they take a look at a new addition to the IaC CI/CD tools market that had caught their attention: Spacelift. Now, every customer IaC implementation they complete includes Spacelift as a key part of the final DevOps IaC environment they set up.


How to use Terraform with Jenkins

Jenkins uses plugins created by different community members, allowing users to add more automation functionality to their CI/CD pipelines. The Terraform plugin in Jenkins is commonly used to automate the infrastructure deployment piece, giving users the ability to provision infrastructure alongside software deployment in a CI/CD pipeline. 

Typically, before initiating software deployment workflows, you would coordinate with the Infrastructure team to provision necessary resources within cloud platforms such as Azure, AWS, or GCP. Once completed, developers could then trigger the software deployment cycle.
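The integration itself is usually just a pipeline stage that shells out to the Terraform CLI. A minimal sketch of such a stage, assuming the Terraform CLI (or plugin) is available on the agent and the configuration files are already in the workspace:

```groovy
pipeline {
    agent any

    stages {
        stage('Provision Infrastructure') {
            steps {
                // Run Terraform non-interactively inside the pipeline
                sh 'terraform init -input=false'
                sh 'terraform plan -out=tfplan -input=false'
                sh 'terraform apply -input=false tfplan'
            }
        }
    }
}
```

Applying a saved plan file (tfplan) keeps the pipeline from executing changes that were never reviewed.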

Check out this article for a full Terraform with Jenkins tutorial.

Why use Spacelift for your IaC?

When it comes to infrastructure orchestration, generic CI/CD platforms such as Jenkins or CircleCI are not specialized enough to manage everything you might need in your workflows. That’s where Spacelift shines.

Spacelift is an IaC management platform that helps you implement DevOps best practices. It provides a dependable CI/CD layer for infrastructure tools including OpenTofu, Terraform, Pulumi, Kubernetes, Ansible, and more, letting you automate your IaC delivery workflows.

Spacelift is designed for your whole team. Everyone works in the same space, supported by robust policies that enforce access controls, security guardrails, and compliance standards. This means you can manage your DevOps infrastructure far more efficiently without compromising on safety.

Let’s see how Spacelift avoids the limitations of generic CI/CD tools:

  • Policies to control what kind of resources engineers can create, what parameters they can have, how many approvals you need for a run, what kind of task you execute, what happens when a pull request is open, and where to send your notifications
  • Stack dependencies to build multi-infrastructure automation workflows with dependencies, having the ability to build a workflow that, for example, generates your EC2 instances using Terraform and combines it with Ansible to configure them
  • Self-service infrastructure via Blueprints, or Spacelift’s Kubernetes operator, enabling your developers to do what matters – developing application code while not sacrificing control
  • Creature comforts such as contexts (reusable containers for your environment variables, files, and hooks), and the ability to run arbitrary code
  • Drift detection and optional remediation

To learn more about Spacelift, create a free account today or book a demo with one of our engineers.

Key points

Terraform and Jenkins both support DevOps automation, but they serve distinct roles. Terraform handles infrastructure as code, defining and provisioning cloud resources consistently and scalably. By contrast, Jenkins automates CI/CD pipelines, simplifying build, test, and deployment processes. 

While each can work independently, using them together creates a powerful, automated workflow that unifies infrastructure management and software delivery, boosting efficiency and reducing manual errors.

Automate Terraform Deployments with Spacelift

Automate your infrastructure provisioning, and build more complex workflows based on Terraform using policy as code, programmatic configuration, context sharing, drift detection, resource visualization, and many more.
