This article explains how to use Terraform to automate the setup and deployment of Docker containers. We will use Docker Desktop for Windows to demonstrate how to deploy a demo container with the Terraform Docker provider.
There are several use cases in which Terraform can be useful in the Docker containers context.
- Automate the provisioning of the infrastructure where Docker containers will run – for example, provision the AWS EC2 instances that host your Docker containers
- Build a development environment for your DevOps team – to ensure consistency in the development process, an image can be created with all of your DevOps tools (Terraform included), and your engineers can create containers on their machines without needing to install and configure all of the tools that help with their workflows
- Using Terraform’s Docker provider to manage Docker resources – you can manage the lifecycle of images, containers, volumes, and networks declaratively with Terraform
- Managing Docker containers orchestrator with Terraform – manage the Kubernetes or Docker Swarm setups, plus the actual deployment of the Docker containers
- CI/CD pipeline integration – there are multiple integrations that you could leverage using Terraform and Docker, from simply building a Docker image to pushing it to a registry and the actual creation of the container
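As a sketch of that last use case, the Docker provider's docker_registry_image resource can build an image and push it to a registry in a single apply (the registry URL and repository name below are hypothetical):

```hcl
# Build the image locally; the registry URL and repository name are hypothetical.
resource "docker_image" "app" {
  name = "registry.example.com/team/app:1.0.0"

  build {
    context = "." # directory containing the Dockerfile
  }
}

# Push the built image to the registry; keep_remotely leaves the remote
# image in place when the Terraform resource is destroyed.
resource "docker_registry_image" "app" {
  name          = docker_image.app.name
  keep_remotely = true
}
```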
By default, the Docker image resource in Terraform pulls an image from a registry.
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "3.0.2"
    }
  }
}

resource "docker_image" "alpine" {
  name = "alpine:latest"
}
In the above example, we simply pull the latest Alpine image.
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
As you can see, we don’t have any images before applying the code.
Let’s apply this code and see what happens:
Terraform will perform the following actions:
# docker_image.alpine will be created
+ resource "docker_image" "alpine" {
+ id = (known after apply)
+ image_id = (known after apply)
+ name = "alpine:latest"
+ repo_digest = (known after apply)
}
Plan: 1 to add, 0 to change, 0 to destroy.
docker_image.alpine: Creating...
docker_image.alpine: Creation complete after 4s [id=sha256:0b4426ad4bf25e13fb09112b9dcb5d5b09b3c5684599654583913b2714a705a2alpine:latest]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
If we now check the images, we will see the latest Alpine version downloaded:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
alpine latest 0b4426ad4bf2 6 weeks ago 8.83MB
Now, let’s use the same docker_image resource to build a Docker image. First things first, we need to define a Dockerfile. In this example, we install a couple of tools in an Alpine image:
FROM alpine:3.20

RUN apk add --no-cache \
    curl \
    bash \
    git \
    wget \
    unzip \
    make \
    build-base \
    py3-pip \
    openssh-client

CMD ["bash"]
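A docker_image resource that builds this Dockerfile could look like the following sketch (the dev_image name and 1.0.0 tag match the apply output that follows; the build context is assumed to be the directory containing the Dockerfile):

```hcl
resource "docker_image" "dev_image" {
  name = "dev_image:1.0.0"

  build {
    context    = "."          # directory containing the Dockerfile
    dockerfile = "Dockerfile" # the provider's default, shown for clarity
    tag        = ["dev_image:1.0.0"]
  }
}
```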
We have added a name for the image, specified a context (the working directory that contains the Dockerfile), and a tag for our image.
If we apply the code, the image will be built in under 30 seconds:
docker_image.dev_image: Creating...
docker_image.dev_image: Still creating... [10s elapsed]
docker_image.dev_image: Still creating... [20s elapsed]
docker_image.dev_image: Creation complete after 25s [id=sha256:5e98f06cfb410615bb90e12749cc509ad2c46cf721d4c113d048652ea560982adev_image]
Now, let’s check and see if it is available:
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
dev_image 1.0.0 5e98f06cfb41 29 seconds ago 303MB
The Terraform Docker provider is a plugin within the Terraform ecosystem that allows you to manage the lifecycle of Docker resources. It uses the Docker API to interact with Docker and provision the desired infrastructure.
It supports the following:
- docker_config – manages the configuration of a Docker service in a Swarm
- docker_container – manages a Docker container
- docker_image – pulls a Docker image from a registry, but also can build images
- docker_network – manages Docker networks
- docker_plugin – manages the lifecycle of Docker plugins
- docker_registry_image – manages the lifecycle of an image that is part of a registry
- docker_secret – manages the secrets of Docker services in a Swarm
- docker_service – manages the lifecycle of a Docker service
- docker_tag – creates a Docker tag
- docker_volume – manages Docker volumes
Follow these steps to set up and deploy Docker containers using Terraform:
- Set up Windows Subsystem for Linux 2 (WSL2)
- Install Docker for Windows Desktop
- Configure Terraform Docker provider
- Add a Docker image to the main.tf file
- Initialize the configuration
- Review the running containers
- Add an NGINX server
- Deploy other resources
- Clean up
First, we need to set up the Windows Subsystem for Linux if you have not already done so. To do this, follow Microsoft’s WSL installation instructions.
If you don’t already have Docker installed, you can download it for free.
During the installation process, you should also check the ‘Use the WSL 2 based engine’ option. This uses the Windows Subsystem for Linux, which provides better Docker performance.
Once Docker is installed, for our demo purposes, we will need to expose the daemon without TLS.
Go to the Docker Desktop for Windows settings and make sure ‘Expose daemon on TCP:localhost:2375 without TLS’ is ticked. Apply the settings, and Docker will restart.
To set up the Terraform Docker provider, create a file called main.tf and add the following provider block (version 2.23.1 was the latest at the time of writing). You can also clone the file from the GitHub repository.
localhost:2375 is the default address the Docker daemon listens on when exposed over TCP without TLS.
terraform {
  required_providers {
    docker = {
      source  = "kreuzwerker/docker"
      version = "2.23.1"
    }
  }
}

provider "docker" {
  host = "tcp://localhost:2375"
}
Note: for reference, to connect to a Linux machine with Docker installed, you would use the following host line instead:
host = "unix:///var/run/docker.sock"
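As a sketch, a minimal provider block for such a Linux host would then look like this:

```hcl
provider "docker" {
  # Connect to the local Docker daemon over its Unix socket.
  host = "unix:///var/run/docker.sock"
}
```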
Next, we need to add the container configuration to the main.tf file:
# Create a Docker image resource for ubuntu with latest as the tag.
resource "docker_image" "ubuntu" {
  name = "ubuntu:latest"
}

# Create a Docker container using the latest ubuntu image.
resource "docker_container" "webserver" {
  image             = docker_image.ubuntu.latest
  name              = "terraform-docker-test"
  must_run          = true
  publish_all_ports = true

  command = [
    "tail",
    "-f",
    "/dev/null"
  ]
}
The docker_image resource pulls a Docker image to a given Docker host from a Docker registry – in this case, the latest Ubuntu image.

The docker_container resource manages the lifecycle of a Docker container. In the above example, we specify the container’s image and name.

Note that the must_run parameter is set to true, meaning the Docker container will be kept running. publish_all_ports publishes all of the container’s ports. The command argument sets the command used to start the container. Here we use the tail command with the follow option -f on /dev/null, which runs indefinitely and keeps the container alive.
Run terraform init in the directory that holds the configuration file:
Run terraform plan and then terraform apply. Observe the output.
Back in the Docker Desktop for Windows GUI, you can see the container running, or you can list it on the command line using docker container ls.
Let’s add an NGINX (web server) image and container resource to the main.tf config file:
resource "docker_image" "nginx" {
  name         = "nginx:latest"
  keep_locally = false
}

resource "docker_container" "nginx" {
  image = docker_image.nginx.latest
  name  = "nginx-test"

  ports {
    internal = 80
    external = 8000
  }
}
This specifies the internal and external ports that allow the container to be accessed from the localhost. Run terraform apply again and accept the changes.
You should now see two containers running. Note the ports against the nginx-test container.
Browse to http://localhost:8000/ to view the default web page:
You can also use Terraform to deploy other aspects of Docker containers, such as volumes, secrets, tags, and networking.
resource "docker_network" "private_network" {
  name = "my_network"
}

resource "docker_secret" "foo" {
  name = "foo"
  data = base64encode("{\"foo\": \"s3cr3t\"}")
}

resource "docker_volume" "shared_volume" {
  name = "shared_volume"
}

# The source image must exist on the machine running the Docker daemon.
resource "docker_tag" "tag" {
  source_image = "xxxx"
  target_image = "xxxx"
}
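To tie these resources together, a container can join the network and mount the volume defined above. This is a sketch; the image, container name, and mount path are illustrative:

```hcl
resource "docker_container" "app" {
  # Illustrative image and container name.
  image = "alpine:latest"
  name  = "app-demo"

  # Attach the container to the network created above.
  networks_advanced {
    name = docker_network.private_network.name
  }

  # Mount the shared volume at an illustrative path.
  volumes {
    volume_name    = docker_volume.shared_volume.name
    container_path = "/data"
  }
}
```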
To clean up all the resources, type terraform destroy.
To be more effective with Docker, you should keep your images as small as possible. This is the most important best practice that you can implement. To do that, you can:
- Use minimal images such as Alpine
- Minimize RUN layers by running multiple commands in one layer using &&
- Use multi-stage builds – build your application in a stage, then copy the binary to a new stage that uses a minimal image
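As a sketch of a multi-stage build, the following Dockerfile compiles a Go binary in a full build image and copies only the binary into a minimal Alpine image (the application name and paths are illustrative):

```dockerfile
# Stage 1: build the binary with the full Go toolchain.
FROM golang:1.22 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: copy only the compiled binary into a minimal image.
FROM alpine:3.20
COPY --from=builder /app /usr/local/bin/app
CMD ["app"]
```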
Here are some other best practices you should follow to make your life easier:
- Limit resource usage (CPU & Memory) – This will prevent containers from consuming excessive resources, thus improving the performance of other services on the host.
- Leverage Docker volumes – If your application requires access to data that should be kept regardless of the container’s state, volumes are the answer. Separating application and data logic allows containers to remain stateless.
- Keep containers stateless – The containers should never store the state because this will sacrifice scalability. Keep the state in Redis or a database instead of the container.
- Regularly scan for vulnerabilities – Use Clair, Trivy, or other tools to scan for security vulnerabilities and fix them as soon as possible.
- Leverage orchestration tools – Use K8s, Docker Swarm, or AWS ECS to manage the scaling and load balancing of your containers.
Spacelift supports all Terraform FOSS features, and you can easily leverage the Docker provider from within the product.
With Spacelift, you get:
- Policies to control what kind of resources engineers can create, what parameters they can have, how many approvals you need for a run, what kind of task you execute, what happens when a pull request is open, and where to send your notifications.
- Stack dependencies to build multi-infrastructure automation workflows, with the ability to build a workflow that, for example, generates your EC2 instances using Terraform and combines it with Ansible to configure them.
- Self-service infrastructure via Blueprints, or Spacelift’s Kubernetes operator, enabling your developers to do what matters – developing application code while not sacrificing control.
- Creature comforts such as contexts (reusable containers for your environment variables, files, and hooks), and the ability to run arbitrary code.
- Drift detection and optional remediation.
If you want to learn more about what you can do with Spacelift, check out this article.
Using Terraform with Docker allows you to automate the provisioning and management of Docker containers, ensuring consistent and repeatable deployments. In this article, we have shown how to use Terraform to build a Docker image and to set up Docker containers on Windows.
If you want to elevate your Terraform management, create a free account for Spacelift today or book a demo with one of our engineers.
Note: New versions of Terraform are placed under the BUSL license, but everything created before version 1.5.x stays open-source. OpenTofu is an open-source version of Terraform that expands on Terraform’s existing concepts and offerings. It is a viable alternative to HashiCorp’s Terraform, being forked from Terraform version 1.5.6.
Manage Terraform Better with Spacelift
Build more complex workflows based on Terraform using policy as code, programmatic configuration, context sharing, drift detection, resource visualization and many more.