Docker Tutorial for Beginners – Introduction & Getting Started

Docker is a platform for building and running containers. Containers are software units that package source code and its dependencies inside isolated environments. They’re usually ephemeral, with the contents of a container’s filesystem disposed of when it stops, but you can also store data persistently using features such as volumes.

Containers can be started anywhere an OCI container runtime is available, whether on your workstation with Docker, or in the cloud using orchestrators like Kubernetes. This narrows the gap between development and production.

In this guide, we’ll walk through installing and getting started with Docker. You’ll learn the basic concepts and see practical examples of the most common container management tasks.

  1. Getting Started With Docker
  2. Docker Basics
  3. Using Docker
  4. Advanced Topics

Getting Started With Docker

Nowadays Docker comes in two main flavors: Docker Engine and Docker Desktop. Engine is a daemon and CLI for Linux systems, whereas Desktop uses virtualization to provide a consistent Docker experience on Windows, Mac, and Linux. It includes a graphical interface for managing your containers, in addition to the docker CLI from Engine. Although this article focuses on the traditional CLI experience, all the actions you’ll see in the following sections can also be achieved using Desktop’s GUI.

If you’re on Windows or Mac, Desktop is the easiest way to install Docker. Head to the documentation’s installation page and download the correct package for your system. Run the installer and then start Docker Desktop from your applications menu. Now you’re ready to run docker commands in your terminal.
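
To confirm that the CLI can reach the Docker Desktop daemon, you can try a couple of read-only commands (a quick sanity check; the exact output will vary with your installed version):

$ docker version
$ docker info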

To get Docker Engine on Linux, you can either download a prebuilt static binary, or follow the steps in the documentation to install from your distribution’s package manager. You’ll need to add Docker’s repository and GPG signing key first.

Installing Docker Engine on Ubuntu

The following steps work for Ubuntu and similar distributions. First, make sure you’ve got all the prerequisite packages for the installation process:

$ sudo apt-get update
$ sudo apt-get install -y ca-certificates curl gnupg lsb-release

Next, download and save the GPG key used to sign the Docker repositories:

$ sudo mkdir -p /etc/apt/keyrings
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg

This allows the apt package manager to verify the authenticity of Docker’s packages. Now add the repository to your package lists:

$ echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

The command automatically selects the correct repository for your operating system version and processor architecture.
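
For example, on a 64-bit (amd64) machine running Ubuntu 22.04 "jammy", the generated /etc/apt/sources.list.d/docker.list would contain a line similar to the following (your architecture and release codename may differ):

deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu jammy stable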

Update your package lists to include the new repository:

$ sudo apt-get update

Finally, install the docker-ce, docker-ce-cli, and containerd packages:

$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io

They have the following roles:

  • docker-ce – The Docker daemon that manages your containers.
  • docker-ce-cli – The CLI you use to interact with the daemon.
  • containerd.io – The containerd container runtime; it wraps operating system features such as cgroups to start and run your containers on request from the Docker daemon.
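
On systemd-based distributions such as Ubuntu, the Docker service is started automatically once the packages are installed. If you want to confirm the daemon is up before continuing, you can check it with systemctl (assuming your system uses systemd):

$ sudo systemctl status docker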

Complete the installation by adding yourself to the docker group, which allows you to run docker CLI commands without using sudo. Log out and back in again to apply the change.

$ sudo groupadd docker
$ sudo usermod -aG docker $USER
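
If you'd rather not log out straight away, you can open a shell with the new group membership applied and confirm that the CLI works without sudo (a quick check, not a replacement for logging back in):

$ newgrp docker
$ docker version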

Checking Docker Is Working

Once you’ve installed Docker, you can check it’s working by starting a basic container. The hello-world image on Docker Hub is a good choice.

Run the following command to start a container with the image:

$ docker run hello-world:latest

You should see output similar to the following:

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
2db29710123e: Pull complete 
Digest: sha256:c77be1d3a47d0caf71a82dd893ee61ce01f32fc758031a6ec4cf1389248bb833
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

The first few lines show the Docker daemon recognizing that you’ve never used the hello-world image before, so it has to download it from Docker Hub. It then starts the container and streams its output to your terminal. All the lines from “Hello from Docker” onwards are emitted by the process inside the container.

Your Docker installation is ready to use.

Docker Basics

Before you start properly using Docker, it’s important to understand the two main concepts you’ll encounter:

  • Images – All containers are started from an image. The image defines the initial state of the container’s filesystem. It’s similar to the install media you’d use with traditional virtual machines. Prebuilt images are available in public repositories such as Docker Hub, including options for popular operating systems and applications.
  • Containers – A container is an instance of an image. It’s a running process independent of your other containers and the existing processes on your host. You can start multiple containers that use the same image simultaneously. Containers have a writable filesystem layer applied on top of the image, allowing modifications to be made.

Now you can begin confidently working with containers (running processes) and images, the reusable templates you start containers from.
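
As a quick illustration of the relationship, you can pull a single image and start several independent containers from it (the container names here are arbitrary examples; the -d and --name flags are covered in the sections below):

$ docker pull nginx:latest
$ docker run -d --name web1 nginx:latest
$ docker run -d --name web2 nginx:latest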

Check out the Docker architecture overview.

Using Docker

Packaging software as a container makes it more portable, allowing you to eliminate discrepancies between environments. You can use the same container on your laptop, in production, and within your CI/CD infrastructure. Take a look at how Spacelift uses Docker containers to run CI jobs. You can bring your own Docker image and use it as a runner to speed up deployments that leverage third-party tools. Spacelift’s official runner image can be found here.

Ready to use Docker? Let’s jump in!

Running a Container

The docker run command is used to start containers from an image. You ran a simple container in the section above when you checked your installation:

$ docker run hello-world:latest

The command accepts the name of the image to run as its argument. The last part of the image name, after the colon, is the tag, which is used to identify different versions and variants.

The latest tag is a convention for the newest release of the image, but beware, this is not standardized and can expose you to breaking changes. It’s best practice to use a more precise tag when available, such as 1.1.0.
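
For example, instead of relying on latest, you can pull NGINX pinned to a specific release line (docker pull downloads the image without starting a container; the exact tags on offer depend on the image's publisher, so check the repository's tag list on Docker Hub):

$ docker pull nginx:1.25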

Detaching From Your Container

The hello-world image runs a process and then exits immediately. Most real-world containerized applications aren’t like this, however. They’re normally long-lived processes such as servers that run indefinitely, until the container is stopped.

The -d flag instructs the Docker CLI to detach from the container, leaving the process running in the background. You’ll be dropped back to your terminal prompt.

$ docker run -d nginx:latest
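
The command prints the new container's ID and returns you to your prompt while NGINX keeps running in the background. Giving the container a memorable name with --name makes it easier to refer to in later commands ("webserver" is just an example):

$ docker run -d --name webserver nginx:latest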

Creating an Image

Images are created from Dockerfiles. These are text files containing a list of instructions that assemble the image’s filesystem. Copy the following sample and save it to a file called Dockerfile in a new directory:

FROM node:latest
COPY app.js .
ENTRYPOINT ["node"]
CMD ["app.js"]

Next, save the following JavaScript code to app.js in the same directory:

console.log("It works!");

Now run the following command to build your image:

$ docker build -t demo-image:latest .
...
Successfully built 80890ed154fa
Successfully tagged demo-image:latest

The command instructs Docker to build the Dockerfile in your working directory (indicated by the . argument) and tag the output as demo-image:latest. The following steps occur:

  1. The FROM statement instructs Docker to use the node:latest image as the base for your new image. The instructions that come afterwards apply on top of the node:latest image’s filesystem.
  2. The COPY statement copies your application’s source code from your working directory, into the container.
  3. The image is configured so node is run when the container starts, with your app’s code as its argument.

Try starting a container using your image:

$ docker run demo-image:latest
It works!

Your code has been executed inside a containerized Node.js environment!
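
Because the Dockerfile sets node as the ENTRYPOINT and app.js only as the default CMD, any arguments you append to docker run replace the CMD while keeping the ENTRYPOINT. For example, the following prints the Node.js version bundled in the image instead of running your script:

$ docker run demo-image:latest --version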

Pushing Images to a Registry

Pushing to a registry is the next step in image authorship. At the moment, your image is confined to your own machine. Registries allow images to be distributed among peers, team members, and the broader community.

Docker Hub is the biggest registry, and you can sign up for free. Before you can push to your account, you’ll need to authenticate the Docker CLI by running docker login and following the prompts.

$ docker login

Next, tag your image so that it references your Docker Hub username. The following command adds another tag to your existing image – replace <DOCKER_HUB_USERNAME> with your real Docker Hub username:

$ docker tag demo-image:latest <DOCKER_HUB_USERNAME>/demo-image:latest

Use the docker push command to upload your image. The Docker CLI automatically infers the registry URL from the image’s tag.

$ docker push <DOCKER_HUB_USERNAME>/demo-image:latest

Now other users can start containers from your image, even when it’s not already on their machine!

$ docker run <DOCKER_HUB_USERNAME>/demo-image:latest

Listing Containers on Your Host

You can list the containers on your host with the docker ps command. This defaults to only showing currently running containers:

$ docker ps
CONTAINER ID   IMAGE                           COMMAND                  	CREATED         STATUS     
0000ac4e7eba   nginx:latest                    "/docker-entrypoint...."   	3 seconds ago   Up 1 minute

To see all your containers, including stopped and exited ones, add the -a flag:

$ docker ps -a
CONTAINER ID   IMAGE                           COMMAND                  	CREATED         STATUS     
0000ac4e7eba   nginx:latest                    "/docker-entrypoint...."   	3 seconds ago   Up 1 minute
8cf6ed7f95fe   hello-world:latest              "/hello"                 	2 minutes ago   Exited (0) 2 minutes ago
a45b50521fc0   hello-world:latest              "/hello"                 	2 minutes ago   Exited (0) 2 minutes ago

The command provides each container’s ID and current status. To get more detailed information about an individual container, pass its ID or name to docker inspect:

$ docker inspect 8cf6ed7f95fe

This produces verbose JSON output that includes all the data associated with the container.
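
The --format flag extracts a single field from that JSON using a Go template, which is handy in scripts:

$ docker inspect --format '{{.State.Status}}' 8cf6ed7f95fe
exited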

Starting, Stopping, and Deleting Containers

Detached containers with long-lived processes can be stopped by passing their ID or name to the docker stop command:

$ docker run --name nginx -d nginx:latest
87a116e59f1815fda7dd666f136e5f9f3d085bbd80889c0e0e5836e95e9511ed

$ docker stop nginx
nginx

Use the docker start command to restart the container with the same configuration. The container’s writable filesystem layer is preserved across stop and start, so any changes made while it was previously running are still there; they only disappear once the container is removed.

$ docker start nginx

To remove a container, run docker rm with the container’s ID or name:

$ docker rm nginx

If the container’s running, you’ll need to add the --force flag to confirm your intentions:

$ docker rm nginx --force
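
If stopped containers have piled up over time, you can remove them all in one go with docker container prune (it asks for confirmation before deleting anything):

$ docker container prune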

Listing and Removing Images

The docker images command lists the images on your host, including their age and size:

$ docker images
REPOSITORY                    TAG       IMAGE ID       CREATED         SIZE
nginx                         latest    3964ce7b8458   6 days ago      142MB
hello-world                   latest    feb5d9fea6a5   15 months ago   13.3kB

To delete an image, pass its ID or tag to docker rmi:

$ docker rmi hello-world:latest

Images can’t be deleted while they’re still referenced by a container, whether running or stopped. Remove the container first, then the image.
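
To reclaim disk space from images you no longer need, you can run docker image prune; by default it only removes dangling (untagged) images, while the -a flag removes every image not used by a container:

$ docker image prune
$ docker image prune -a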

Check out our Docker CLI commands cheat sheet.

Advanced Topics

Now that you’ve learned the basics, here are some more advanced techniques for real-world applications.

Port Forwarding Into Containers

Port forwarding allows you to access applications running inside a container using your host’s network interfaces. In some of the examples above, a container running the NGINX web server was started using the nginx:latest image. As no port was bound, NGINX couldn’t be accessed from outside your Docker host.

To bind a port, add the -p flag when you start a container:

$ docker run --name nginx -p 8800:80 -d nginx:latest

This example binds port 8800 on your host to port 80 inside the container, which NGINX listens on.

Now you can send a request to localhost:8800 and get a response from your containerized web server:

$ curl localhost:8800
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

Using Volumes to Store Container Data

Containers are meant to be ephemeral. Although their filesystems are writable, any changes you make are lost once the container is removed and replaced. You shouldn’t assume a container will run indefinitely either, as it could be stopped by a crash in your application or a host reboot.

Volumes are a mechanism for persisting data inside containers. Volumes are storage units that you mount into the container’s filesystem. Data written to the volume is stored outside the container, so it can be reattached to a replacement container after the original stops.

You can mount a volume using the -v flag for docker run:

$ docker run -v my_volume:/container/mount/point my-image:latest

Alternatively, you can bind mount a directory in your host’s filesystem directly into the container:

$ docker run -v $PWD:/container/mount/point my-image:latest

This approach is ideal for development use. Changes you make to a bound source code directory will be immediately reflected in the container.
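
Named volumes are managed by Docker itself, and the docker volume subcommands let you create, list, and inspect them (my_volume matches the name used in the first example above):

$ docker volume create my_volume
$ docker volume ls
$ docker volume inspect my_volume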

Viewing Container Logs

Docker automatically collects logs from the standard output and error streams of your container’s foreground process. You can access these logs by passing the container’s ID or name to the docker logs command:

$ docker logs nginx
172.17.0.1 - - [20/Dec/2022:21:01:52 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" "-"
172.17.0.1 - - [20/Dec/2022:21:10:02 +0000] "GET /about HTTP/1.1" 404 153 "-" "curl/7.81.0" "-"
172.17.0.1 - - [20/Dec/2022:21:10:04 +0000] "GET /demo HTTP/1.1" 404 153 "-" "curl/7.81.0" "-"
172.17.0.1 - - [20/Dec/2022:21:10:06 +0000] "GET / HTTP/1.1" 200 615 "-" "curl/7.81.0" "-"

This allows you to review your application’s activity and spot any errors.

To continually stream new logs into your terminal, add the --follow flag to the command:

$ docker logs nginx --follow

Hit Ctrl+C to stop watching the logs.
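
For busy containers, the --tail and --since flags help narrow the output down to what you care about:

$ docker logs nginx --tail 50
$ docker logs nginx --since 10m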

Getting a Shell to a Container

One final tip: sometimes you might need to get inside a container, to debug problems or run admin tools included with an image. You can run a process in an existing container using the docker exec command:

$ docker exec <CONTAINER> <COMMAND>

Pass the container ID or name as <CONTAINER>, then the command and its arguments as <COMMAND>. If you use the name of a shell as the command, you’ll get an interactive terminal session within the container. For this to work, you must also pass the -i and -t flags to docker exec to attach your terminal’s standard input stream and allocate a TTY.

$ docker exec -it nginx bash
root@58661670f1ad:/# 

Now you can inspect the container’s filesystem and run any commands you need.
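
Not every image includes bash; minimal images, such as those based on Alpine Linux, often ship only sh. If bash isn’t found, fall back to sh, or skip the interactive shell entirely and run a one-off command directly (here, nginx -t validates the NGINX configuration):

$ docker exec -it nginx sh
$ docker exec nginx nginx -t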

Key Points

Docker is the most popular containerization platform for developers. You can run containers and build images with the docker CLI, or opt for Docker Desktop if you prefer a graphical interface. Images created by Docker can be used in any other OCI-compatible container environment too, such as Podman or Kubernetes.
