
Docker Architecture Overview – Structure & Components

In this article, we’ll explore Docker’s internal architecture and how it works. You’ll learn the roles of different components and what happens when you execute Docker CLI commands. We’ll also explore the connections between key Docker concepts, such as containers, images, registries, and runtimes.

  1. What is Docker?
  2. Docker architecture
  3. Docker architecture components in-depth

What is Docker?

Docker is a containerization platform that provides a complete system for building and running software containers. Containers package applications and their dependencies as ephemeral units that behave similarly to virtual machines, but share your host’s operating system kernel. Running apps in containers makes them more portable by letting you deploy anywhere Docker is available.

Docker Architecture

Running a docker CLI command requires interactions between a few different components. Docker uses a client-server architecture, where the client (usually the CLI) sends requests to a separate server process that’s responsible for carrying out the requested actions.

Docker runs a daemon (dockerd) that exposes a REST API over a Unix socket or network interface. The CLI processes your commands, converts them into API requests, and waits for the daemon’s response. It’s the daemon that’s responsible for actually starting containers, building images, and handling the other Docker operations you invoke with the CLI.

Docker architecture diagram

Here’s a diagram showing the high-level interactions between the components:

[Diagram: high-level interactions between the Docker CLI, the Docker API, the dockerd daemon, and the containerd runtime]

The use of a daemon means the CLI’s footprint is kept small. It also makes it easier to integrate other apps with Docker, as you can call the daemon’s REST API from your own scripts.
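
For example, assuming your daemon listens on the default Unix socket, you can query the API directly with curl (these are standard Engine API endpoints, but paths and output will vary with your Docker version):

  # Ask the daemon for its version details over the default Unix socket
  curl --unix-socket /var/run/docker.sock http://localhost/version

  # List running containers (the same data that docker ps shows)
  curl --unix-socket /var/run/docker.sock http://localhost/containers/json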

The daemon is installed alongside the CLI as part of Docker Engine. Because all docker commands are handled by the daemon, its process needs to be running the entire time you use Docker. However, the daemon is automatically configured as a system service after installation, so you don’t usually need to manually start it.
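
On a systemd-based Linux host, you can check and control the daemon like any other service (the unit name may vary slightly between distributions):

  # Check whether the Docker daemon is running
  sudo systemctl status docker

  # Start it immediately and enable it at boot
  sudo systemctl enable --now docker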

Internally, the daemon relies on lower-level tools to control containers. The container runtime (containerd by default) is the most important component as it provides an interface to the Linux kernel features that enable containerization—we’ll explain what it does in more detail below.

How does Docker architecture work?

What happens when you install Docker Engine?

Docker uses a client-server architecture and depends on several distinct components to handle low-level container interactions. As a result, installing Docker Engine on your host actually delivers multiple software packages:

  1. The Docker Engine daemon, dockerd, to provide the API.
  2. The docker CLI that you use to interact with the daemon.
  3. The containerd runtime that manages containers at the host level.
  4. A system service that automatically starts the Docker daemon and your containers after host reboots.
  5. Prepopulated config files that ensure the Docker CLI can connect to your daemon instance.
  6. The docker compose tool that lets you build multi-container applications.
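
A quick way to confirm these components are in place after installation is to ask each one for its version; exact output depends on your installed release:

  docker version            # reports both the CLI (Client) and daemon (Server)
  docker compose version    # the Compose plugin
  containerd --version      # the container runtime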

If you use Docker Desktop, then the same components are used, but they’re all delivered inside a virtual machine. Desktop’s installer automatically creates the VM for you using the virtualization platform available on your host.

Docker Unix Socket vs. TLS vs. SSH

By default, the Docker CLI is configured to communicate with the daemon using a non-networked Unix socket (usually /var/run/docker.sock). This restricts access to processes that can read and write the socket file, which typically means only users on your host.

You can optionally expose the daemon on your network over TLS or SSH. These methods allow you to administer your Docker instance remotely, such as starting containers or building images from a different machine. TLS uses certificate files to authorize connections, whereas SSH relies on existing SSH keypairs, which are usually easier to distribute and configure.

You should stick to using the Unix socket if you don’t need remote access to your Docker daemon. Unnecessarily exposing Docker on your network is a potential security risk, as any compromise could let attackers interact with your Docker instance. If you must access Docker remotely, then SSH is generally the simpler option to get started with.
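
As a minimal sketch, assuming you already have SSH access to a remote host with Docker installed (user@remote-host is a placeholder), you can point your local CLI at its daemon in either of these ways:

  # One-off: run a single command against the remote daemon
  docker -H ssh://user@remote-host ps

  # Or target it for the rest of the shell session
  export DOCKER_HOST=ssh://user@remote-host
  docker ps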

How do you configure the Docker daemon?

The Docker daemon supports an extensive array of configuration options that customize its behavior and performance. These include settings for logging, metrics exposure, default networking rules, and how the API is exposed (Unix socket vs. network interface).

You can modify your Docker daemon’s config values in the following ways:

  • For Docker Engine — edit /etc/docker/daemon.json, then restart the Docker service using systemctl restart docker.
  • For Docker Desktop — head to the in-app settings page, select the “Docker Engine” tab, and make your config file changes in the interface that’s displayed.
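
As an illustration for Docker Engine, a minimal /etc/docker/daemon.json might enable log rotation and keep containers running across daemon restarts; these are documented options, but check the daemon documentation for your version before applying anything:

  {
    "log-driver": "json-file",
    "log-opts": {
      "max-size": "10m",
      "max-file": "3"
    },
    "live-restore": true
  }

After saving the file, restart the service with sudo systemctl restart docker for the changes to take effect.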

If you’re connecting to a remote Docker daemon instance, then these settings should be configured on that host. Changing the values for your local daemon installation won’t affect other environments you use the Docker CLI with.

Benefits of the Docker architecture

Docker’s client-server, daemon-based architecture places an API between docker commands and their effects. This can seem like it’s adding complexity, but it actually unlocks a few useful benefits for more powerful container workflows:

1. Manage multiple Docker hosts with your local CLI installation

You can add remote environments (such as your production servers) as contexts you can switch between when using your local Docker CLI installation.
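
A brief sketch, assuming a remote host reachable over SSH (the names here are placeholders):

  # Register a remote environment as a named context
  docker context create production --docker "host=ssh://deploy@prod.example.com"

  # Subsequent docker commands now run against the remote daemon
  docker context use production

  # Switch back to your local daemon
  docker context use default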

2. Build images remotely on a higher-powered machine

You can add remote contexts to run builds on higher-performance hardware or on hosts with a different processor architecture from yours, offloading long builds from your own workstation.
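
For example, with a context like the one created above, you can run a one-off build remotely without changing your default (the image tag is illustrative):

  # Build on the remote host's hardware; the build context is
  # uploaded from your local machine
  docker --context production build -t myapp:latest .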

3. Allow developers to connect to shared environments

Live, staging, and test environments are typically shared between several developers who need their own access to make deployments and monitor operations. Remote Docker daemon connections let everyone work in an environment without having to open up full host-level access.

4. Easily automate container interactions

Docker’s REST API makes it easy to script interactions with your containers. You don’t need to clumsily wrap Docker CLI commands or manually reimplement any of its functionality in order to automate your workflows.
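
As a small sketch of what this enables, the following snippet lists every container’s name and state straight from the API; it assumes the default socket path and that curl and jq are installed:

  # Container names include a leading slash in the API output
  curl -s --unix-socket /var/run/docker.sock \
    "http://localhost/containers/json?all=true" \
    | jq -r '.[] | "\(.Names[0]) \(.State)"'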

Of course, the drawbacks of the daemon model still exist: running the Docker API server means there’s always an extra process on your host that could fail independently of your CLI. If you’re averse to using daemon-based software in your environments, you might want to consider using an alternative Docker-compatible container platform instead—Red Hat’s Podman, for example, maintains close Docker feature parity without using a daemon.

Docker architecture components in-depth

The Docker internals discussed above are generally invisible during everyday use. However, you should be familiar with the following high-level components and how they affect your Docker experience. They’ll help you recognize core container concepts and the ways in which they relate to each other.

1. Docker daemon (Docker Engine)

The Docker daemon (dockerd) is the process that’s started by Docker Engine to facilitate all container interactions on your host. The daemon serves the Docker API and performs actions in response to incoming API requests, such as running a container, building an image, or creating a network.

The daemon needs to stay alive the entire time you’re using Docker on your host. Failures will prevent you from using docker commands, because the CLI won’t be able to communicate with your daemon instance.

2. Docker API

The Docker API is an HTTP-based RESTful service that’s exposed by the Docker daemon. Making requests to the API lets you invoke any available action to manage your containers, images, and other Docker resources. It can be reached locally through a Unix socket, or remotely over TLS or SSH.

You can integrate the API with your own tools to automate container workflows. The API is also critical to the standard Docker experience, as it’s how the docker CLI communicates with the Docker Engine daemon.

3. Docker CLI

The Docker CLI is the binary that handles the docker commands you run in your terminal. The CLI itself includes relatively little functionality: it converts your commands into Docker API requests and sends them to the configured daemon server, so it always works in tandem with the dockerd daemon.

4. Docker Desktop

Docker Desktop is an all-in-one alternative to Docker Engine that’s designed specifically for developer use. Docker Engine is a purely headless experience (meaning you interact with it using APIs and CLIs), so it’s convenient for server-based deployments. Docker Desktop adds a graphical interface to help devs visualize their container resources.

Downloading Docker Desktop for Windows, Mac, or Linux gives you a comprehensive container experience. Unlike Docker Engine, Desktop packages all your resources inside a VM, which adds overhead but can improve consistency across platforms. Desktop bundles the Docker Engine, CLI, GUI, and optional third-party extensions, in addition to security and analysis tools that aren’t included with a standalone Engine installation.

5. Docker containers

Containers are the fundamental workload units that you run using Docker. They’re isolated environments, created from a Docker image, that generally run a single long-lived server process. Containers are managed at the OS level by container runtimes.
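
For example, the basic container lifecycle looks like this (the image and name are illustrative):

  # Start a container in the background from the nginx image
  docker run -d --name web nginx:alpine

  # List running containers
  docker ps

  # Stop and remove the container
  docker stop web
  docker rm web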

6. Container runtimes

Container runtimes are what Docker uses to actually run your containers. They provide a more accessible interface to the Linux kernel features that underpin containerization (chroot, cgroups, namespaces, and other tools that together isolate processes and enable virtualized root filesystems). Docker uses containerd by default but is compatible with other OCI-compliant runtimes.
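
You rarely interact with the runtime directly, but you can observe containerd managing Docker’s workloads with its own ctr CLI; a sketch, assuming a standard Docker Engine setup where Docker’s containers live in containerd’s moby namespace:

  # List the containers Docker has asked containerd to manage
  sudo ctr --namespace moby containers list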

7. Docker images

Docker images are the filesystem templates that you use to start your containers. They include the operating system packages, source code, dependencies, and other resources necessary to run specific apps. Images are assembled from Dockerfiles, lists of instructions that create the filesystem by running commands and copying in files from your host.
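
As a minimal sketch, a Dockerfile for a small Python app might look like this (it assumes an app.py exists in the build context):

  FROM python:3.12-slim
  COPY app.py /app/app.py
  CMD ["python", "/app/app.py"]

Running docker build -t myapp:latest . in the same directory assembles the image, which you can then use to start containers.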

8. Image registries

Registries are an essential part of the Docker ecosystem. They store and distribute previously created images, providing package manager-like capabilities for content sharing.

The best-known registry is Docker Hub. This is the registry Docker interacts with by default when you reference an image that doesn’t already exist on your host. You can also use third-party registries or start your own private instance.
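
For example, pulls default to Docker Hub, while pushing elsewhere is just a matter of tagging the image with the target registry’s hostname (registry.example.com is a placeholder):

  # Pull from Docker Hub, the default registry
  docker pull nginx:alpine

  # Retag the image for a different registry, then push it there
  docker tag nginx:alpine registry.example.com/team/nginx:alpine
  docker push registry.example.com/team/nginx:alpine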

9. Image stores

Registries centralize image distribution, but they don’t control how images are stored on individual Docker hosts. Images you’ve pulled with Docker or used to start a container are added to your local image store.

Docker currently uses its own basic image store by default but is moving to use containerd for these capabilities too. This enables new image management features, including more efficient snapshotting and the storage of other types of software artifacts, such as SBOMs and provenance attestations.
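
Recent Docker Engine releases let you opt in to the containerd-backed store through a daemon.json feature flag; this is a documented option, but verify it’s supported by your Docker version (and merge it with any existing settings) before enabling it:

  {
    "features": {
      "containerd-snapshotter": true
    }
  }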

10. Docker networks

Docker networks are software-defined network interfaces that support a variety of networking modes but are most commonly configured as bridge or overlay devices. They provide isolated communication routes between your containers by automatically configuring appropriate iptables filtering rules on your host.
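
A short example: containers attached to the same user-defined bridge network can resolve each other by name (the names here are illustrative):

  # Create a user-defined bridge network
  docker network create app-net

  # Start a Redis server on the network, then reach it by name
  docker run -d --name db --network app-net redis:alpine
  docker run --rm --network app-net redis:alpine redis-cli -h db ping

The final command should print PONG, because Docker’s embedded DNS resolves the db container’s name on that network.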

11. Storage volumes

Docker volumes are units of persistent storage that allow you to store data independently of your containers. Containers are ephemeral, meaning the contents of their writable filesystem layer are lost when they’re removed. However, many real-world applications (including databases and file servers) are inherently stateful and require data persistence. Volumes enable this by mounting storage from your host, ensuring data remains accessible after individual containers are replaced.
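
For instance, a named volume lets a database outlive any individual container (the names and password are illustrative):

  # Create a named volume and mount it into a PostgreSQL container
  docker volume create pg-data
  docker run -d --name postgres \
    -e POSTGRES_PASSWORD=example \
    -v pg-data:/var/lib/postgresql/data \
    postgres:16

  # Removing the container leaves the volume (and the data) intact
  docker rm -f postgres
  docker volume ls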

Key points

This article has taken a look under Docker’s hood, revealing how the container software’s client-server architecture is implemented by the Docker Engine daemon and Docker CLI. We’ve covered how the CLI communicates with the daemon via a Unix socket, or remotely over TLS or SSH, a characteristic that lets you easily administer remote Docker instances.

Docker makes containers more accessible to all developers. Although its internals may seem complex, you shouldn’t need to deal with them on a regular basis. However, it’s still important to understand the architecture, as incorrect configuration can affect performance or cause security issues.

We also encourage you to explore how Spacelift offers full flexibility when it comes to customizing your workflow. You can bring your own Docker image and use it as a runner to speed up deployments that leverage third-party tools. Spacelift’s official runner image can be found here.
