Docker is the most popular containerization platform. It packages software and its dependencies into self-contained units that run in isolation from the other workloads on your host machine.
Docker’s isolation model can enhance the security of your containerized workloads. Separating applications into containers makes it harder for errant processes to interfere with each other. However, Docker can also introduce new security risks if you don’t properly harden your environment. So how do you ensure Docker security?
In this guide, we’ll share some best practices for improving Docker security. We’ve split the techniques into three main sections: Docker daemon security, image security, and container security. These cover the main ways in which Docker can pose a threat.
Docker’s architecture is daemon-based: the client CLI you interact with communicates with a separate background service to carry out actions such as building images and starting containers. It’s critical to protect the daemon because anyone with access to it can execute Docker commands on your host.
Docker daemon security best practices include:
The Docker daemon is normally exposed via a Unix socket at /var/run/docker.sock. It can optionally be configured to listen on a TCP socket as well, which allows remote connections to the Docker host from another machine.
This should be avoided because it presents an additional attack vector. Accidentally exposing the TCP socket on a public network would allow anyone to send commands to the Docker API without first gaining access to your host. Keep TCP disabled unless your use case demands remote access.
When there’s no alternative to using TCP, it’s essential to protect the socket with TLS. This ensures access is only granted to clients that present a certificate signed by your certificate authority.
TCP with TLS is still a potential risk, because any client holding a valid certificate can interact with Docker. Alternatively, you can use an SSH-based connection to communicate with the Docker daemon, which lets you reuse your existing SSH keys.
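As an illustration, here’s roughly what a TLS-protected TCP listener and an SSH-based connection could look like. The certificate file names, host address, and user are placeholders, and in practice you’d usually set these options in your daemon configuration file rather than on the command line:

# Daemon side: require clients to present a certificate signed by your CA
$ dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376

# Client side: connect over SSH instead of exposing TCP at all
$ docker -H ssh://user@docker-host.example.com ps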
Docker defaults to running both the daemon and your containers as root. This means that vulnerabilities in the daemon or a container could allow attackers to break out and run arbitrary commands on your host.
Rootless mode is an optional feature that allows you to start the Docker daemon without using root. It’s more complex to set up and has some limitations, but it provides a useful extra layer of protection for security-sensitive production environments.
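As a rough sketch, recent Docker releases ship a setup script for rootless mode; the exact steps depend on your distribution, so treat the official docs as the reference:

$ dockerd-rootless-setuptool.sh install
# The installer creates a "rootless" context that points the CLI at the rootless daemon
$ docker context use rootless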
One of the easiest ways to maintain Docker security is to stay updated with new releases. Docker regularly issues patches that fix newly found security problems. Running an old version makes it more likely that you’re missing protections for exploitable vulnerabilities.
Regularly apply any updates offered by your OS package manager to ensure you’re protected.
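For example, on a Debian or Ubuntu host that uses Docker’s official apt repository, an update could look like the following (package names vary between distributions):

$ sudo apt-get update
$ sudo apt-get install --only-upgrade docker-ce docker-ce-cli containerd.io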
Docker normally allows arbitrary communication between the containers running on your host. Each new container is automatically added to the docker0 bridge network, which allows it to discover and contact its peers.
Keeping inter-container communication (ICC) enabled is risky because it could permit a malicious process to launch an attack against neighboring containers. You should increase your security by launching the Docker daemon with ICC disabled (using the --icc=false flag), then permit communication between specific containers by manually creating networks.
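A minimal sketch of this approach, assuming two hypothetical containers (app and db) that need to reach each other while staying isolated from everything else; the image and network names are placeholders:

$ dockerd --icc=false
$ docker network create app-net
$ docker run -d --name db --network app-net example-db:latest
$ docker run -d --name app --network app-net example-app:latest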
Ensuring OS-level security systems are active helps defend against malicious activity originating inside containers and the Docker daemon. Docker supports policies for SELinux, Seccomp, and AppArmor; keeping them enabled ensures sane defaults are applied to your containers, including restrictions for dangerous system calls.
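These protections are applied with sensible defaults, but you can also supply your own profiles when starting a container. The profile path and name below are placeholders:

$ docker run --security-opt seccomp=/path/to/seccomp-profile.json example-image:latest
$ docker run --security-opt apparmor=my-custom-profile example-image:latest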
Docker security is only as good as the protection surrounding your host. You should fully harden your host environment by taking steps such as regularly updating your host’s OS and kernel, enabling firewalls and network isolation, and restricting direct host access to just the administrators who require it.
Neglecting basic security measures will undermine the strongest container protections.
User namespace remapping is a Docker feature that maps the UIDs used inside your containers onto a different, unprivileged range on your host. This helps to prevent privilege escalation attacks, where a process running in a container gains the same privileges that its UID has on your host.
User namespace remapping assigns the container a range of UIDs from 0 to 65535 that translate to unprivileged host users in a much higher range. To enable the feature, you must start the Docker daemon with a --userns-remap flag that specifies how the remapping should occur. Some container features aren’t compatible with remapping, but it’s worth enabling whenever possible.
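For instance, starting the daemon as follows enables remapping with Docker’s default behavior, which creates and uses a dockremap user and the subordinate UID/GID ranges defined in /etc/subuid and /etc/subgid:

$ dockerd --userns-remap=default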
Once you’ve tightened the security around your Docker daemon installation, it’s important to also review the images that you use. A compromised image can harbor security threats that form the basis of a successful attack.
Docker image security best practices include:
Only select trusted base images for the FROM instructions in your Dockerfiles. You can easily find these images by filtering on the “Docker Official Image” and “Verified Publisher” options on Docker Hub. An image that’s published by an unknown author or that has few downloads might not contain the content you expect.
It’s also advisable to use minimal images (such as Alpine-based variants) where possible. These will have smaller download sizes and should contain fewer OS packages, which reduces your attack surface.
Regularly rebuild your images from your Dockerfiles to ensure they include updated OS packages and dependencies. Built images are immutable, so package bug fixes and security patches released after your build won’t reach your running containers.
Periodically rebuilding your images and restarting your containers is the best way to prevent stale dependencies being used in production. You can automate the container replacement process by using a tool such as Watchtower.
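For example, a clean rebuild that also refreshes the base image could look like this (the image tag is a placeholder):

$ docker build --pull --no-cache -t example-image:latest .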
Scanning your built images for vulnerabilities is one of the most effective ways to inform yourself of problems. Scan tools are capable of identifying which packages you’re using, whether they contain any vulnerabilities, and how you can address the problem by upgrading or removing the package.
You can perform a scan by running the docker scout command (previously docker scan) or an external tool such as Anchore or Trivy. Scanning each image you build will reveal issues before the image is used by containers running in production. It’s a good idea to include these scans as jobs in your CI pipeline.
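As an illustration, a local scan might look like one of the following; the image name is a placeholder, and Trivy is installed separately:

$ docker scout cves example-image:latest
$ trivy image example-image:latest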
Before starting a container, you need to be sure that the image you’re using is authentic. An attacker could have uploaded a malicious replacement to your registry or intercepted the download to your host.
Docker Content Trust is a mechanism for signing and verifying images. Image creators can sign their images to prove that they authored them; consumers who pull those images can then verify the signatures before using the content.
Docker can be configured to prevent the use of unsigned or unverifiable images. This provides a safeguard against potentially tampered content.
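Content trust is toggled with an environment variable; once it’s set, pulls of unsigned or unverifiable images are rejected (the image name is a placeholder):

$ export DOCKER_CONTENT_TRUST=1
$ docker pull example-org/example-image:latest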
Linting your Dockerfiles before you build them is an easy way to spot common mistakes that could pose a security risk. Linters such as Hadolint check your Dockerfile instructions and flag any issues that contravene best practices.
Fixing detected problems before you build will help ensure your images are secure and reliable. This is another process that’s worth incorporating into your CI pipelines.
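For example, Hadolint can be run against a Dockerfile directly or via its own container image:

$ hadolint Dockerfile
$ docker run --rm -i hadolint/hadolint < Dockerfile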
The settings you apply to your Docker containers at runtime affect the security of your containerized applications, as well as your Docker host. Here are some techniques which help prevent containers from posing a threat.
Docker container security best practices include:
- Don’t expose unnecessary ports
- Don’t start containers in privileged mode
- Drop capabilities when you start containers
- Set up container resource quotas
- Ensure container processes run as a non-root user
- Prevent containers from escalating privileges
- Use read-only filesystem mode
- Use a dedicated secrets manager
Exposing container ports unnecessarily (using the -p or --publish flag for docker run) can increase your attack surface by allowing external processes to probe inside the container. Only ports that are actually needed by the containerized application (typically those listed in the Dockerfile’s EXPOSE instructions) should be opened.
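When a port does need to be published, you can reduce exposure further by binding it to a specific interface, such as loopback, so it isn’t reachable from other machines. The port numbers here are placeholders:

$ docker run -p 127.0.0.1:8080:80 example-image:latest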
Read more about exposing Docker ports.
Using privileged mode (--privileged) is a security risk that should be avoided unless you’re certain it’s required. Containers that run in privileged mode are granted all available Linux capabilities and have some cgroups restrictions lifted. This allows them to achieve almost anything that the host machine can.
Containerized apps very rarely require privileged mode. It’s typically only useful when you’re running an application that needs full access to your host or the ability to manage other Docker containers.
Even the default set of Linux capabilities granted by Docker can be too permissive for production use. They include the ability to change file UIDs and GIDs, kill processes, and bypass file read, write, and execute permission checks.
It’s good practice to drop capabilities that your container doesn’t need. The docker run command’s --cap-drop and --cap-add flags allow you to remove and grant them. The following example drops every capability, then adds back CHOWN to permit file ownership changes:
$ docker run --cap-drop=ALL --cap-add=CHOWN example-image:latest
Docker doesn’t automatically apply any resource constraints to your containers. Containerized processes are free to use unlimited CPU and memory, which could impact other applications on your host. Setting limits for these resources helps to defend against denial-of-service (DoS) attacks.
Limit a container’s memory allowance by including the -m or --memory flag with your docker run commands:
$ docker run -m=128m example-image:latest
To set a CPU allowance, supply the --cpus flag. You must specify the number of CPU cores you want to make available:
$ docker run --cpus=2 example-image:latest
More precise options are supported for both of these resource constraints.
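For instance, you may want to set a softer memory reservation alongside the hard limit, or pin the container to particular CPU cores; the values below are illustrative:

$ docker run -m=256m --memory-reservation=128m example-image:latest
$ docker run --cpuset-cpus=0,1 example-image:latest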
Containers default to running as root, but this can be changed by either including a USER instruction in your Dockerfile or setting the docker run command’s --user flag:
$ docker run --user=1000 example-image:latest
Either of these methods ensures your container will execute as a specific non-root user. This minimizes the risk of container breakout attacks by preventing the containerized process from running commands as root on your host.
Containers can usually escalate their privileges by executing setuid and setgid binaries. This is a security risk because the containerized process could use a setuid binary to effectively become root.
To prevent this, you should set the no-new-privileges security option when you start your containers:
$ docker run --security-opt=no-new-privileges:true example-image:latest
When this flag is used, setuid and setgid binaries no longer grant elevated privileges. This prevents the container from acquiring new privileges at runtime.
Few containerized applications need to write directly to their filesystem. Opting them into Docker’s read-only mode prevents filesystem modifications, except to designated volume mount locations. This will block an intruder from making malicious changes to the content within the container, such as by replacing binaries or configuration files.
$ docker run --read-only example-image:latest
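If the application still needs somewhere to write scratch data, you can combine read-only mode with a tmpfs mount or a volume instead of leaving the whole filesystem writable:

$ docker run --read-only --tmpfs /tmp example-image:latest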
Sensitive data required by your containers, such as API keys, tokens, and certificates, should be stored in a dedicated secrets management solution. This reduces the risk of accidental exposure that arises when environment variables or regular config files are used.
For the best protection, you should adapt your application so it reads secrets from a separate platform, such as a HashiCorp Vault deployment (which also integrates with Spacelift). This ensures your secrets are kept safe, independently of your source code, container deployments, and Docker host machine.
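As one illustration, Docker’s own secrets feature (available when running in Swarm mode) mounts sensitive values into containers as in-memory files rather than exposing them as environment variables; the secret, service, and image names below are placeholders:

$ echo "s3cr3t-value" | docker secret create api_token -
$ docker service create --name example-app --secret api_token example-image:latest

Inside the service’s containers, the value is then readable from /run/secrets/api_token instead of appearing in the process environment or docker inspect output.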
Taking steps to harden your Docker environment is critical to maintaining good security for your containers, applications, and host machine. In this guide, we’ve covered some key steps you can take to secure the Docker daemon, create safer images, and protect containers at runtime.
Relatively simple measures such as regularly updating Docker and using Docker secrets for sensitive values can significantly improve your overall security posture. Keep these tips in mind to safeguard your environment, then learn more about Docker by reading our other articles on the Spacelift blog.
We also encourage you to explore how easily Spacelift integrates with third-party tools. You can install and configure them before and after runner phases, or simply bring your own Docker image and use them directly.
The Most Flexible CI/CD Automation Tool
Spacelift is an alternative to using homegrown solutions on top of a generic CI. It helps overcome common state management issues and adds several must-have capabilities for infrastructure management.