Security in Docker

The #1 drawback today to using Docker containers is security. There are a variety of reasons for this concern: the technology is still new and not widely understood, the default configurations could use improvement, and some limitations come from the very nature of the technology. This post will attempt to give a quick 101 in Docker security and what you need to know going in.

Namespaces

This is one of the two primary forms of isolation in a Docker container. A container can only see processes within its own namespace, not processes belonging to another container or to the host system. This provides some general security isolation, but nowhere near what you are accustomed to with a Virtual Machine (VM).
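
A quick way to see this isolation in action is to compare the process list inside a container with the one on the host. This is a minimal sketch; the ubuntu image name is just an example and any image that ships with ps will do:

docker run --rm -t ubuntu ps aux
# Only the container's own processes appear; run ps aux on the host and you will see every process on the machine.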

Network

Each container gets its own network stack and does not get privileged access to the sockets or interfaces of another container. However, by default Docker connects every container on a host to the same virtual bridge interface (docker0). This means that the containers on the host act just like they are plugged into the same physical Ethernet switch. A malicious container can therefore attack its neighbours directly over this bridge, and that traffic never passes through an upstream firewall or any other security tools.
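
If you want to limit this exposure, one option on a 1.x-era daemon is to turn off inter-container communication and only allow explicitly linked containers to talk to each other. The sketch below assumes that approach; my-db-image and my-app-image are placeholder image names:

docker daemon --icc=false --iptables=true
# Containers can now only reach each other when explicitly linked:
docker run -d --name db my-db-image
docker run -d --link db:db my-app-image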

Control Groups

This is the second major form of isolation in Docker. Control groups (cgroups) are a mature Linux kernel feature (under development since 2006) and are very effective at putting limits on CPU, memory, and I/O. This prevents one container from mounting a denial-of-service attack on other containers by consuming all available resources.
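
In practice you apply these limits per container at run time. The values below are only illustrative, and my-docker-image is the same placeholder image name used later in this post:

docker run -d --memory=512m --memory-swap=512m --cpu-shares=512 my-docker-image
# Block I/O can be weighted similarly with --blkio-weight on recent releases.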

UID 0

The Docker daemon itself runs as root with full privileges on the host. Because of this, it is critical that only trusted employees have access to Docker. It is also important to ensure that you do not run any extraneous services on the server hosting Docker, to limit any ability to exploit these permissions. The containers themselves should run as non-privileged users whenever possible to apply least privilege (this can be done by starting the container with a -u or --user option set to a non-privileged user). If you take just this one step, you will find that you have reduced the risk significantly. In addition, you should whitelist only the capabilities needed (an example list of capabilities is provided here: https://github.com/docker/docker/blob/master/daemon/execdriver/native/template/default_template.go).
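
Putting those two recommendations together looks roughly like the sketch below. The user name appuser is a placeholder for a non-privileged account that exists inside the image, and the capability shown is just an example of one you might choose to keep:

docker run -d -u appuser --cap-drop=ALL --cap-add=NET_BIND_SERVICE my-docker-image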

Harden the Kernel

If capabilities and running as a non-root user still leave too much risk, you can also harden the core kernel itself. A few good examples are below:

GRSEC and PaX – add safety checks and randomize memory layout
AppArmor – provides an extra safety net on Ubuntu (see the example after this list)
SELinux – developed by the NSA and provides a safety net for RHEL
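
As a sketch of what applying one of these looks like, you can attach a custom AppArmor profile (or an SELinux label) to a container with the --security-opt flag; my_profile is a placeholder for a profile you have already loaded on the host:

docker run --security-opt apparmor:my_profile -t my-docker-image /bin/sh
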
There is a lot of work at Docker and other complementary vendors to make Docker and the core OS more secure for containers. However, it is still an emerging space and not nearly as mature or secure as VMs. In fact, due to the architecture, it is likely that containers will never be as secure as a VM.

The Really Nasty and Ugly Stuff

It is really important to understand that anyone who can interact with Docker and issue commands should be considered root and trusted. As an example (credit to @reventlov), the command below mounts the working directory into the Docker container and writes a copy of the shell back outside, onto the host, with super-user rights:

docker run -v $PWD:/stuff -t my-docker-image /bin/sh -c 'cp /bin/sh /stuff && chown root.root /stuff/sh && chmod a+s /stuff/sh'

From there, a simple command and you have root on the host and own the box:

test-user@vagrant:~/docker-test$ ./sh

You will now have root access. It is important to know that Docker considers this by design. What that means is that any developer you give access to push code to Docker is, in effect, a sysadmin on the box. I think it is extremely unfortunate that a developer cannot make a direct push without being an admin. In my mind, that limits a lot of the upside in the developer workflow and forces container pushes/management onto the infrastructure team (which I think is crazy, since the whole idea of containers should be to limit the dependence on infrastructure!). NOTE: SELinux or AppArmor will prevent this particular vulnerability, as they render -v useless by default.
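
Under SELinux, for example, the bind-mounted directory in the attack above is not writable from inside the container unless you explicitly relabel it; the :Z volume suffix shown in this sketch is what that opt-in looks like, which is why an un-relabelled -v mount is effectively useless to an attacker:

docker run -v $PWD:/stuff:Z -t my-docker-image /bin/sh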

Recent Good News

Docker 1.8 fixed another huge security hole in the platform. Specifically, it made a great leap forward with Docker Content Trust and Notary. This is essentially a PKI for signing images and managing the hashes to ensure you are pulling from a trusted source. The system also implements a timestamp feature that prevents it from being vulnerable to replay attacks. This solution is production ready and fixes a huge hole in the platform. If you are using Docker in production, I highly recommend you deploy DCT immediately!
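
Enabling DCT on the client side is a one-line change. This is a minimal sketch; the image name is a placeholder, and the check only succeeds for repositories that have actually been signed:

export DOCKER_CONTENT_TRUST=1
docker pull my-docker-image:latest
# The pull (or push) is refused if no trusted, signed metadata exists for the tag.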

Conclusion

Like any new technology breakthrough, Docker has its pros and cons. The company CISO will inevitably have to accept new and different risks to take advantage of the technology. However, with appropriate hardening and diligence, the risk can be mitigated to a large extent and the solution can be suitable for most environments.
