Modern development increasingly involves making use of third parties. Whether it's programming libraries from repositories such as npm or GitHub, using platforms like Azure and AWS, or just copying a code sample from Stack Overflow, it's fair to say that most code used in an application will be written by someone other than the main developer.
Docker adds to the third parties that developers can make use of as part of their process. Specifically, Docker Hub provides a great way to quickly get up and running with a wide variety of applications.
With the use of third parties comes the inevitable question, "Do I trust this source?"
What sounds like a fairly straightforward question is actually pretty difficult to answer. What do we mean by "trust," for starters? With software repositories like Docker Hub, we want to be sure that the software provided by the author is the same as what we get from the repository, so we can trust that it hasn't been inadvertently or maliciously modified while it was stored on the repo.
For application repositories, the main way to address the issue of trust is to consider allowing developers to sign the content they host on the system. Properly implemented digital signatures can help assure users that the content they're using is the same content that the original developer created and essentially removes the repository from the trust picture.
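As a minimal sketch of that idea, here is the sign-and-verify flow using openssl; the file names and the use of openssl are illustrative assumptions, not anything from Docker:

```shell
# Developer side: generate a keypair and sign an artifact.
# (File names here are throwaway examples.)
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out dev_key.pem
openssl pkey -in dev_key.pem -pubout -out dev_pub.pem
echo "image contents" > artifact.bin
openssl dgst -sha256 -sign dev_key.pem -out artifact.sig artifact.bin

# User side: verify with the developer's public key. A repository
# sitting between the two parties cannot alter artifact.bin without
# breaking the signature, which removes it from the trust picture.
openssl dgst -sha256 -verify dev_pub.pem -signature artifact.sig artifact.bin
```

The verify step succeeds only if the artifact is byte-for-byte what the developer signed; any modification in transit or at rest makes it fail.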
So let’s talk about Docker’s implementation of this principle: Docker Content Trust, a feature introduced in Docker 1.8. With it, Docker has implemented a number of security controls that let developers confirm that the images they're using haven't been modified by Docker Hub, and also that they're getting the latest available version.
In implementing content trust, Docker has taken ideas from The Update Framework, which is an open specification aimed at addressing the risks of a code repository compromise. There has been work done on a number of implementations for things like Python library packaging, but with Content Trust, Docker has perhaps the most mainstream implementation to date.
Implementing Content Trust
So this is all great stuff in theory, but what does it actually mean, practically speaking? How is it implemented, and what does it mean for users’ Docker image management lifecycles?
The first thing to note is that implementing Content Trust limits how you can deploy images to Docker Hub. The commonly used auto-build approach won't work with Content Trust, because with this method it's not the owner of the repository who creates the images, it's the Docker Hub itself. Obviously, if the Hub does the build, it would be hard to prevent the Hub from modifying it!
So if you're planning to implement Content Trust, you'll need to take the approach of creating an image and then pushing that from your systems to the Docker Hub.
Perhaps the best way of explaining Content Trust, as with most new concepts, is by walking through a worked example. If you want to try this out, you'll need an account on Docker Hub and an up-to-date (1.8+) installation of Docker set up to log in to it.
To keep things simple for this example, I'm going to work on a basic Dockerfile with the default name in the directory I'm in.
First, let’s enable Content Trust, as it isn't enabled by default. To do this, run:

export DOCKER_CONTENT_TRUST=1
This will only set it for the current bash session. If you want it to be set persistently, ensure that the environment variable is set for the user pushing to the repository on login.
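For example, one way to make the setting persistent, assuming the pushing user logs in with bash (adjust the profile file for other shells):

```shell
# Append the Content Trust setting to the pushing user's bash profile
# so it takes effect on every login, not just the current session.
echo 'export DOCKER_CONTENT_TRUST=1' >> ~/.bashrc
```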
Next, we'll want to build the image and add a tag to it, so we can push it up to the Hub. Here we're specifying the name of the user, the repository, and the tag we want to set, and building from the default-named Dockerfile in the current directory.
docker build -t raesenecttest/sign_test:latest .
So now that we've got our image ready to go, the next step is to push the image to the repository. It's here that things change from the usual workflow.
Now we push the image to Docker Hub using the following standard command:
docker push raesenecttest/sign_test:latest
This will start off looking like any other push, but then you'll get output similar to the following. You'll be asked to create passphrases for an "offline" key and a "tagging" key.
Signing and pushing trust metadata
You are about to create a new root signing key passphrase. This passphrase will
be used to protect the most sensitive key in your signing system. Please choose
a long, complex passphrase and be careful to keep the password and the key file
itself secure and backed up. It is highly recommended that you use a password
manager to generate the passphrase and keep it safe. There will be no way to
recover this key. You can find the key in your config directory.
Enter passphrase for new offline key with id 2772ef5:
Repeat passphrase for new offline key with id 2772ef5:
Enter passphrase for new tagging key with id docker.io/raesenecttest/sign_test (7ca517e):
Repeat passphrase for new tagging key with id docker.io/raesenecttest/sign_test (7ca517e):
Finished initializing "docker.io/raesenecttest/sign_test"
Obviously, creating a strong, unique passphrase is a good idea here; access to the keys would allow attackers to sign images as yourself, which largely defeats the purpose of Content Trust. This does introduce a bit of a complication to some deployment processes. For example, if you’re automatically creating new builds, you need to be able to supply the passphrase for the key to sign the image. Docker does provide a mechanism to do this using environment variables, but you'll want to be very sure of the security of your build host when doing this. Obviously, anyone who can read the environment variables that are used for this will then have your passphrases!
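As a sketch of that mechanism, a build job could export Docker's passphrase environment variables before pushing. The placeholder values below are illustrative; in practice, inject the real secrets from your CI system's secret store rather than hardcoding them:

```shell
# Enable Content Trust and supply the key passphrases non-interactively.
# The values here are placeholders, not real secrets.
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_ROOT_PASSPHRASE='example-root-passphrase'
export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE='example-tagging-passphrase'

# The push is then signed without prompting for passphrases:
# docker push raesenecttest/sign_test:latest
```

Remember the caveat above: anyone who can read these variables on the build host can sign images as you.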
The keys involved here are a very important part of the Content Trust process, so at this point it's worth discussing key management.
One of the most difficult parts of operational cryptography is managing keys well. Loss of keys can have catastrophic effects on a cryptographic system, both by undermining its security benefits and by rendering it inoperable, leading to a lot of remedial work to fix it.
For the Docker Content Trust system, the two keys that have been generated need to be treated somewhat differently.
The offline key is only needed when creating new repositories for your account and, as the name suggests, should be kept offline when not in use. The reason for this, from a security perspective, is that even with a decent passphrase, you want to reduce the risk of the key being compromised. With access to the key, it may be possible to work out the passphrase (either by keylogging or brute-force attacks), which would allow an attacker to sign images as yourself.
The key can be found in ~/.docker/trust/private/root_keys. To protect it, something like an encrypted USB key (or two, depending on how important it will be) could be a good idea.
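As an illustration, here is one way to produce an encrypted backup archive of the key directory before copying it to offline storage. The stand-in directory and the choice of openssl for symmetric encryption are assumptions for this sketch; on a real system you'd point KEYDIR at ~/.docker/trust/private/root_keys and use a strong passphrase:

```shell
# Stand-in for the real key directory (~/.docker/trust/private/root_keys);
# created here only so the sketch is self-contained.
KEYDIR=$(mktemp -d)/root_keys
mkdir -p "$KEYDIR"
echo "demo-key-material" > "$KEYDIR/key.pem"

# Archive the directory and symmetrically encrypt the archive; the
# result can then be copied to an encrypted USB stick (or two) and
# kept offline.
tar czf - -C "$(dirname "$KEYDIR")" root_keys \
  | openssl enc -aes-256-cbc -pbkdf2 -pass pass:demo-passphrase \
      -out /tmp/root_keys_backup.tar.gz.enc
```

Decrypting with the same passphrase and un-tarring restores the key directory when it's needed again.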
The other key that was created is the tagging key. It’s stored in:
This key should be backed up too, but it's more likely to be kept online. You'll need it more frequently than the offline key, and also it's easier to recover from the loss of the key.
So now that we've got our signed image, what does this actually mean? Well, users of the image can enable Docker Content Trust themselves. They’ll then have assurance that when they download the image, it hasn't been tampered with since being pushed by the developer, and that it's a fresh image for that tag.
Using Content Trust
At the moment, however, enabling Content Trust could be a bit of a frustrating experience for users, as it helpfully blocks you from pulling any tag that isn't signed. If you've enabled Content Trust, and you try to pull a repository that isn't signed, you'll get the following message, and you won't get your image:
Using default tag: latest
no trust data available
Hopefully, this will become less of a problem as more repositories enable Content Trust. However, if you want to run with it enabled and still use unsigned tags, you can pass the --disable-content-trust switch to individual docker commands to carry out those operations without Content Trust.
Overall, Docker Content Trust is a really useful feature if you're looking to build trusted images based on Docker. It addresses a key security question that can overshadow open source software repositories. Hopefully, it'll see a good uptake amongst users of Docker Hub.