What's New in Docker: Swarm Mode, Built-in Orchestration, Services, Healthchecks, .dab Files, Constraints

Written by: Manuel Weiss

UPDATE: On January 1st, 2017, we rebranded our hosted CI Platform for Docker from “Jet” to what is now known as “Codeship Pro”. Please be aware that the name “Jet” is only being used for our local development CLI tool. The Jet CLI is used to locally debug and test builds for Codeship Pro, as well as to assist with several important tasks like encrypting secure credentials.

To kick off the first day of DockerCon in Seattle, Mike Goelzer and Andrea Luzzardi spoke about what's new with Docker in 2016. Goelzer is the open source product management lead for Docker’s Core Runtime, and Luzzardi is a Software Engineer at Docker and was part of the original team that built the project. The biggest announcement was definitely the release of Docker 1.12.

Docker 1.12: What's new?

I took a few notes from my seat in the audience, and the following is a summary of the most interesting takeaways from this release.

Swarm mode orchestration

With Docker Swarm, you can create a swarm: a self-organizing and self-healing group of engines. Doing this is very simple:

docker swarm init
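As a sketch, growing the swarm beyond a single engine only takes one more command per machine (the manager address below is a placeholder, and the exact join flags varied slightly across the 1.12 release candidates):

```shell
# On the first machine: initialize the swarm
# (this node becomes a manager)
docker swarm init

# On each additional machine: join the swarm as a worker
# (192.168.0.10 is a placeholder for the manager's address)
docker swarm join 192.168.0.10:2377
```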

Creating and scaling swarms

You might be familiar with running a single container with docker run. You can now also start a replicated, load-balanced process distributed across a swarm of engines with docker service:

docker service create --name frontend --replicas 5 -p 80:80/tcp nginx:latest

You can scale this up to as many instances as you want, or as many as the nodes in your swarm can handle.

docker service scale frontend=100
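Once a service is scaled out, you can inspect what the swarm is doing with it. A quick sketch (the service name matches the example above):

```shell
# List all services in the swarm and their replica counts
docker service ls

# Show the individual tasks of the frontend service and
# which node each one is running on
docker service ps frontend
```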

If you want to create a typical web application, this is probably going to be a good starting setup for you:

docker network create -d overlay mynet

docker service create --name frontend --replicas 5 -p 80:80/tcp \
    --network mynet mywebapp

docker service create --name redis --network mynet redis:latest


Swarm mode-enabled engines are self-organizing and self-healing, meaning that they are aware of the application you defined and will continuously check and reconcile the environment when things go awry. For example, if you unplug one of the machines running an NGINX instance, a new container will come up on another node.

You can specify what it means for your container to be “healthy” and have Docker verify it with HEALTHCHECK. If the check fails, the container's state is set to UNHEALTHY, and the routing mesh manager will take over to handle the problem for you. You can find out more about the HEALTHCHECK feature here.
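As a sketch, a health check is declared in the Dockerfile. This hypothetical example probes a web server every 30 seconds (the image and probe command are assumptions, not from the talk):

```dockerfile
FROM nginx:latest

# Mark the container UNHEALTHY if the web server stops
# answering on port 80 (checked every 30s, 3s timeout)
HEALTHCHECK --interval=30s --timeout=3s \
  CMD curl -f http://localhost/ || exit 1
```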

The Routing Mesh

The Routing Mesh Manager manages workers. These workers hold services, which can be instantiated by deploying .dab files (more on services and .dab files later). If one of your services goes down, the routing manager will automatically reassign it to another worker. By combining HEALTHCHECKs with the routing mesh, you can set up a self-organizing and self-healing container architecture.


Security out of the box

According to Goelzer, one of Docker's biggest ambitions is to provide security out of the box. A core principle for Docker 1.12 is creating a zero-configuration, secure-by-default, out-of-the-box experience.

One of the biggest barriers to TLS adoption has always been how hard it is to create, configure, and then maintain the Public Key Infrastructure (PKI). In Docker 1.12, everything gets set up and configured for you. Docker even automates certificate rotation now.


You can now have three different instances of Redis that replicate data amongst each other. Right now, Docker only supports container tasks for services, but the team is thinking about supporting unikernel tasks, or any other type of long-running process, in the future.

Stacks and .dab files

Services are grouped into stacks. A stack holds services, which hold tasks, which run inside containers. You can group several services together. Let's say you run a Frontend service, a Redis service, and a Report service; you can now bundle all of them together into a stack (which represents your entire application).

Docker released a new file format for this: the '.dab' file (Distributed Application Bundle). These files are multi-service images that instantiate stacks. You deploy .dab files to bring a stack into existence. Goelzer specifically mentioned that this is very much a work-in-progress, and it's still in experimental mode. They want to have GA by mid-to-late July.
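As a rough sketch of the workflow demoed around the 1.12 timeframe (both commands were experimental then, and the file and stack names here are placeholders):

```shell
# Generate a .dab bundle from an existing docker-compose.yml
# (experimental at the time of Docker 1.12)
docker-compose bundle -o myapp.dab

# Deploy the bundle as a stack named "myapp";
# this instantiates all of the stack's services on the swarm
docker deploy myapp
```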

Global services

You can now distribute one copy of a container to every node of a cluster. You do this by creating the service with --mode=global.
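A minimal sketch of a global service, the classic use case being a per-node monitoring or logging agent (the service and image names here are placeholders):

```shell
# Run exactly one instance of an agent on every node in the swarm;
# nodes that join later automatically get a copy too
docker service create --mode=global --name agent myagent:latest
```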


Constraints

With constraints, you can run certain containers only on certain nodes. You do this by setting up engine labels. Labels are not new to Docker, but they are now used to express scheduling constraints. You can define a label with:

--label com.example.storage="ssd"

Constraints are very powerful. For example, think about scheduling a workload that will only run on machines with an SSD drive.

--constraint com.example.storage="ssd"
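Put together, the full flow looks roughly like this in the 1.12 CLI (a sketch; the label and service names are the same illustrative ones used above):

```shell
# Start the engine on the SSD-equipped node with a label
# (normally set via the daemon's startup flags or config file)
dockerd --label com.example.storage=ssd

# Schedule the service only on engines carrying that label
docker service create --name db \
  --constraint 'engine.labels.com.example.storage == ssd' \
  redis:latest
```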

More about Docker 1.12

To learn more, check out the full release notes here and read Docker's official blog post about Docker 1.12.

There's a lot to be excited about, and we'll definitely be using some of these new features here at Codeship to improve our CI Platform for Docker, Codeship Jet.

Are you currently at DockerCon? I'd love to meet you in person. Come join us at our booth, S13, and get a shirt, stickers, and a free demo of Codeship Jet!

Tomorrow, I'll be live-tweeting and posting a summary of Michele Titolo's talk, "Making Friendly Microservices."
