Using Honcho to Create a Multi-Process Docker Container

Written by: Ben Cane

A common misconception is that Docker is only for creating single-process or single-service containers. While it's true that the Dockerfile and docker run command options are designed for running a single process, that doesn't mean that Docker itself doesn't allow for a multi-process Docker container.

In fact, Docker's documentation has a very useful tutorial on how to run multi-process containers using Supervisor to manage the processes within the container.

Why I Don't Use Supervisor for Docker Containers

While Supervisor is a great tool (I'm a big fan), I don't personally like using it for multi-process Docker containers. This is due to how Supervisor handles process failures.

Supervisor is an application designed to start processes and keep those processes running if they fail. This feature can be very useful in some cases if configured appropriately. If left to its default configuration however, you can easily find yourself with a container that is not providing the service it is supposed to provide, and you may not even know that service is down.

By default, Supervisor will auto-restart a process that fails unexpectedly. If a managed process fails to start three times in a row, or exits in a way Supervisor is not configured to handle, supervisord will stop restarting it. The problem is that the supervisord process itself does not exit; it simply logs the event.

This means that the container will still be running, but the important service within the container is dead.
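
To make that behavior concrete, here is a hedged illustration of a supervisord program section with the relevant defaults spelled out (this configuration is not used anywhere in this article; it only shows which settings drive the behavior described above):

[program:redis]
command=/usr/local/bin/redis-server
; Defaults made explicit: restart only on unexpected exits,
; and give up after three failed start attempts.
autorestart=unexpected
startretries=3

Even after giving up on the program, supervisord itself keeps running, so the container stays up with nothing useful inside it.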

Using Honcho to Create a Multi-Process Container

Honcho is a tool for managing Procfile-based applications. A Procfile is a file that specifies which commands should be executed to start an application via tools like Honcho.

We can use Honcho to start and manage all of the processes defined within a Procfile. The nice thing about Honcho is if any one of these processes exit abnormally, by default the whole Honcho process and sub-processes are stopped. This causes the Docker container to exit, which is the desired effect for this type of scenario.
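
As a quick illustration (a throwaway Procfile, unrelated to the container we will build below), consider two processes where one exits after a few seconds:

app: tail -f /dev/null
task: sleep 5

Running honcho start against this Procfile launches both processes; five seconds later the sleep command exits (cleanly, even), Honcho sends SIGTERM to the remaining process, and Honcho itself exits.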

In this article, we will use Honcho to create a custom multi-process Docker container. This container will be used to host a Redis service that supports TLS connections.

Redis-tls Docker container

Redis is a highly popular in-memory datastore. It does not, however, currently support SSL or TLS. To add TLS support to Redis, we will need to run another application called stunnel. The stunnel service is an SSL/TLS proxy that can be used as a reverse proxy to perform TLS offloading for services that do not natively support TLS.

While it is possible to deploy stunnel as a stand-alone container and have it linked to a Redis container, for this article we will be combining these two processes into a single container.

The redis-tls Docker container we will be launching today will run two processes: the first being stunnel and the second being redis-server. As Redis traffic arrives at the container, it will first pass through stunnel, which will perform all of the TLS communication. From there, stunnel will forward the unencrypted traffic to the redis-server instance.

This allows for communications outside of the container to be encrypted, with non-encrypted traffic being contained within the container.

Since a redis-tls container image does not exist today, we will also be creating our own custom Docker image as part of this article. To start this process, we will first need to define a Dockerfile.

Creating a custom Docker image from a Redis base

The first instruction in any Dockerfile is usually FROM. This instruction is used to define what base image the Docker container should be created from.

Since we will be creating a Redis container with TLS support, we can base our container on the standard redis Docker image by specifying redis in the FROM instruction:

FROM redis

A benefit of basing our image on the official redis image is that our image will inherit all of the latest features of the official redis image when updates occur. This keeps the redis-server installation up to date with the latest security and bug fixes.
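
The flip side is that an unpinned base image can change between builds. If repeatable builds matter more than automatically picking up the latest Redis release, the FROM instruction also accepts a specific tag; for example (the tag here is purely illustrative):

FROM redis:3.2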

Installing stunnel and pip

With the redis image as our base, we don't have to go through the effort of installing Redis. We will, however, need to install the stunnel package. We'll also need the python-pip package, which provides the pip command we'll use to install Honcho within the Dockerfile.

To install these packages, we will add a RUN instruction calling the Apt package manager.

FROM redis
RUN apt-get update --fix-missing && \
    apt-get install -y stunnel python-pip && \
    rm -rf /var/lib/apt/lists/*

The above adds a single RUN instruction that executes three commands during the Docker build process:

  • First command: apt-get with the update parameter. This updates the Apt repository cache within the container.

  • Second command: apt-get again, this time with the install parameter. This command installs the stunnel and python-pip packages.

  • Third command: rm -rf /var/lib/apt/lists/*, which clears Apt's package cache. This is useful for keeping our Docker container small, as the apt-get update command creates quite a bit of cached data (see the note just after this list).
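
Chaining these commands into a single RUN instruction matters because each RUN produces one image layer; if the cleanup ran in a separate RUN, the cached package lists would still be baked into the earlier layer. Once the image is built (we'll get there shortly), you can see how much each layer adds with docker history:

$ docker history redis-tls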

Installing Honcho

After the installation of packages with Apt, the next build step will be to install Honcho. To do this, we'll use the RUN instruction again but this time to call the pip command.

FROM redis
RUN apt-get update --fix-missing && \
    apt-get install -y stunnel python-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip install honcho

After this build step, we will have completed all of the installation steps. Next, we'll focus on configuration of both stunnel and Honcho.

Configuring stunnel

With stunnel, some configuration is needed before it can provide the TLS reverse proxy functionality. Luckily, the stunnel configuration is fairly straightforward.

To configure stunnel, we need to specify that stunnel runs in the foreground, which port to accept connections on (6379, the default Redis port), where to forward those connections (6380, a port unique to this container), and which certificate and key to use for the TLS encryption.

We will add all of these configurations into the stunnel.conf file within our local build directory.

foreground = yes
debug = 7
[redis]
accept = 0.0.0.0:6379
connect = localhost:6380
cert = /certs/cert.pem
key = /certs/key.pem

Once the stunnel.conf file is created, we need to tell Docker to add this file to the container during the build. We can do this by using the ADD instruction within the Dockerfile.

FROM redis
RUN apt-get update --fix-missing && \
    apt-get install -y stunnel python-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip install honcho
ADD stunnel.conf /stunnel.conf

During the Docker build process, Docker will place the stunnel.conf file from the build directory into the / directory within the container. One important item to remember is that when we start the stunnel process, we will need to specify the location of this configuration file. This configuration will go into the Procfile.
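
As an aside, for copying plain local files like this, Docker's COPY instruction does the same job and is generally recommended over ADD (ADD has extra behaviors such as unpacking local tar archives and fetching URLs). Either works here; the COPY equivalent would simply be:

COPY stunnel.conf /stunnel.conf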

Creating a Procfile

As described earlier in this article, Procfiles are used by tools like Honcho to start up applications. As with the stunnel.conf file, we will need to first create a Procfile within our local build directory.

stunnel: /usr/bin/stunnel4 /stunnel.conf
redis: /usr/local/bin/redis-server /etc/redis/redis.conf

The above is the contents of our Procfile; the format is quite simple. The first element in a line is the name of the process, and the second (after the :) is the command to run. One of the useful things about Procfile tools like Honcho is that you can control and launch processes by name. For example, if we wished to start just the Redis process, we could do so by executing honcho start redis.
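
One detail worth calling out: the redis line above points redis-server at /etc/redis/redis.conf, and our stunnel configuration forwards decrypted traffic to port 6380, so the container needs a Redis configuration that listens on 6380 rather than the default 6379 (which stunnel now occupies). That file isn't shown in this walkthrough; assuming it sits in the local build directory alongside the other files, a minimal sketch might look like this:

# Minimal redis.conf sketch (assumed; not part of the Dockerfile shown in this article)
port 6380
bind 127.0.0.1

It would also need its own ADD instruction (for example, ADD redis.conf /etc/redis/redis.conf) so that it ends up at the path referenced in the Procfile.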

With the Procfile defined, we once again need to add it to the container using the ADD instruction within the Dockerfile.

FROM redis
RUN apt-get update --fix-missing && \
    apt-get install -y stunnel python-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip install honcho
ADD stunnel.conf /stunnel.conf
ADD Procfile /Procfile

With the stunnel.conf and Procfile defined and added to the Dockerfile build steps, we can now move on to the Dockerfile instructions for starting our applications.

Starting Honcho

To execute commands within our Dockerfile, we previously used the RUN instruction. RUN instructions are only executed during the image build process. To specify how to start our application when the container runs, we will use the CMD instruction.

FROM redis
RUN apt-get update --fix-missing && \
    apt-get install -y stunnel python-pip && \
    rm -rf /var/lib/apt/lists/*
RUN pip install honcho
ADD stunnel.conf /stunnel.conf
ADD Procfile /Procfile
WORKDIR /
CMD honcho start

The command to start our application is simply honcho start. This tells Honcho to read through the Procfile and start all processes defined within it. You may notice another Dockerfile instruction above as well: WORKDIR. The WORKDIR instruction defines the working directory in which the command specified by CMD is executed. Since Honcho looks for the Procfile in the current working directory by default, and our Procfile is in /, the working directory should also be /.
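
One optional refinement (nothing else in this article depends on it): CMD honcho start is the shell form, which runs the command via /bin/sh -c. The exec form runs Honcho directly as the container's main process, which makes it more likely that signals such as the SIGTERM sent by docker stop reach Honcho and, through it, the managed processes:

CMD ["honcho", "start"]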

Building the Container

At this point, our Dockerfile is defined, and configuration files have been created. Our next step is to build the container using the docker build command.

$ docker build -t redis-tls .
Successfully built a7bbe84cb52b

Once the image build is complete, we can go ahead and start the container.

Starting the container

To start the container, we will use docker run just as we would for any other Docker container. However, with this specific container, there are a few other options we need to pass.

$ docker run -d -p 6379:6379 -v /path/to/certs:/certs --name redis-tls redis-tls

When executing the docker run command, we passed the -d flag, which starts the container in "detached" mode, sending the container into the background.

We also passed the -p flag with the options of 6379:6379. This option sets up port forwarding from the Docker host port 6379 to port 6379 within the container. This will be needed to connect to the stunnel and Redis services.

We also passed the -v flag, which sets up a Docker volume. By passing the argument /path/to/certs:/certs, we are mapping the host directory /path/to/certs to /certs within the container. This allows the cert.pem and key.pem files to be created on the host and referenced from within the container.
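
Generating those certificates is outside the scope of this article, but for testing purposes a self-signed pair works fine. A hedged example using openssl (the subject name is just a placeholder) that produces the cert.pem and key.pem files our stunnel configuration expects:

$ mkdir -p /path/to/certs
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=redis.example.com" \
    -keyout /path/to/certs/key.pem \
    -out /path/to/certs/cert.pem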

Validating That Everything Is Running

To validate whether or not the processes have started appropriately, we can use the docker logs command.

$ docker logs redis-tls
23:44:59 redis.1   | 13:M 30 Jun 23:44:59.544 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
23:44:59 redis.1   | 13:M 30 Jun 23:44:59.544 * The server is now ready to accept connections on port 6380
23:44:59 system    | stunnel.1 started (pid=12)
23:44:59 stunnel.1 | 2016.06.30 23:44:59 LOG5[17]: Configuration successful
23:44:59 stunnel.1 | 2016.06.30 23:44:59 LOG7[17]: Listening file descriptor created (FD=6)
23:44:59 stunnel.1 | 2016.06.30 23:44:59 LOG7[17]: Service [redis] (FD=6) bound to 0.0.0.0:6379

From the above, we can see that both the redis and stunnel processes are running, but what happens if one of the processes were to stop?

$ docker logs redis-tls
20:39:15 system    | redis.1 stopped (rc=0)
20:39:15 system    | sending SIGTERM to stunnel.1 (pid 12)

If one process stops, Honcho will send a signal to all other processes to stop them as well. This means that we can have a multi-process container stop in the same fashion as a single-process container during failures.
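
It is also worth confirming that the TLS side of the container actually works. Since stunnel terminates TLS on port 6379, a quick way to test from the Docker host is openssl's s_client, which opens a TLS-wrapped session you can type Redis commands into (a simple smoke test; exact output will vary):

$ openssl s_client -connect localhost:6379 -quiet

Typing ping into the resulting session should come back with +PONG, confirming that stunnel is decrypting the traffic and redis-server is answering on the other side.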

Summary

In today's article, we used Honcho to create a custom multi-process Docker container, built a Redis container with TLS support, and learned a little about sharing directories from a Docker host to a Docker container.

For those interested in using the redis-tls container, you can do so by simply executing a docker run command using the madflojo/redis-tls image:

$ docker run -d -p 6379:6379 -v /path/to/certs:/certs --name redis-tls madflojo/redis-tls

The contents of the build directory and Dockerfile are also available on GitHub for those interested in contributing or modifying the redis-tls image.

Have another tool for running multi-process containers? Throw a comment below and share your experiences.
