Building Docker Images Using Codeship Pro and Packer

Written by: Sergey Bahchissaraitsev

When Codeship Pro added support for building Docker images using Codeship Jet CLI, it aligned perfectly with the timeline of our infrastructure migration to Kubernetes at Xplenty. We’re migrating our data integration platform as a service from AWS EC2 and Google Compute-based VMs to a fully Dockerized infrastructure on Kubernetes. We’re using Packer to build our images across multiple platforms, and we wanted to use our same infrastructure and configuration when building Docker images too.

Enter Codeship Pro

Starting with Codeship Jet CLI was easy: The documentation is well written, and the configuration part is easy to understand. What we were lacking at the time was documentation about using Packer, and there were only a few examples of how to do Docker in Docker (starting one Docker container from within another) on Codeship.

Those were the parts we had to investigate ourselves, so in this article we want to share our experience of building Docker images on Codeship Pro using Packer.

Local Jet development environment

One of the things that has impressed me since I started working with Codeship Pro is how simple it is to simulate the build process on your local development environment.

Building with Packer is no different. To try this build in your local environment, just navigate to the folder containing the sources of this demo in your terminal and type:

jet steps
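
For reference, here is roughly what the demo repository looks like. The codeship-services.yml and codeship-steps.yml names are the defaults the Jet CLI looks for; the remaining files are covered throughout the rest of this article:

Dockerfile                      # builds the Docker image that runs Packer
codeship-env.decrypted          # Docker Hub credentials as environment variables
codeship-pro-packer-demo.json   # the Packer template
codeship-services.yml           # Jet services configuration
codeship-steps.yml              # Jet steps configuration
func.sh                         # the file uploaded by the Packer provisioner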

Flexibility

While working with Codeship Pro, we were able to go beyond the standard use cases and meet all of our requirements, building our Docker images exactly as we needed with little effort.

Some examples of our requirements included:

  • Using Packer instead of Dockerfile

  • Working cross-platform to build AWS AMIs, Google Compute images, and Docker images

  • Pushing to the Docker registry with Packer

  • Deploying on Kubernetes cluster

Docker in Docker (DinD)

I think it’s important to understand the Docker in Docker (DinD) concept, which is, in short, running one Docker container from another.

A simple example of this is just executing a Docker client run command from container A to bring container B up on the same host machine. That’s the case I describe in this post.

A more complicated example would be actually running container B inside container A or on a Docker cluster of some sort. That’s a little less relevant for our case, so I’m not going to elaborate on it.

There’s a good article with more information about DinD: Docker Can Now Run Within Docker. To be honest, we used a different tactic, but the concept remains the same.
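
If you want to try the sibling-container pattern outside of Codeship, one common tactic (not the one Codeship uses, as the add_docker section below explains) is to mount the host’s Docker socket into container A; the image and command here are just an illustration:

# Container A receives the host's Docker socket, so its Docker client
# talks to the host daemon and starts container B as a sibling.
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:1.11.2 \
  docker run --rm alpine echo "hello from container B"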

The Dockerfile for Packer

The first thing to produce is the Dockerfile, which builds a Docker image with Packer installed. A few things to note about this file:

  1. We decided to build our Docker image with Packer, which does not use a Dockerfile. This Dockerfile is only for the container that will execute the Packer commands.

  2. We needed a working Docker client inside this image, so we decided that using an existing Docker image preloaded with it would be a good starting point.

FROM docker:1.11.2
ENV PACKER_VERSION 0.10.1
# Download the Packer release and install the binary alongside the Docker client.
RUN curl -s -O https://releases.hashicorp.com/packer/${PACKER_VERSION}/packer_${PACKER_VERSION}_linux_amd64.zip && \
    unzip packer_${PACKER_VERSION}_linux_amd64.zip -d /usr/bin && \
    rm packer_${PACKER_VERSION}_linux_amd64.zip
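
Jet builds this image automatically from the services file shown later, but you can also build it by hand to verify the Dockerfile; the tag matches the image name used in the services configuration:

docker build -t codeship-pro-packer-demo/packer .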

The Packer JSON File Example

Now that we have a Docker image with Packer, we need a Packer JSON file that we can execute.

I wasn’t sure what Docker image to build for this example when writing the article, so I decided to go with my stack-leader repository, which just executes a Bash loop.

{
  "variables": {
    "dockerhub_username": "{{env `DOCKERHUB_USERNAME`}}",
    "dockerhub_password": "{{env `DOCKERHUB_PASSWORD`}}",
    "dockerhub_email": "{{env `DOCKERHUB_EMAIL`}}",
    "dockerhub_push_repository": "{{env `DOCKERHUB_PUSH_REPOSITORY`}}"
  },
  "builders": [
    {
      "type": "docker",
      "image": "bahchis/stack-leader:latest",
      "export_path": "/tmp/bahchis_stack_leader.tar",
      "ssh_pty": true
    }
  ],
  "provisioners": [
    {
      "type": "file",
      "source": "func.sh",
      "destination": "/stack-leader/func.sh"
    }
  ],
  "post-processors": [
    [
      {
          "type": "docker-import",
          "repository": "{{user `dockerhub_push_repository`}}",
          "tag": "{{ timestamp }}"
      },
      {
          "type": "docker-push",
          "login": "true",
          "login_username": "{{user `dockerhub_username`}}",
          "login_password": "{{user `dockerhub_password`}}",
          "login_email": "{{user `dockerhub_email`}}"
      }
    ]
  ]
}

A quick overview of the JSON file:

  • variables: These are used for pushing the image to Docker Hub. You can read more in the “Pushing the Image” section later.

  • builders: The part that tells Packer to build a Docker image based on another image, along with a few additional arguments.

  • provisioners: The commands executed while building the Docker image; in our example, just a simple file upload (an illustrative func.sh is sketched after this list).

  • post-processors: There are two: docker-import, which receives the image artifact from the builder and imports it locally to apply the repository name and tag; and docker-push, which pushes the image to the Docker registry.
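
The build doesn’t depend on what func.sh contains, but to make the example concrete, a hypothetical stand-in for the Bash loop mentioned earlier could look like this (the real file lives in the stack-leader repository):

#!/bin/bash
# Hypothetical stand-in for func.sh: print a heartbeat forever.
while true; do
  echo "stack-leader heartbeat: $(date)"
  sleep 5
done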

Configuring Jet Services and Steps YAML Files

The Codeship approach with the Jet platform is simple: before you define what to do (a step), you configure where to do it (a service).

In order to successfully run a build on Codeship Pro, you need to configure at least two YAML files, one for services and another for steps.

Services

codeship_pro_packer_demo:
  add_docker: true
  build:
    image: codeship-pro-packer-demo/packer
    dockerfile_path: Dockerfile
  volumes:
    - /root/.packer.d:/root/.packer.d
  env_file: codeship-env.decrypted

In the YAML file above, we define a few interesting parts:

  • add_docker: This is the most important part. By default, the Docker client talks to a local Docker daemon, so all operations and containers run at the same level as the CLI. This boolean parameter sets the Docker environment variables so that the Docker client inside our Packer image communicates with the host daemon instead of looking for a local one.

  • build: Just the image to be used for Packer. We use our Dockerfile to build one, but we could just as easily pull an image from a Docker registry.

  • volumes: This is the standard way to define volumes for containers, but in this case it has a very specific and crucial impact. Packer internally uses files under this directory to get the output and exit code of the commands executed on Docker. If the same folder is not present in both containers (the one being built and the one running Packer), the build process will simply hang.

  • env_file: The environment variable file containing the credentials for the Docker Hub login. We keep it encrypted in the repository; to understand how this works, please follow Encrypting Environment Variables. A quick example of the encryption workflow follows this list.
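
Assuming the project’s AES key has been downloaded from Codeship and saved as codeship.aes in the repository root (the default location the Jet CLI checks), encrypting and decrypting the credentials file looks roughly like this:

# Encrypt the plaintext credentials so they can be committed safely...
jet encrypt codeship-env.decrypted codeship-env.encrypted

# ...and decrypt them again before running jet steps locally.
jet decrypt codeship-env.encrypted codeship-env.decrypted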

Steps

- service: codeship_pro_packer_demo
  name: Build codeship pro packer demo
  command: packer build -machine-readable codeship-pro-packer-demo.json

It’s a very simple YAML file in this case:

  • service: the name of the service we defined above.

  • name: a friendly name for our build step.

  • command: the command we use to execute and build using Packer (a quick local sanity check for the template is shown below).
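
When iterating on the template itself, it helps to catch mistakes before running a full build; Packer ships with a validate command for exactly that:

packer validate codeship-pro-packer-demo.json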

Pushing the Image

The goal is to get the newly created Docker image into a Docker registry so we can easily deploy it on our servers.

The first question you’re probably asking yourself is why we’re using the Packer docker-push post-processor instead of the Codeship Pro push step.

Well, all the code in this article is a simplified version of our real environment. We have multiple builders (AWS/Google/Docker), multiple provisioners, and multiple post-processors, and we rely on timestamps for our tags. It was a simpler option to just run the same way on Codeship Pro rather than change how things currently work.

The way it works is that the codeship-env file contains the environment variables with the credentials required to push the Docker image to Docker Hub. The post-processor in the Packer JSON uses those environment variables as parameters and executes the push command with the required login information.
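
Concretely, the credentials file holds nothing more than the four variables referenced in the Packer template’s variables block (the values here are placeholders):

DOCKERHUB_USERNAME=your-username
DOCKERHUB_PASSWORD=your-password
DOCKERHUB_EMAIL=you@example.com
DOCKERHUB_PUSH_REPOSITORY=your-username/stack-leader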

To use the Codeship push step instead, the procedure is very similar. You need to configure another step, and instead of encrypting an environment file, you encrypt your Docker config JSON and pass it as encrypted_dockercfg_path in the steps YAML configuration; a sketch of such a step follows.
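
As a rough sketch (the image name and file names here are illustrative; see the Codeship documentation for the exact options), a push step could look like this:

- service: codeship_pro_packer_demo
  name: Push codeship pro packer demo
  type: push
  image_name: your-username/stack-leader
  registry: https://index.docker.io/v1/
  encrypted_dockercfg_path: dockercfg.encrypted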

Running the build on Codeship

This is what we’re aiming for: with all the configuration above in place, the build should run successfully via the Codeship Jet CLI. The Codeship project has to be configured with Docker support enabled and connected to the Git repository.

Once all of that is configured, it should be just a matter of minutes before a new Docker image is available in the Docker registry after a git push.
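
Because the docker-import post-processor tags the image with a timestamp, the freshly pushed result can be pulled anywhere right away (the repository name and tag here are illustrative):

docker pull your-username/stack-leader:1468241234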

Conclusion

Thanks to Codeship Pro, building Docker images with Packer, migrating services from VMs to Docker, and continuing to use Codeship for CI was a really easy task for us. Most importantly, it saved us a lot of time and allowed us to focus on building and improving our product.

We were able to build and deploy the way we prefer in our continuous deployment environment and keep using our tools of choice, such as Packer. When it comes to creating both VM and Docker images for multi-cloud, multi-region environments like Xplenty’s data integration platform, Packer is definitely a very good tool for the task.

I hope this article is helpful for readers trying to build Docker images on Codeship Pro using Packer, and I encourage you to share your experience in the comments.
