Pushing Images to Docker Hub with Codeship Pro

Written by: Tit Petric

When it comes to delivering software, the accepted method is to make a build whenever something in your git repository changes. When the build is done and tested, you can release your software; for example, publishing it on GitHub as a versioned release or pushing a Docker image to Docker Hub or your private registry. This process is known as continuous integration, and with automated releases on top of it, continuous delivery.

For our example, we will take an ID generator service. I am going to use Codeship as the CI that will create the build, release the appropriate binaries to GitHub, and push a release to Docker Hub.

What will happen every time I push changes to GitHub is:

  1. GitHub will notify Codeship via a webhook that I made a push

  2. Codeship makes a git checkout

  3. Based on codeship-services.yml, a build environment is created (software installed, etc.)

  4. Based on codeship-steps.yml, your software is built and deployed

There are a few other parts tucked away within these steps, which I'll point out as we go.

Starting with Codeship

First off, signing up for Codeship is simple, and their free tier gives you 100 builds per month. We will be using the Codeship Pro product, which has full Docker support and customizable CI environments, and lets you run builds locally. Before getting ahead of myself, let me walk you through creating our first project.

After you click the big green button, Codeship will ask you which project you'd like to create. They support GitHub, Bitbucket, and GitLab repositories out of the box. Our service, sonyflake, is hosted on GitHub, so I'm using that to connect to Codeship.

When connecting your repository, be sure to choose the Codeship Pro plan. At this point, we will need to install their Jet CLI. There's full documentation available on their Running Codeship's Jet Locally for Development page.

Why do we even need Jet?

Jet is used to locally debug and test builds for Codeship Pro, as well as to assist with several important tasks like encrypting secure credentials.

Very simply put, after you install Jet, you can run jet steps in your code checkout to see how your build will perform on Codeship Pro. In development, you'll run it until you're satisfied that your build runs as it should -- when you push your code, GitHub will trigger the build on Codeship.
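
For example, here's how I run it from the root of my checkout (hedging a little on the flags: --tag, as I recall from the Jet documentation, simulates the branch being built, which matters later when we restrict steps to master):

cd sonyflake
jet steps --tag master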

Since I'm setting up a release to Docker Hub and to GitHub, I'll definitely need to securely add some credentials to the container where I'll be building our app. Codeship provides an AES key for each project, which Jet uses to encrypt these credentials.

I saved the key as codeship.aes into the sonyflake repository and made sure that I didn't commit it by mistake by adding it to the .gitignore file.

I will need a number of environment variables that will allow me to build and release sonyflake. Save these environment variables into a file named .env:

DOCKER_REGISTRY_USERNAME=titpetric
DOCKER_REGISTRY_PASSWORD=ImNotTellingYou
GITHUB_TOKEN=somethingsecret

The GITHUB_TOKEN should be generated on GitHub under "Developer Settings," "Personal Access Tokens." The "repo" scope should be set for the token, so it can create releases and upload files.

The other environment variables facilitate the login to the Docker Hub, so we can push our built Docker image when we finish building it. Be sure to add /.env to the .gitignore file.
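
To keep both secrets out of git, I append them to .gitignore:

echo "codeship.aes" >> .gitignore
echo "/.env" >> .gitignore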

To encrypt these secrets, you can simply issue:

jet encrypt .env .env.encrypted

This will use your codeship.aes key to encrypt your environment variables, and after you do this, you can add and commit the .env.encrypted file. The file will be referenced from the codeship-services.yml definition.
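
If you ever want to verify the round trip, Jet also has a decrypt command (mirroring encrypt, as far as I remember); writing to a scratch file keeps the original .env untouched:

jet decrypt .env.encrypted .env.decrypted
diff .env .env.decrypted && echo "round trip OK"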

Services with Codeship

Your codeship-services.yml declares the services used to run CI/CD builds with Codeship. Each of these services will produce a Docker image, which will then be used to run individual steps in codeship-steps.yml. As our service is written in Go, we will use the builder pattern to produce a Docker image that will contain only the statically compiled executable.

We will also publish the binaries to GitHub, so we will need to have github-release as a dependency.

FROM golang:1.8-alpine
MAINTAINER Tit Petric <black@scene-si.org>
RUN apk --update add bash make docker && go get -u github.com/aktau/github-release
WORKDIR /go/src/app

For building the actual application, Dockerfile.build is used. In it, all the development-time dependencies, like make and others, are specified. Even Docker is such a dependency, as we need the Docker client inside the build container to build our release image.
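
Jet builds this image for us during the pipeline, but while debugging it can be handy to build and enter it by hand; a minimal sketch (the sonyflake-build image name is just my local choice):

# image name is arbitrary; WORKDIR /go/src/app comes from Dockerfile.build
docker build -f Dockerfile.build -t sonyflake-build .
docker run --rm -it -v $(pwd):/go/src/app sonyflake-build bash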

When declaring the service for our build environment, I chose Codeship's Docker integration feature (add_docker) to provide a Docker environment to the build container. The service file for sonyflake looks like this:

sonyflake:
  build:
    dockerfile: Dockerfile.build
  add_docker: true
  encrypted_env_file: .env.encrypted
  volumes:
    - ./:/go/src/app

Looking at both the Dockerfile.build above and the declaration of the sonyflake service, you will notice that the source code for sonyflake is passed in via the service's volumes option. This enables us to run multiple steps later that use the same volume, so we can separate our build step, which produces the binaries, from our deploy steps, which publish them.
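
If I recall the Jet CLI correctly, it also ships with a validate command that checks codeship-services.yml and codeship-steps.yml for errors before you ever push:

jet validate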


CI Steps with Codeship

To use the service that we declared, we need to set up a codeship-steps.yml file. This file will declare all the commands that we want to run in our pipeline. I am creating a serial group of steps that run in sequence, separating my "build" into two steps because I'm interested in how much time it takes to compile my application as well as how much time it takes to create a Docker image. I will see this information via the Codeship dashboard.

- type: serial
  tag: master
  service: sonyflake
  steps:
    - name: 'prepare'
      command: ./codeship-build.sh prepare
    - name: 'build go'
      command: ./codeship-build.sh build-go
    - name: 'build docker'
      command: ./codeship-build.sh build-docker
    - name: 'release'
      command: ./codeship-release.sh

The tag: master line is important here. We want to target only the commits/pushes that happen on this branch. When we push files to GitHub releases, a new tag is created in the repository -- this in turn triggers another webhook event, which would start another Codeship build. This line was added to prevent that infinite loop.

I wrapped my logic in two scripts. The script codeship-build.sh essentially invokes any possible step from the Makefile, while at the same time providing additional information that will be burned into our release container.

#!/bin/bash
set -e
# Get git commit ID
CI_COMMIT_ID=${CI_COMMIT_ID:-$(git rev-list HEAD --max-count=1)}
CI_COMMIT_ID_SHORT=${CI_COMMIT_ID:0:7}
# Get latest tag ID
CI_TAG_ID=$(git tag | tail -n 1)
if [ -z "${CI_TAG_ID}" ]; then
    CI_TAG_ID="v0.0.0";
fi
CI_TAG_AUTO="${CI_TAG_ID}"
if [ -f "build/.date" ]; then
    CI_TAG_AUTO="$(echo ${CI_TAG_ID} | awk -F'.' '{print $1 "." $2}').$(<build/.date)"
fi
make -e CI_TAG_ID=${CI_TAG_ID} \
     -e CI_TAG_AUTO=${CI_TAG_AUTO} \
     -e CI_COMMIT_ID=${CI_COMMIT_ID} \
     -e CI_COMMIT_ID_SHORT=${CI_COMMIT_ID_SHORT} "$@"
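
Since the script just forwards its arguments to make, the pipeline steps can be reproduced locally. As an illustration (assuming the latest tag is v1.0.3 and build/.date contains 170402-2022, CI_TAG_AUTO resolves to v1.0.170402-2022):

./codeship-build.sh prepare
./codeship-build.sh build-go
./codeship-build.sh build-docker  # CI_TAG_AUTO=v1.0.170402-2022 becomes the GITTAG build-arg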

The Makefile itself contains all the build commands that produce our binaries (keep in mind that recipe lines in a Makefile must be indented with tabs).

all:
    @echo 'Usage: make <prepare|build-go|build-docker>'
build-go: build/sonyflake build/sonyflake.exe
    @echo "Build finished"
build/sonyflake:
    CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -o build/sonyflake main.go
    cd build && tar -zcvf sonyflake_linux_64bit.tgz sonyflake && cd ..
build/sonyflake.exe:
    CGO_ENABLED=0 GOOS=windows GOARCH=amd64 go build -o build/sonyflake.exe main.go
    cd build && tar -zcvf sonyflake_windows_64bit.tgz sonyflake.exe && cd ..
build-docker:
    docker build --rm -t titpetric/sonyflake --build-arg GITVERSION=${CI_COMMIT_ID} --build-arg GITTAG=${CI_TAG_AUTO} .
prepare:
    @rm -rf build &amp;&amp; mkdir build
    @date +"%y%m%d-%H%M" > build/.date
    @echo "Build folder prepared"
.PHONY: all build-go build-docker prepare

With the declared targets, we do a little bit of Go magic; we produce a statically linked binary for 64-bit Linux, which we will also pack into a Docker image. We also provide a 64-bit Windows executable, which we will push as a release to GitHub.

FROM alpine:3.5
MAINTAINER Tit Petric <black@scene-si.org>
ARG GITVERSION=development
ARG GITTAG=development
ENV GITVERSION=${GITVERSION} GITTAG=${GITTAG}
ADD ./build/sonyflake /sonyflake
ENTRYPOINT ["/sonyflake"]

As I was saying, it's a good idea to tag your Docker images with commit hashes. The commit hash is an indicator as to which commit from git was used to build the image you are pulling. This hash could be used to "lock" your deployment to a specific version, or it could be used to revert some breaking changes in your latest builds. Either way, useful.

The release script derives the same git variables, but here they are used to tag the GitHub release as well as to tag our produced Docker image and push it to Docker Hub. I'm going to break down the file codeship-release.sh to better illustrate the individual steps of the release process.

#!/bin/bash
set -e
# Get git commit ID
CI_COMMIT_ID=${CI_COMMIT_ID:-$(git rev-list HEAD --max-count=1)}
CI_COMMIT_ID_SHORT=${CI_COMMIT_ID:0:7}
# Get latest tag ID
CI_TAG_ID=$(git tag | tail -n 1)
if [ -z "${CI_TAG_ID}" ]; then
    CI_TAG_ID="v0.0.0";
fi
CI_TAG_AUTO="${CI_TAG_ID}"
if [ -f "build/.date" ]; then
    CI_TAG_AUTO="$(echo ${CI_TAG_ID} | awk -F'.' '{print $1 "." $2}').$(<build/.date)"
fi

The beginning of the release script takes care of resolving the latest commit IDs and a semver-style tag, starting at v0.0.0 if no tags exist in your git repository yet. This information serves as a small audit trail and is used to tag the Docker image as well as the release on GitHub.

## Login to docker hub on release action
if [ ! -f "/root/.docker/config.json" ]; then
    docker login -u $DOCKER_REGISTRY_USERNAME -p $DOCKER_REGISTRY_PASSWORD
fi

We are using the environment from our encrypted .env.encrypted file to log into Docker Hub. This enables us to use docker push from the release step.
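
One aside: newer Docker clients warn when the password is passed as an argument; if your Docker version supports it (17.07+, as far as I know), --password-stdin keeps the secret out of the process list:

echo "${DOCKER_REGISTRY_PASSWORD}" | docker login -u "${DOCKER_REGISTRY_USERNAME}" --password-stdin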

function github_release {
    TAG="$1"
    NAME="$2"
    latest_tag=$(git describe --tags $(git rev-list --tags --max-count=1))
    github-release release --user titpetric --repo sonyflake \
        --tag "$TAG" --name "$NAME" \
        --description "$(git log --oneline ${latest_tag}..HEAD)"
}

This is a bash function that uses github-release to create a new release on GitHub. For a bit of magic, a short git log --oneline listing since the previous tag is added as the description of the release. This provides a minimal changelog, giving a short overview of the changes between releases and linking each commit to its detail view on GitHub.

function github_upload {
    echo "Uploading $2 to $1"
    github-release upload \
        --user titpetric \
        --repo sonyflake \
        --tag "$1" \
        --name "$(basename $2)" \
        --file "$2"
}

The function github_upload adds a new file to the release. This should be invoked once for every file that you want to upload.

## Release to GitHub
github_release ${CI_TAG_AUTO} "$(date)"
FILES=$(find build -type f | grep tgz$)
for FILE in $FILES; do
    github_upload ${CI_TAG_AUTO} "$FILE"
done

We create a new release, tagging it with $CI_TAG_AUTO. We add the files that we created in the build step to the release, with calls to github_upload.

## Release to Docker Hub
docker tag titpetric/sonyflake titpetric/sonyflake:${CI_COMMIT_ID_SHORT}
docker push titpetric/sonyflake:${CI_COMMIT_ID_SHORT}
docker push titpetric/sonyflake:latest

And the final part of the release: we tag our image with the short commit ID, and we push it and the latest image to Docker Hub. This way, we can ensure that latest will always be up to date, while individual tags can be used for locking an image when you need to run a specific version.

As we added various information from git to our container, we can inspect this information from the container's environment. We can override the entrypoint to run something else in the container -- env, for example, which prints the environment variables:

# docker run --rm --entrypoint=env titpetric/sonyflake | grep GIT
GITVERSION=e22a185d64838209ee3f62faf322fa0e8add3f70
GITTAG=v1.0.170402-2022

So, if you have a running container, you know exactly which version it's running. You can run the exact same container on another host by using the GITVERSION above to pull/run the tagged image.
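
Using the example output above, pinning another host to that exact build would look something like this (the image tag is the short, seven-character prefix of GITVERSION):

docker pull titpetric/sonyflake:e22a185
docker run -d titpetric/sonyflake:e22a185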

If you want just the binary, you can use the GITTAG value to find the matching release on GitHub and download it from there. This information makes your releases flexible: GITVERSION matches both the Docker registry tag and a commit on GitHub, while GITTAG matches a compiled GitHub release.
