Deploying to Rancher Using Codeship Pro

Written by: Brendan Fosberry
11 min read

Rancher is a container management platform that helps bridge the gap between container stacks and infrastructure platforms. It supports the ideology of building your own platform through composition by allowing users to connect to custom hosting providers, select an underlying container platform, and run infrastructure containers on top of that platform.

Similarly, Codeship Pro allows you to compose your pipeline and connect custom deployments using containers. Let’s take a look at how to set up services in Rancher and integrate them with your Codeship Pro deployment pipeline.

When it comes to container platforms, there are many options to explore, including customizable, low-level options such as Kubernetes and Mesos, and more abstract application-focused platforms such as Deis and Rancher. Unlike many other options, Rancher allows a high degree of customization, including integrating with almost any hosting provider through Docker Machine drivers. It also allows the creation of Swarm, Mesos, Kubernetes, or Cattle clusters for scheduling and managing containers. You can read more in the Rancher documentation.

Along with this high degree of infrastructure composability, Rancher exposes a host management API and UI. The standard APIs and tooling of the underlying container orchestration layer are also available, such as the Docker API or the Kubernetes UI. Rancher exposes further generic API and UI elements for managing containers, custom registries, SSO, credential management, and other resources. These tools along with the rancher-compose CLI provide a very automation-friendly container management interface.

Using Rancher

At a high level, Rancher separates your infrastructure out into different environments, each tied to a specific container orchestration technology. Rancher supports Mesos, Kubernetes, and its own Cattle out of the box, as well as experimental versions of Swarm and Windows. Any new hosts added to an environment are automatically provisioned and added to the cluster. The selected orchestrator affects how your containers are scheduled and executed, as well as the high-level configuration of your stack.

Rancher's UI supports the concepts of each container technology, such as replica sets and deployments for Kubernetes. Since the standard APIs for the underlying container technology are exposed, your Rancher deployment should be compatible with existing tools and libraries. The rancher-compose CLI is also a great way to configure and scale your services. However, it integrates at the stack level, a concept that not every container orchestrator applies to standard services.

Rancher and Codeship Pro

Since Rancher supports various underlying container platforms, each with unique interfaces, integration will vary based on the platform used. For this example, we will use Rancher's Cattle; the process should be much the same with other platforms, though perhaps with different tooling.

Integrating this tooling with your pipeline is simply a matter of encrypting credentials and running containers with the relevant tooling installed as part of the base image.

Bootstrap application

We can start by defining a simple application. For now, we’ll use a very simple Go program which will respond to an HTTP request on port 8080.


package main

import (
        "log"
        "net/http"
)

func main() {
        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
                if _, err := w.Write([]byte("OK")); err != nil {
                        log.Printf("failed to write response: %s\n", err.Error())
                }
        })

        err := http.ListenAndServe(":8080", nil)
        if err != nil {
                log.Fatal("listenAndServe: ", err)
        }
}
Let’s put this in a Docker image. For the time being, we’ll use a standard Golang container to compile and execute our code. When deploying Go apps, however, it’s far more efficient to compile your Go binary and ship it in a scratch or other minimal container. You can read more about this process in the Codeship blog post on minimal Golang containers.
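As a sketch of that minimal-container approach (multi-stage builds require Docker 17.05 or later; the stage name and paths here are illustrative, not what this article's image uses):

```dockerfile
# Stage 1: compile a statically linked binary
FROM golang AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: ship only the binary in an otherwise empty image
FROM scratch
COPY --from=build /app /app
EXPOSE 8080
ENTRYPOINT ["/app"]
```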

FROM golang
ADD . .
RUN go build -o app .
ENTRYPOINT "./app"
EXPOSE 8080

With this basic framework in place, we can now use Docker to build and distribute our application. I’ll name the image bfosberry/myapp; you should name yours something relevant under your own Docker Hub username.

$ go build ./
$ docker build -t bfosberry/myapp ./
Sending build context to Docker daemon 5.858 MB
Step 1/5 : FROM golang
 ---> 9ad50708c1cb
Step 2/5 : ADD . .
 ---> 59a5c02609d6
Removing intermediate container d8eab74e1253
Step 3/5 : RUN go build -o app .
 ---> Running in 3ec9a702d326
 ---> 2245dc22b894
Removing intermediate container 3ec9a702d326
Step 4/5 : ENTRYPOINT "./app"
 ---> Running in 29d1f25871c3
 ---> 312fdcdea57c
Removing intermediate container 29d1f25871c3
Step 5/5 : EXPOSE 8080
 ---> Running in e5ed81591f3a
 ---> da75cf231098
Removing intermediate container e5ed81591f3a
Successfully built da75cf231098
$ docker run -it -p 8080:8080 bfosberry/myapp
... [in a new terminal] ...
$ curl localhost:8080
OK
$ docker push bfosberry/myapp
The push refers to a repository []
55d8bb8cca51: Pushed
d17d48b2382a: Pushed

Setting up Rancher

In a production system, you would most likely design your application from the ground up and write configuration files directly. However, for demonstration purposes, we can design our application stack in the Rancher UI and download the generated configs. To do this, we’ll need to set up a couple of components to support running our application in Rancher.

First of all, we’ll have to connect a host to run our application on. You can connect a host through the infrastructure menu, and Rancher will provision it using Docker Machine if needed. Any hosts provisioned will have Docker installed and will run the Rancher agent in a container.

One of the strengths of Rancher is the flexibility in adding hosts: you can use one of the standard integrations such as DigitalOcean or AWS, use a custom Docker Machine driver, or simply execute a generated command on a custom host to register it with Rancher.

While a host is provisioning, we can create a new User-level stack for our application.

As you can see, you can paste in (or POST to the API) a pre-created compose file defining your stack, but in our case we don’t have one yet. For our stack, we’ll define our custom image, in this case bfosberry/myapp (or the name of the image you created and pushed). To simulate a fully fledged application, let’s also attach a currently unused DB and a Rancher LB to route traffic to our set of containers.

Let’s add a service to our stack. It’s possible to reference a private Docker image by adding a registry record with credentials to your Rancher environment. You can find out more in the Rancher documentation.

Next, let's add a sample sidekick container for our DB by clicking on the Add Sidekick Container button at the top. This isn’t really needed, but we can add it for demonstration purposes. Sidekick containers are similar to links except, as with Kubernetes Pods, they execute alongside each primary container on the same host. It’s important to understand this difference, which should be guided by whether you want to scale your linked containers independently or maintain a 1:1 ratio.
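In compose terms, Cattle models sidekicks with a label on the primary service rather than a link. A sketch of what the exported config might contain (exact output varies by Rancher version):

```yaml
myapp:
  image: bfosberry/myapp
  labels:
    io.rancher.sidekicks: db
db:
  image: postgres
```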

With this sidekick container configured, we can create our service. You may notice we did not expose any ports directly. By keeping services restricted to the private Rancher network, we can reduce port conflicts and instead use a Rancher load balancer to route and balance traffic between our containers. We can add this from our Stack view under the Add Service submenu.

We’ll want to create a load balancer on every host. With this in place, we can hit any of the hosts in our Cattle cluster via the external port reserved for our service. The request will be routed and balanced across the cluster and across our service’s containers.

Under the Stack menu, you can export the config for your stack, which includes Docker Compose and Rancher Compose files. We’ll need to store these in our repository in order to interact with the stack using Rancher Compose, and we can use this as a reference for our stack for development. Extract the archive and store the contained files in your project code folder under the environments/production folder.

At this point, you should be able to ping the health check for our application on port 8081 on any of the nodes in your cluster, but port 8080 should not respond. This is because each container listens on port 8080 on a private virtual network; the load balancer we configured routes requests to one of the containers within that network.

$ curl $NODE1_PUBLIC_IP:8081
$ curl $NODE2_PUBLIC_IP:8081


Deploying to Rancher

Let’s define a starter CI/CD pipeline using the Codeship Pro service and step formats to test our new application. Although no tests have been written yet, the go test command should pass. This provides a baseline for setting up an automated deployment to Rancher.

Normally, I would strongly advise against setting up continuous deployment until you have a lot of faith in your continuous integration process and overall testing. However, this is for demonstration purposes only.


codeship-services.yml:

app:
  build:
    image: bfosberry/myapp
    dockerfile: ./Dockerfile
  links:
  - db
  ports:
  - "8080"
  entrypoint: /usr/local/go/bin/go
db:
  image: postgres

codeship-steps.yml:

- service: app
  command: test ./...

The image for the app service is built using the Dockerfile we added earlier, and for the purposes of this example, constitutes the build artifact we will be pushing to Docker Hub. To establish this as part of our pipeline, we need to:

  • Push the image to Docker Hub;

  • and call Rancher to upgrade the service and pull the latest version of the image.

Pushing an image is simple. We just need to encrypt some Docker Hub credentials that allow us to push, commit them to our repo, and add a push step to our pipeline. To find out more information on how to push to a Docker registry, see the Codeship image push documentation.
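As a sketch, assuming the Jet CLI is installed and your project’s AES key has been downloaded from your Codeship project settings (the filenames here are examples):

```shell
# jet reads the project AES key from codeship.aes by default;
# keep both the key and the plaintext credentials out of version control.
jet encrypt dockercfg dockercfg.encrypted
```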


- service: app
  command: test ./...
- service: app
  type: push
  image_name: bfosberry/myapp
  encrypted_dockercfg_path: ./dockercfg.encrypted
  tag: master

Finally, we need to use Rancher Compose to trigger a service upgrade. Be sure to check out the options Rancher Compose provides here. To interact with the Rancher APIs using Rancher Compose, we will need access credentials, which you should generate specific to the environment. This limits access and simplifies command execution, since interactions are naturally namespaced within the API.

To roll out an updated image for an existing service with Rancher Compose, you can use the up command:

$ rancher-compose -p $STACK_NAME --verbose up -d --force-upgrade --pull --confirm-upgrade $SERVICE_NAME

The up command coerces the service into a working state, -d ensures the service executes in the background, --force-upgrade makes sure the service is upgraded, --pull forces Rancher to pull the latest version of the image, and --confirm-upgrade tells Rancher to automatically accept the changes and clean up old containers when complete. You can separate the confirmation out into another command, which allows for reactive actions based on deployment failure such as rollbacks, but for now we will keep it simple.
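A split flow along those lines might look like this (a sketch; $STACK_NAME and $SERVICE_NAME are supplied by your pipeline, and flag behavior may vary by rancher-compose version):

```shell
# Start the upgrade but leave it unconfirmed
rancher-compose -p $STACK_NAME up -d --force-upgrade --pull $SERVICE_NAME
# After verifying health, confirm and clean up old containers...
rancher-compose -p $STACK_NAME up -d --confirm-upgrade $SERVICE_NAME
# ...or roll the service back to its previous version instead
rancher-compose -p $STACK_NAME up -d --rollback $SERVICE_NAME
```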

To run this as part of your pipeline, you need to encrypt your Rancher access credentials, add them to your project code repo, and build an image based on Rancher Compose. When building this image, you’ll need to add in the rancher-compose.yml and docker-compose.yml files you downloaded earlier.

Finally, you can execute this image as part of your steps file, triggering the rancher-compose command. Luckily, I have already created a Docker image containing Rancher Compose, available on Docker Hub as bfosberry/rancher-compose; creating your own is also fairly easy.


FROM bfosberry/rancher-compose
ADD environments environments
ADD bin/

#!/bin/sh
cd environments/production
rancher-compose --project-name Myapp --verbose up -d --force-upgrade --pull --confirm-upgrade Myapp

In this example, we have added the rancher-compose.yml and docker-compose.yml from Rancher into ./environments/production. We have also encrypted the endpoint, access key, and secret key for Rancher into environments/production/cideploy.env.encrypted. We then need to add a service for the deploy container.




app:
  build:
    image: bfosberry/myapp
    dockerfile: ./Dockerfile
  links:
  - db
  ports:
  - "8080"
  entrypoint: /usr/local/go/bin/go
db:
  image: postgres
deploy:
  build:
    dockerfile_path: Dockerfile.deploy
  encrypted_env_file: environments/production/cideploy.env.encrypted

Finally, we can add a step to execute this container.


- service: app
  command: test ./...
- service: app
  type: push
  image_name: bfosberry/myapp
  encrypted_dockercfg_path: ./dockercfg.encrypted
  tag: master
- service: deploy
  tag: master
  command: bin/

Then we can execute our pipeline with Jet and see Rancher updating our service by running jet steps --tag controller --push or just push our code up to build on Codeship Pro. Within the standard Jet output, you’ll see the Rancher Compose logs walking through the process of connecting to the Rancher endpoint and setting up the service.

DEBU[0000] Environment Context from file : map[]
DEBU[0000] Opening compose files: docker-compose.yml
DEBU[0000] [0/1] [DB]: Adding
DEBU[0000] [0/1] [MyLB]: Adding
DEBU[0000] Opening rancher-compose file: /app/environments/production/rancher-compose.yml
DEBU[0000] [0/3] [myapp]: Adding
WARN[0000] A newer version of rancher-compose is available: 0.12.4
DEBU[0000] Looking for stack myapp
DEBU[0000] Found stack: Myapp(1st17723)
DEBU[0000] Launching action for myapp
DEBU[0000] Project [myapp]: Creating project
DEBU[0000] [0/3] [DB]: Ignoring
INFO[0000] [0/3] [myapp]: Creating
DEBU[0000] Finding service myapp
DEBU[0000] [0/3] [MyLB]: Ignoring
DEBU[0000] Found service myapp
INFO[0001] [0/3] [myapp]: Created
DEBU[0001] Project [myapp]: Project created
DEBU[0001] Launching action for myapp
DEBU[0001] Project [myapp]: Starting project
DEBU[0001] [0/3] [DB]: Ignoring
INFO[0001] [0/3] [myapp]: Starting
DEBU[0001] Finding service myapp
DEBU[0001] [0/3] [MyLB]: Ignoring
DEBU[0001] Found service myapp
INFO[0007] Finished pulling bfosberry/myapp
INFO[0007] Finished pulling postgres
DEBU[0007] Finding service myapp
DEBU[0007] Found service myapp
INFO[0008] Updating myapp
INFO[0019] Upgrading myapp
DEBU[0062] Finding service myapp
DEBU[0062] Found service myapp
INFO[0062] [1/3] [myapp]: Started
DEBU[0062] Project [myapp]: Project started


After Jet has finished, the service may not have finished deploying, since we gave Rancher the option to confirm the deployment. Cleanup is handled outside of the rancher-compose command execution and is visible via the UI. This can cause issues with back-to-back deployments, so it’s generally a good idea to add a polling check that waits for the service to enter an active state.

Deploying to multiple environments

If you are trying to push to multiple environments, the principle should be the same. However, you’ll have several credentials to encrypt, potentially different stack names, and different Docker Compose/Rancher Compose files.

The simplest way to approach this is to keep a separate folder for each environment you deploy to, containing that environment’s Rancher Compose/Docker Compose files and an encrypted env file with its URL and access keys, plus a unique service in the codeship-services.yml file that loads that folder and encrypted environment. You can then reference a different service for each environment under different steps, potentially parallelized or restricted to different tags.
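A codeship-steps.yml for that layout might look like this (deploy_staging, deploy_production, and bin/deploy are hypothetical names for illustration):

```yaml
- service: deploy_staging
  tag: develop
  command: bin/deploy
- service: deploy_production
  tag: master
  command: bin/deploy
```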


Deploying to Rancher from Codeship Pro is relatively easy, especially when working with Rancher Compose. Due to the composable nature of Codeship Pro, this slots nicely into any Docker-centric CI/CD pipeline using the simple abstractions listed here. You can take a look at our documentation on Rancher to add Rancher keys, define your service, and deploy to Rancher.

Since Rancher Compose coerces your services into a predetermined state defined in your docker-compose.yml and rancher-compose.yml configuration files, you can generally use it as a tool to update or create services within new environments simply by cycling through each service in your stack with the up command.
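That cycling could be sketched as a simple loop (the service names here come from this example's stack; rancher-compose also accepts several service names in a single invocation):

```shell
# Coerce every service in the stack to its defined state
for svc in myapp DB MyLB; do
  rancher-compose -p "$STACK_NAME" up -d --force-upgrade --pull --confirm-upgrade "$svc"
done
```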

If you want to try out Rancher, be sure to check out the Rancher documentation. The source code for this blog post is also available at <>.
