Scale Your Docker Workflow with Codeship and Cloud 66

Written by: Daniël van Gils
9 min read

UPDATE: On January 1st, 2017, we rebranded our hosted CI platform for Docker from “Jet” to what is now known as “Codeship Pro”. Please be aware that the name “Jet” is now only used for our local development CLI tool. The Jet CLI is used to locally debug and test builds for Codeship Pro, as well as to assist with several important tasks like encrypting secure credentials.

Let's assume you’ve already devoured every Docker 101 tutorial for breakfast. You’re now ready to take that brilliantly crafted application into production. But wait... first you need to test your container-based microservices architecture. What does the whole DevOps workflow look like? What about performance and security? And last but not least, how do you run your microservices in production, and will they scale?

In this blog post, I'll be explaining what a proper Docker workflow looks like and how to manipulate it to make it your own. I'll guide you through the entire Docker development process, test your microservices architecture, and run your app in production on the cloud provider of your choice using Cloud 66 and the new Codeship Docker platform. To wrap things up, I'll also show you how to scale your app to greater heights.

Building a Microservice

To kickstart the process, we'll need a microservice. Let's create a simple API to store cat names in a MySQL backend.

We know that normally you wouldn't want your microservice to talk directly to a database. I too am a fan of loosely coupled services! But for the purpose of today's tutorial, let's park the conversation around loosely coupled services and containers for another blog post. Now back to the cats.

The Cats API has three responsibilities:

  • provide a list of all the added cat names

  • add a new cat name

  • remove all cat names from the list

The endpoints of our Cats API will look like this:

  • curl -X GET http://<your ip>:3000/api/cats

  • curl -X POST -d '{"name":"rocky"}' http://<your ip>:3000/api/cats

  • curl -X DELETE -d '{}' http://<your ip>:3000/api/cats

The node.js app can be found on GitHub. Please clone it onto your local dev machine, check out the source, and see how the project is structured.
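To give you a feel for what's inside before you dive into the repo, here's a minimal sketch of what the three endpoints could look like using Express and the mysql driver. Treat it as an illustration only: the table name, credentials, and exact responses are assumptions, and the real implementation in the repo may differ.

// Minimal sketch of the Cats API, for illustration only (the repo holds the real code).
// Assumes Express, body-parser, and the mysql driver; the `cats` table and credentials are hypothetical.
const express = require('express');
const bodyParser = require('body-parser');
const mysql = require('mysql');

const app = express();
app.use(bodyParser.json());

// The MySQL host matches the link alias we'll set up in docker-compose.yml later on.
const db = mysql.createConnection({
  host: process.env.MYSQL_ADDRESS || 'mysql.cloud66.local',
  user: 'root',
  database: 'cats'
});

// GET /api/cats: provide a list of all the added cat names
app.get('/api/cats', function (req, res) {
  db.query('SELECT name FROM cats', function (err, rows) {
    if (err) return res.status(500).json({ error: err.message });
    res.json(rows);
  });
});

// POST /api/cats: add a new cat name
app.post('/api/cats', function (req, res) {
  db.query('INSERT INTO cats SET ?', { name: req.body.name }, function (err) {
    if (err) return res.status(500).json({ error: err.message });
    res.status(201).json({ name: req.body.name });
  });
});

// DELETE /api/cats: remove all cat names from the list
app.delete('/api/cats', function (req, res) {
  db.query('DELETE FROM cats', function (err) {
    if (err) return res.status(500).json({ error: err.message });
    res.json({ deleted: true });
  });
});

// Inside the container the API listens on port 80 (mapped to 3000 on the host).
app.listen(80);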

Test First, Ask Questions Later

I'm taking the BDD approach to testing. Every microservice exposes a contract/software interface, which can be tested from the outside in.

For the node.js tests, we use the Mocha framework. For those with a Ruby background, you'll see the similarity with RSpec. With an outside-in test approach, all I care about is whether the implementation can provide us with a stable, secure, scalable, and high-performance microservice.
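To make that concrete, here's a sketch of what such an outside-in Mocha spec could look like, hitting the running API over HTTP. It assumes the supertest library and the API_ADDRESS environment variable the test container receives; the actual suite in the repo may be structured differently:

// Outside-in spec sketch for the Cats API (illustrative; the repo's suite may differ).
// Assumes mocha plus supertest, and API_ADDRESS pointing at the running api container.
const request = require('supertest');
const assert = require('assert');

const api = request('http://' + (process.env.API_ADDRESS || 'localhost:3000'));

describe('start all test', function () {
  it('should remove all existing cats from the database', function (done) {
    api.delete('/api/cats').expect(200, done);
  });
});

describe('the wonderful cat API', function () {
  it('should create a cat', function (done) {
    api.post('/api/cats').send({ name: 'rocky' }).expect(201, done);
  });

  it('should list all cats', function (done) {
    api.get('/api/cats')
      .expect(200)
      .end(function (err, res) {
        if (err) return done(err);
        assert(res.body.some(function (cat) { return cat.name === 'rocky'; }));
        done();
      });
  });
});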

Here's an overview of what a developer workflow will look like using Codeship and Cloud 66.

And this is what's going on under the hood:

  • Build all container images: the Cats API, the test service, and the notify service. In the end, only the API will be deployed to production.

  • Run the Cats API microservice and start the dependencies the service relies on, MySQL in this case.

  • Wait for those two guys to warm up.

  • NOTE: You'll never know how long it takes for a service to boot. Make sure to check that a service is running before making calls. Check this generic wait script (the idea is sketched right after this list).

  • Run the test service against the Cats API when everything is up and running.

  • Notify your Docker stack managed by Cloud 66 using the notify service to deploy the new Cats API when the tests are green.
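The wait script in the repo is a small shell helper, but the idea is simple enough to sketch in Node as well: keep retrying a TCP connection to the dependency until the port opens or a timeout kicks in. The file name and timeout below are made up for the example.

// wait-for.js: sketch of the "wait until a service is reachable" idea (the repo uses a shell script).
// Usage: node wait-for.js <host> <port>
const net = require('net');

const host = process.argv[2];
const port = Number(process.argv[3]);
const timeoutMs = 60000;
const start = Date.now();

function tryConnect() {
  const socket = net.connect({ host: host, port: port }, function () {
    // the port accepted a connection, so the service is up
    console.log(host + ':' + port + ' open');
    socket.end();
    process.exit(0);
  });
  socket.on('error', function () {
    socket.destroy();
    if (Date.now() - start > timeoutMs) {
      console.error('timed out waiting for ' + host + ':' + port);
      process.exit(1);
    }
    // not up yet, try again in a second
    setTimeout(tryConnect, 1000);
  });
}

tryConnect();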

So now you're ready to rock. Let's try this out!

  • git clone https://github.com/cloud66-samples/codeship_docker_workflow.git cats

  • cd cats

Time to test

The best way to get your microservice architecture built and running on your local dev system is by using the Compose (docker-compose) tool.

Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then by using a single command, you create and start all the services from your configuration.

  • Build all the services we need: docker-compose build

  • Run the test suite: docker-compose run test

Watch the generic wait scripts check that MySQL and the API are up and then start firing off those tests. Expected output:

$ docker-compose run test
api [172.17.0.3] 80 (http) open
mysql [172.17.0.2] 3306 (mysql) open
  start all test
    ✓ should remove all existing cats from the database (53ms)
  the wonderful cat API
    ✓ should create a cat
    ✓ should list all cats
  3 passing (83ms)

The works

That's it. You've tested the Cats API service. Take a look at our docker-compose.yml.

What you want to remember is that only the api service will be deployed to the production stack managed by Cloud 66. The rest of the services are for local development and testing and will be running on Codeship Docker.

# the infamous api
api:
  build: services/api
  ports:
  - "3000:80"
  links:
    - mysql:mysql.cloud66.local
# test suite to test the api outside-in
test:
  build: services/test
  links:
    - api:api.cloud66.local
    - mysql:mysql.cloud66.local
  environment:
    - API_ADDRESS=api.cloud66.local
# a mysql db for data storage
mysql:
  image: mysql
# a notifier script to tell Cloud 66 to redeploy our well tested api service
notifier:
  build: services/notifier

You'll notice we use an alias for our links. By linking the containers in our local and test environments this way, we don't need to change the code that locates the services. Depending on how service discovery is implemented on your Docker cluster, DNS-based service discovery is a great bet.
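In practice, that means the application and test code can refer to a stable name, either read from an environment variable or hard-wired to the alias, and never care which container or host actually answers. A tiny illustration (the variable names are the ones used in the compose file above):

// The services are always reachable under the same alias, whatever environment we run in.
// API_ADDRESS comes from the environment section; mysql.cloud66.local is the link alias.
const apiHost = process.env.API_ADDRESS || 'api.cloud66.local';
const mysqlHost = 'mysql.cloud66.local';

console.log('API at http://' + apiHost + ', MySQL at ' + mysqlHost + ':3306');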

Let me make one thing clear about working with images: the API image is the same in dev = test = staging = production. Treat the image the same at every step. Only the database name may change, because you don't want to lose any data.

Use Codeship for Running Your Tests

Nice! Does it run on the new Codeship Docker platform? Codeship just released a CLI tool called Jet to test your service locally and in the cloud. Follow these instructions and the guide to the Codeship Docker infrastructure to get Jet installed.

Jet needs a service configuration file called codeship-services.yml. It looks almost the same as docker-compose.yml, with a couple of exceptions.

# same, but no ports definition (can't be accessed from the interwebs)
api:
  ...
# same
test:
  ...
# same
mysql:
  ...
# little addition to get the secrets in!
notifier:
  ...
  encrypted_env_file: services/notifier/cloud66-deployment.env.encrypted

If you want to know about encrypted secrets and environment variables (and you should if you want to prevent security breaches), read this tutorial on how to use encrypted environment variables with Codeship.

Also, we need to define all the steps of our CI/CD workflow. You do this in codeship-steps.yml. Ours is as simple as a microservice should be.

The codeship-steps.yml configuration:

# run the test suite; because it depends on the api and mysql, those will start too
- name: test_step
  tag: master
  service: test
  command: /app/test.sh
# if the tests don't fail, redeploy on Cloud 66
- name: redeploy_step
  tag: master
  service: notifier
  command: /app/redeploy_hook_cloud66.sh

Ready to rock? Run the test suite using the Jet CLI to test if it works with Codeship: jet steps

When the tests hit green, we're ready to bring in the real magic. Follow the steps for creating a project on the Codeship Docker infrastructure, and automagically, a GitHub webhook triggers the tests.

Run the tests: docker-compose run test

Push the code changes to Git, and our test will run on the Codeship Docker infrastructure. Expected Codeship output:

But wait. I bragged about running this thing in production, so let's throw in some more magic. Check out the last step, called redeploy_step.

Run Docker in Production

If you're not familiar with Cloud 66, the service simplifies Ops for developers by building, configuring, and managing your servers on any cloud. Our Docker support provides a complete toolset for rolling out containers to production on your own infrastructure.

We only need this very simple service.yml to run the whole thing in a real Docker production environment. With the service.yml, you can do a lot of cool things to get the most out of your service.

---
services:
  api:
    git_url: git@github.com:cloud66-samples/codeship_docker_workflow.git
    git_branch: master
    ports:
    - 80:80:443
    build_root: services/api
databases:
- mysql

Okay. Hold on. You checked all the source code on our GitHub repo. But what if you're not into Node.js but more of a Ruby person? How can you start using the workflow right away? We have a Starter project just for that.

Cloud 66 Starter is an open-source command line tool to generate a Dockerfile, docker-compose.yml, and a service.yml file from arbitrary source code. The service.yml file is a Cloud 66 service definition file, which is used to define the service configurations on a stack.

So let's go back to where it all started, because I can hear you asking: how do I run Docker in production? Let me tell you.

Creating a Docker cluster

Register with Cloud 66, dial in with the credentials of your cloud provider of choice, and begin using the UI to start a Docker stack. Pop in the service.yml and off you go! Grab a coffee and sit back to get those servers provisioned.

Or be like me -- just install the CX toolbelt and get your Docker cluster up and running in no time:

$ cx stacks create --name "My Awesome Cat Stack" --environment "production" --service_yaml service.yml

Once Cloud 66 provisions your server(s) and your Docker cluster is up and running, sit back and enjoy the view:

Scale your cluster

So you want scale? Right then. You can scale vertically, add more Docker servers to your cluster to scale horizontally, and spawn more containers running the API image.

You're working hard on your API, so run the test (green! yeah!) and tell Cloud 66 to redeploy the running services. Let's add a redeploy hook to Codeship and trigger that event at Cloud 66.

Hook up Codeship to Cloud 66

We use the notifier service running during tests on Codeship. The notifier only runs one simple script:

#!/bin/bash
curl -X POST -d "" ${CLOUD_66_REDEPLOY_HOOK}

The script needs an environment variable called CLOUD_66_REDEPLOY_HOOK. How do you get one?

Once you’ve deployed your stack with Cloud 66, you’ll see a Redeployment Hook URL on your Stack Information page. To trigger a new deployment via Codeship, simply add the example notifier service (based on the ultra-small Alpine base image) as the last project step.

Conclusion

With the integration of Codeship and Cloud 66, you get an easy-to-use and fast Docker workflow pipeline, and you don't have to worry about running Docker in production. Let Codeship take care of all the (integration) testing and don't worry about your Docker cluster; Cloud 66 has got your back.

Happy DevOps time.

If you have a Philips Hue hanging in your office, check out this script to get some fancy red/green lights in your office during tests and deployments!

PS: You can also deploy to Cloud 66 without using Docker. Check out this article on deploying to Cloud 66 in the Codeship documentation.
