Exploring Codeship Pro
UPDATE: On January 1st, 2017, we rebranded our hosted CI platform for Docker from “Jet” to what is now known as “Codeship Pro”. Please be aware that the name “Jet” is now only used for our local development CLI tool. The Jet CLI is used to locally debug and test builds for Codeship Pro, as well as to assist with several important tasks like encrypting secure credentials.
UPDATE: We hosted a webinar at the end of February walking in detail through Jet's main features. You can watch a video recording of the webinar at the end of this post.
In our first blog post, we introduced the thoughts and plans behind our new Docker-based system, Jet. If you haven’t read it yet, take a look at that post first so you fully understand what we’re trying to achieve with our new infrastructure.
Today, we’re going to walk you through the system in more detail so you can fully see the power that our new infrastructure can give you and your team for your builds.
We’ll start by discussing the web UI, then we'll walk through how to build production containers so they're clean and only have what's necessary for running in production. We'll test those containers via an integration test, and finally, we'll walk through deployment with or without containers, and how you can selectively run deployments on specific branches to get to what we call repository-driven infrastructure.
The Web UI
For our new system, we’ve built a completely new web UI. The goal of the UI is to make all the information of your build easily accessible, while not drowning you in it.
As you can see in the following image, the dashboard on the left contains general metadata about the build, the list of Docker containers used during the build, and a visualization of the build steps.
Click on the name of a Docker container to see the whole log output that was written when the container was created. This is super convenient: it allows you to inspect not just the output of your test command but also the setup of the whole infrastructure.
The same is true for a build step, of course. We store the output from the command you’re running for a specific step, but we also store all log output from every other service used during that step. This helps with debugging failing tests: you see all logs in chronological order and can quickly get an overview of where a particular problem in the build might be coming from.
If you set up steps to only run on specific branches, which we’ll describe in more detail later, they will be grayed out, along with additional information about why they're not running. All log output is shown in real time while the build is running.
After this brief introduction to the web UI, let’s see how we can build minimal Docker containers for deployment to production.
Building a Minimal Docker Container
To build a Docker container for production release, you want to split the process into two parts. First, build all the artifacts in a separate container that has all your build tools installed. Then make those artifacts available while building the actual production container, so it can put them in the right place without needing the build tools installed.
We accomplish this in Codeship through volumes. The following example sets up a compile container that can write artifacts back to the repository folder, as well as a production container.
compiledemo:
  build: .
  volumes:
    - ./tmp:/artifacts
production:
  build: .
  dockerfile_path: Dockerfile.production
The compiledemo container now has access to the tmp folder of the codebase on the build machine. Any artifact it writes to /artifacts at runtime can be used in the production container build later on. You can also take a look at the volumes documentation to see other use cases.
Now let's look at the codeship-steps.yml file that we would use to run this build.
- type: parallel
  steps:
    - service: compiledemo
      command: ./scripts/compile_application.sh
    - service: compiledemo
      command: ./scripts/compile_static_assets.sh
- service: production
  type: push
  image_name: username/repository_name
  registry: https://index.docker.io/v1/
  encrypted_dockercfg_path: dockercfg.encrypted
As you can see, we’re starting with a parallel step because we want to compile various pieces of our application in parallel to make the build faster. In this instance, one step is compiling our application code, and the other is compiling our assets. Both containers have access to the /artifacts volume and can therefore write artifacts back to the repository folder on the host system.
Some teams go even further and build those artifacts while their tests are running. They can then build and deploy their container immediately after the tests pass, instead of waiting for the artifacts to be built afterwards. Those are just some of the many options our new workflow system gives you.
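As a rough sketch of that idea, a parallel group could run the test suite and the compile commands side by side, with the push step only starting once everything in the group has finished successfully. The test script name below is a placeholder, not something from the example above:

- type: parallel
  steps:
    - service: compiledemo
      command: ./scripts/run_unit_tests.sh        # placeholder test script
    - service: compiledemo
      command: ./scripts/compile_application.sh   # writes artifacts to /artifacts
- service: production
  type: push
  image_name: username/repository_name
  registry: https://index.docker.io/v1/
  encrypted_dockercfg_path: dockercfg.encrypted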
Once both steps have finished successfully, we hit the push step. Because the production container hasn’t been built before, a Docker build will be started. This build can copy the previously created artifacts into the production container. As soon as the container is created, it will be pushed to Docker Hub. To read more about pushing Docker containers, check out our Docker push tutorial.
Now that we have a finished container that we can push to production, we also want to introduce acceptance testing to make sure this container works as expected.
Integration Testing Production Containers
Because you can fully control the build infrastructure, setting up an integration test is very easy. The following is a codeship-services.yml file that will use the production container we’ve just built and link it to another container that can run our integration tests.
compiledemo:
  build: .
  volumes:
    - ./tmp:/artifacts
production:
  build: .
  dockerfile_path: Dockerfile.production
integration_test:
  build:
    image: demo/integrationtest
    dockerfile_path: Dockerfile.integrationtest
  links:
    - production
So now we can use the integration_test container to test the production container, since the link makes it reachable under the hostname production. We could use tools like Selenium or CasperJS to run browser tests against it or, if it's an API, send API tests to it and make sure everything works as expected.
This integration testing setup gives you a very powerful way to make sure all your systems are starting and working as expected before they are promoted out of CI into production. To start the testing process, you simply call the command in the integration_test container that will run the tests against the production container.
- service: integration_test
  command: ./scripts/test/start_integration_tests.sh
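Putting the pieces together, the full steps file for this walkthrough could look roughly like the sketch below. The ordering is what matters: the push step only runs once the compile steps and the integration tests have finished successfully.

- type: parallel
  steps:
    - service: compiledemo
      command: ./scripts/compile_application.sh
    - service: compiledemo
      command: ./scripts/compile_static_assets.sh
- service: integration_test
  command: ./scripts/test/start_integration_tests.sh
- service: production
  type: push
  image_name: username/repository_name
  registry: https://index.docker.io/v1/
  encrypted_dockercfg_path: dockercfg.encrypted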
Now that we have a container that's ready for production, let’s look into deploying Docker containers and generally how deployment works.
Setting Up Deployment for Branches
One of the most important features we have always supported on Codeship is the ability to define deployments on specific branches. This enables your team to work on feature branches, while any merge into master or a designated production branch triggers the deployment. It lets your developers focus on the product and code instead of worrying about the details of deployment every day.
With our new system, we’ve generalized this ability. You can now run any kind of step on selected branches. Deployment is still the most important use case for this of course, but there are many other processes you can support through this as well.
The following example limits the step to the master branch by tagging it.
- service: integration_test
  command: ./scripts/deploy_me.sh
  tag: master
After implementing this feature, we decided that we wanted to make this even more powerful by giving you the ability to define a regex for the tag. The following example will only run whenever we’re on a branch (or git tag, as we handle branches and tags the same way) that starts with qa/.
- service: integration_test
  command: ./scripts/deploy_me.sh
  tag: ^qa/.*$
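Combining plain branch tags with regex tags lets a single steps file drive several environments. A rough sketch might look like this, with the deployment scripts standing in for whatever tooling you use:

- service: integration_test
  command: ./scripts/deploy_staging.sh      # placeholder deployment script
  tag: ^staging/.*$
- service: integration_test
  command: ./scripts/deploy_production.sh   # placeholder deployment script
  tag: master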
Tagging steps to run only on specific branches, as well as the ability to use regexes, makes our already pretty powerful workflow system even more so. You can also take a look at the step documentation for all the details.
Now let's look at an example of deploying with our new Docker-based system.
Deploying a Docker Container
One of the goals of our new system was to make building and deploying Docker containers incredibly easy. To achieve this, we’ve implemented a specific Docker push step that allows you to deploy your Docker containers. You can set the image name as well as the tag that's used for pushing the container to the registry.
Through the encrypted_dockercfg_path config value, you can set up authentication to push into your private registry (or a private account in a registry service like Docker Hub or quay.io).
- service: production
  tag: master
  type: push
  image_name: codeship/aws-deployment
  registry: https://index.docker.io/v1/
  encrypted_dockercfg_path: dockercfg.encrypted
  image_tag: "{{ .CommitID }}"
Check out our Docker push tutorial for all the details on deploying your containers.
Deploying Anything with Codeship
Because you have full control over the build environment, including which tools are installed on the build containers, you can deploy using any tools. As long as it runs on Linux and can be installed in a Docker container, you can deploy with it.
To get started with common deployments, we’ve built a couple of open-source containers. You can pull in those containers and deploy to the various services.
To use the Heroku container, for example, you’d set up the following services file:
herokudeployment:
  image: codeship/heroku-deployment
And the corresponding steps file that runs the deployment command:
- service: herokudeployment
  command: codeship_heroku deploy /app/test my-heroku-app
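Note that the Heroku container needs credentials to authenticate against Heroku. One way to provide them is an encrypted environment file in the services definition; the file name below is a placeholder, and we're assuming the container reads the API key from an environment variable like HEROKU_API_KEY, so check the container's documentation for the exact variable it expects:

herokudeployment:
  image: codeship/heroku-deployment
  encrypted_env_file: heroku.env.encrypted   # placeholder file name, holds the Heroku API key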
It's important to understand that while we provide those containers, you can include any other container in your build as well.
Our Docker-based system is essentially a plugin system that allows you to plug in any Docker container and run your commands inside those containers. It doesn’t matter if the container was created by us, another developer, or if you built the container yourself to have a standard container for different actions in your build, such as deployment or notifications.
You have the total freedom to include them and use them in the way you want.
Conclusions
Our new Docker-based system, Jet, gives you a lot of flexibility in terms of setting up your build environment and your build workflow. We want you to have total control over every part of the build so you feel safe to release to production after running the build on Codeship.
If you want to take our new system for a spin, sign up for a free 14-day Jet trial or check out the Getting Started Guide in our Codeship documentation. Feel free to watch a short 9-minute Codeship demo video here.
Webinar Recording
Check out starting points and resources for Codeship Jet on our webinar page here.
Working with Codeship Jet - The Codeship CI Platform for Docker from Codeship on Vimeo.