Automating CD pipelines with Jenkins - Part 2: Infrastructure CI and Deployments with Chef

Written by: Tracy Kennedy

This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Tracy Kennedy, solutions architect at CloudBees, about a presentation given by Dan Stine of the Copyright Clearance Center at JUC Boston.

In a world where developers are constantly churning code changes and Jenkins is building those changes daily, there is also a need to spin up test environments for those builds in an equally fast fashion.

Dan Stine

To respond to this need, we’re seeing a movement towards treating “infrastructure as code.” This goes beyond simple BAT files and shell scripts -- instead, “infrastructure as code” means that you can automate the configurations for ALL aspects of your environment, including the infrastructure and the operating system layers, as well as infrastructure orchestration with tools like Chef, Ansible and Puppet.

These tools’ automation scripts are version controlled like the application code, and can even be integrated with the application code itself.
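To make the idea concrete, here is a minimal, illustrative Chef recipe (not from the talk; the file names are hypothetical). The point is that the desired state of a server is declared in code that can live in version control alongside the application:

```ruby
# Hypothetical Chef recipe: declares the desired state of a web server.
# Chef converges the node to match this description on every run.
package "nginx" do
  action :install
end

service "nginx" do
  action [:enable, :start]
end

template "/etc/nginx/conf.d/app.conf" do
  source "app.conf.erb"   # rendered from the cookbook's templates/ directory
  notifies :reload, "service[nginx]"
end
```

Because the recipe is declarative, running it repeatedly is safe: Chef only makes changes when the node drifts from the described state.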

While configuration management tools date back to at least the 1970s, treating infrastructure code like application code is much newer and can be traced to CFEngine in the 1990s. Even then, these declarative configuration tools didn’t start gaining popularity until late 2011:

Job Trends Indeed

Infrastructure CI
This rise of infrastructure code has created a new use case for Jenkins: as a CI tool for an organization’s infrastructure.

At the 2014 Boston Jenkins User Conference, Dan Stine of the Copyright Clearance Center presented how he and his organization met this challenge. According to Stine, the Copyright Clearance Center’s platform efforts began back in 2011. They saw “infrastructure as code” as an answer to the plight of their “poor IT ops guy,” who was being forced to deploy and manage everything manually.
Stine compared the IT ops guy to the infamous “Brent” of The Phoenix Project: all of their deployments hinged on him, and he became overwhelmed by the load, making him the source of their bottlenecks.

To solve this problem, they set two goals to improve their deployment process:

  1. Reduce effort
  2. Improve speed, reliability and frequency of deployments

Jenkins and Chef
As for the tools to accomplish this, the organization specifically picked Jenkins and Chef: they were already familiar and comfortable with Jenkins, and knew both tools had good communities behind them. They also used Jenkins to coordinate with Liquibase to execute schema updates, since Jenkins is a good general-purpose job executor.

They installed the Chef client onto nodes they registered on their Chef server. The developers would then write code on their workstations and use tools like Chef’s “knife” to interact with the server.

Their Chef code was stored in GitHub, and they pushed their Cookbooks to the Chef server.

For Jenkins, they would give each application group their own Cookbook CI job and Cookbook release job, which would be run by the same master as the applications’ build jobs. The Cookbook CI jobs ran any time that new infrastructure code was merged.
They also introduced a new class of slaves, which had the RubyGems required by the Cookbook jobs installed, plus Chef configured with credentials for the Chef server.

Cookbook CI Jobs and Integration Testing with AWS
The Cookbook CI jobs first ran static analysis of the cookbooks’ JSON, Ruby and Chef syntax, followed by integration testing using kitchen-ec2 to spin up an EC2 instance in a way that would mimic the actual deployment topology for an application.
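A rough sketch of what such static syntax checks might look like in plain Ruby — these helpers are illustrative, not the talk’s actual scripts:

```ruby
require "json"

# Illustrative syntax checks, similar in spirit to the CI jobs' static analysis.
# json_valid? mimics validating a data bag file; ruby_syntax_valid? mimics a
# `ruby -c`-style check a lint step might run against recipe files.
def json_valid?(text)
  JSON.parse(text)
  true
rescue JSON::ParserError
  false
end

def ruby_syntax_valid?(source)
  RubyVM::InstructionSequence.compile(source)
  true
rescue SyntaxError
  false
end
```

In practice, Chef-specific tooling of the era, such as `knife cookbook test` or Foodcritic, would handle the cookbook-aware portion of the linting.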

Each EC2 instance was created from an Amazon Machine Image that was preconfigured with Ruby and Chef, and each instance was tagged for traceability purposes. Stine explained that they would also run chef-solo on each instance to avoid having to connect ephemeral nodes to their Chef server.
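A chef-solo run of this kind needs only a local configuration file rather than server credentials. A minimal, hypothetical `solo.rb` might look like this (the paths are illustrative):

```ruby
# Hypothetical solo.rb for a throwaway EC2 test instance: chef-solo reads
# cookbooks from local paths instead of contacting a Chef server.
cookbook_path ["/tmp/kitchen/cookbooks"]
file_cache_path "/tmp/kitchen/cache"
json_attribs "/tmp/kitchen/dna.json"   # node attributes supplied as a local file
```

The run would then be invoked with `chef-solo -c solo.rb`, which is what makes the ephemeral test nodes self-contained.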

Cookbook Release Jobs
The Cookbook release jobs, by contrast, were triggered manually. They ran the same tests as the CI jobs, but would also upload new Cookbooks to the Chef server.

Application Deployment with Chef
From a workstation, code would be pushed to the Chef repo on GitHub. This would then trigger a separate Jenkins master dedicated to deployments, which would pull the relevant data bags and environments from the Chef server. The deployment slaves held the SSH keys for the deployment nodes, along with the required gems and Chef configured with credentials.
Stine then explained the two deployment job types for each application:
  1. DEV deploy for development
  2. Non-DEV deploy for operations

Non-DEV jobs took an environment job parameter to define where the application would be deployed, while both job types took application group version numbers.

These deployment jobs would edit application data bags and application environment files before uploading them to the Chef server, find all nodes in the specified environment with the deploying app’s recipes, run the Chef client on each node, and send an email notification with the result of the deployment.
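The data bag edit step can be pictured as a small JSON rewrite before re-upload. The helper below is a hypothetical sketch, not code from the talk; the key and application names are invented for illustration:

```ruby
require "json"

# Hypothetical sketch of a deployment job's data bag edit: pin an application
# group's version in the data bag item before it is uploaded back to the
# Chef server (the upload itself would go through knife or the Chef API).
def pin_version(data_bag_json, app, version)
  item = JSON.parse(data_bag_json)
  item["versions"] ||= {}
  item["versions"][app] = version
  JSON.pretty_generate(item)
end
```

For example, `pin_version('{"id":"deploy"}', "billing-app", "1.4.2")` returns the item with `versions.billing-app` set to `1.4.2`, ready to hand off to the upload step.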

Click here for Part 1.



Tracy Kennedy
Solutions Architect

As a solutions architect, Tracy's main focus is reaching out to CloudBees customers on the continuous delivery cloud platform and showing them how to use the platform to its fullest potential. (A Meet the Bees blog post about Tracy is coming soon!) For now, follow her on Twitter.
