This is part of a series of blog posts in which various CloudBees technical experts have summarized presentations from the Jenkins User Conferences (JUC). This post is written by Tracy Kennedy, solutions architect, CloudBees, about a presentation given by Dan Stine, Copyright Clearance Center, at JUC Boston.
In a world where developers are constantly churning out code changes and Jenkins is building those changes daily, there is an equal need to spin up test environments for those builds just as quickly.
To respond to this need, we’re seeing a movement towards treating “infrastructure as code.” This goes beyond simple BAT files and shell scripts -- instead, “infrastructure as code” means that you can automate the configurations for ALL aspects of your environment, including the infrastructure and the operating system layers, as well as infrastructure orchestration with tools like Chef, Ansible and Puppet.
These tools’ automation scripts are version controlled like the application code, and can even be integrated with the application code itself.
While configuration management tools date back to at least the 1970s, treating infrastructure code like application code is much newer, traceable to CFEngine in the 1990s. Even then, these declarative configuration tools didn't start gaining real popularity until late 2011.
Infrastructure CI
At the 2014 Boston Jenkins User Conference, Dan Stine of the Copyright Clearance Center presented how he and his organization met this challenge. According to Stine, the Copyright Clearance Center’s platform efforts began back in 2011. They saw “infrastructure as code” as an answer to the plight of their “poor IT ops guy,” who was being forced to deploy and manage everything manually.
To solve this problem, they set two goals to improve their deployment process:
- Reduce effort
- Improve speed, reliability and frequency of deployments
Jenkins and Chef
As for the tools to accomplish this, the organization specifically picked Jenkins and Chef: they were already familiar and comfortable with Jenkins, and knew both tools had good communities behind them. They also used Jenkins to coordinate with Liquibase to execute schema updates, since Jenkins is a good general-purpose job executor.
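A Liquibase schema update driven from a Jenkins job might look like the following sketch. The changelog path, JDBC URL and credential variables are hypothetical placeholders, not details from the talk:

```shell
# Hypothetical Jenkins "Execute shell" build step running a Liquibase
# schema update; the changelog path, URL and credentials are placeholders.
liquibase \
  --changeLogFile=db/changelog/db.changelog-master.xml \
  --url="jdbc:postgresql://db.example.com:5432/appdb" \
  --username="$DB_USER" \
  --password="$DB_PASS" \
  update
```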
They installed the Chef client onto nodes they registered on their Chef server. The developers would then write code on their workstations and use tools like Chef’s “knife” to interact with the server.
Their Chef code was stored in GitHub, and they pushed their Cookbooks to the Chef server.
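The workstation-to-server workflow described above might look something like this from a developer's shell (cookbook and search terms are illustrative; it assumes knife is already configured against the Chef server):

```shell
# Typical knife interactions from a developer workstation, assuming
# knife is configured (e.g. via ~/.chef/knife.rb) against the Chef server.
knife node list                                  # nodes registered with the server
knife cookbook upload my_app                     # push a cookbook to the Chef server
knife ssh 'role:webserver' 'sudo chef-client'    # converge matching nodes over SSH
```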
For Jenkins, they would give each application group its own Cookbook CI job and Cookbook release job, which would be run by the same master as the applications' build jobs. The Cookbook CI jobs ran whenever new infrastructure code was merged.
Cookbook CI Jobs and Integration Testing with AWS
The Cookbook CI jobs first ran static analysis, checking the syntax of the JSON, Ruby and Chef code, followed by integration testing using the kitchen-ec2 plugin to spin up an EC2 instance in a way that would mimic the actual deployment topology for an application.
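A sketch of what such a CI job's steps could look like. The specific tools and cookbook name are assumptions based on common Chef practice of the era, not confirmed from the talk:

```shell
# Hypothetical Cookbook CI job steps: syntax checks, then integration tests.
# JSON syntax check on data bags and other JSON files
for f in $(find . -name '*.json'); do
  ruby -rjson -e "JSON.parse(File.read('$f'))"
done
# Ruby syntax check on recipes, libraries, etc.
find . -name '*.rb' -exec ruby -c {} \;
# Chef cookbook syntax check (cookbook name is a placeholder)
knife cookbook test my_app
# Integration test: kitchen-ec2 spins up an EC2 instance, converges
# the cookbook on it, then destroys the instance
kitchen test
```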
Each EC2 instance was created from an Amazon Machine Image that was preconfigured with Ruby and Chef, and each instance was tagged for traceability purposes. Stine explained that they would also run chef-solo on each instance to avoid having to connect ephemeral nodes to their Chef server.
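Running chef-solo on the instance, rather than the regular Chef client, means the ephemeral node never has to register with the Chef server. A minimal sketch, with assumed file paths:

```shell
# Converge an ephemeral EC2 instance with chef-solo (paths are assumed);
# the node applies its run list locally without registering with the server.
chef-solo -c /etc/chef/solo.rb -j /etc/chef/node.json
```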
Cookbook Release Jobs
The Cookbook release jobs were, by contrast, triggered manually. They ran the same tests as the CI jobs, but also uploaded new Cookbooks to the Chef server.
From a workstation, code would be pushed to the Chef repo on GitHub, which would then trigger a separate Jenkins master dedicated to deployments. This deployment master would pull the relevant data bags and environments from the Chef server, while the deployment slaves kept the SSH keys for the deployment nodes, along with Chef, the required gems and the necessary credentials. There were two kinds of deployment jobs:
- DEV deploy for development
- Non-DEV deploy for operations
Non-DEV jobs took an environment job parameter to define where the application would be deployed, while both kinds took application group version numbers as parameters.
These deployment jobs would edit application data bags and application environment files before uploading them to the Chef server, find all nodes in the specified environment with the deploying app’s recipes, run the Chef client on each node and send an email notification of the result of the deployment.
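The deployment steps above can be sketched as knife commands. The data bag, environment and search terms are hypothetical stand-ins for whatever the jobs actually scripted:

```shell
# Sketch of a deployment job's Chef interactions; names are placeholders.
knife data bag from file apps my_app.json          # upload the edited data bag
knife environment from file environments/qa.json   # upload the environment file
# Find all nodes in the target environment running the app's recipe
knife search node 'chef_environment:qa AND recipes:my_app' -i
# Run the Chef client on each matching node to deploy
knife ssh 'chef_environment:qa AND recipes:my_app' 'sudo chef-client'
```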
Click here for Part 1.
Tracy Kennedy
Solutions Architect
CloudBees