Jenkins, Workflow and Gerrit: Putting the Pieces Together


Today you’ll find out how the workflow plugin for Jenkins can simplify an otherwise complex CI pipeline, based on a real-world Jenkins customer doing mobile development.  During early versions of workflow, plugin support was limited, but a significant investment of developer effort has broadened compatibility, enabling workflow to simplify life for many teams.

Specifically:

  • Workflow support in the Gerrit Trigger plugin (added in version 2.15)
  • Workflow failFast for parallel steps (workflow version 1.3)
  • Broader compatibility enhancements

Out of the box, it now enables functionality that previously needed clever and complex job structuring.

The situation:

  • Larger codebase that builds for multiple platforms simultaneously (Linux, Android, and Windows)
  • Codebase lives in a Gerrit server, which provides security (ACLs/roles) and code review
  • To manage multiple code repositories, they use repo, which is similar to git submodules
  • Extensive sets of tests run against the Linux build result, including longer automation tests that run all night
  • Builds and tests should submit results ("votes") to Gerrit so the developer can see everything in one place
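The Gerrit Trigger plugin posts these votes automatically, but for reference, a vote is just a Gerrit review submitted over SSH. A hand-rolled equivalent (the hostname, port, and message here are illustrative assumptions for the demo setup) would look like:

```shell
# Hypothetical manual equivalent of the plugin's automatic vote:
# mark the triggering patchset as Verified +1 in Gerrit
ssh -p 29418 jenkins@gerrit gerrit review \
    --verified +1 \
    --message "'Build successful'" \
    $GERRIT_CHANGE_NUMBER,$GERRIT_PATCHSET_NUMBER
```

The plugin does the same thing on your behalf, using the vote values configured in the trigger settings.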

Here’s what the overall infrastructure looks like (in Dockerized demo form).  Note that repo has its own repository containing manifest files, which describe the other repositories; the client-side repo tool uses a manifest to orchestrate checkouts.
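For context, a repo manifest is a small XML file listing the repositories to check out. A minimal sketch for a setup like this one (the project names and paths are illustrative assumptions, not the demo's actual manifest):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <!-- Where the git repositories live (the demo's Gerrit server) -->
  <remote name="gerrit" fetch="http://gerrit:8080"/>
  <default remote="gerrit" revision="master"/>
  <!-- One <project> entry per repository the build needs -->
  <project name="primary" path="primary"/>
  <project name="secondary" path="secondary"/>
</manifest>
```

`repo sync` reads this file and clones each listed project into its path, which is how a single command can assemble the whole codebase.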

[Figure: demo infrastructure diagram]

Nothing terribly unconventional, but for CI, speed matters!  Nothing kills developer productivity more than pushing a bad commit and not seeing the problem until all parts of a long build/test cycle finish… the next day.  In this case, any failure in the pipeline counts as a failure for the code change.

So, how do we achieve fast feedback?  Parallel execution of steps will reduce the total time to the slowest component.
Here’s what our job structure will look like, with parallel steps enabled:
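A minimal sketch of that structure in workflow terms (the node labels and build commands are placeholder assumptions, not the demo's actual commands):

```groovy
// Each branch runs concurrently on its own node, so the wall-clock
// time is roughly that of the slowest branch, not the sum of all three.
parallel(
    'linux':   { node('linux')   { sh 'make linux'   } },
    'android': { node('android') { sh 'make android' } },
    'windows': { node('windows') { bat 'build.bat'   } }
)
```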

We must go faster, though!  What if we want to send feedback to the developer *as soon* as any failure occurs?  That calls for *fail-fast* functionality: report CI results the moment any step fails.  Additionally, we need to make sure there are no orphaned builds; executors and test environments should not stay tied up after one component of the build has failed (automation tests can be expensive to run!).  As one added fillip, Gerrit sends notifications about revised patchsets, and we’d like to kill and rerun the entire flow if someone pushes a new revision of a patchset while the previous one is still being built.

Unfortunately, freestyle builds do not make this straightforward: traditional job structures to achieve this require 10 separate jobs chained together, and to work with a new branch one has to copy the whole job hierarchy and change branch references. 

So, let’s demonstrate how we can do this with workflow.  This demonstration is publicly available on GitHub, with instructions for running it via Docker and Docker Compose (Docker Machine/boot2docker supported).

As with any recipe, there are some ingredients:

  • Jenkins
  • Configuration for Gerrit
  • Configuration for Jenkins

Here’s what the workflow looks like:

// Run Maven with the given args on the current node
def mvn(args) {
    sh "${tool 'Maven 3.x'}/bin/mvn ${args}"
}

def fetch_repo() {  // Pull down the projects with repo
    sh 'repo init -u http://gerrit:8080/umbrella -m jenkins.xml'
    sh 'repo sync'
    sh "repo download $GERRIT_PROJECT $GERRIT_CHANGE_NUMBER/$GERRIT_PATCHSET_NUMBER"
}

def builds = [:]
builds['workflowrun'] = {
  stage 'building'
  node {
    sh 'rm -rf source'
    // From workflow 1.11 on, this cleanup can be replaced with the deleteDir step
    dir ('source') {
      fetch_repo()
      mvn("clean compile install -f primary/pom.xml")
      mvn("clean compile install -Dmaven.test.skip -f secondary/pom.xml")
      sh "mv */target/*.jar ."
      stash includes: '*.jar', name: 'jars'
    }
  }

  def slowtests = [:]
  slowtests['Functional Tests'] = {
    node {
      // Fetch both artifacts
      unstash name: 'jars'
      sleep 2

      // Verify both jars can run successfully
      sh 'java -jar primary*.jar -delay 1 --length 100'
      sh 'java -jar secondary*.jar'
    }
  }
  slowtests['Integration tests'] = {
    node {
      sleep 15
      unstash name: 'jars'
      sh 'java -jar primary*.jar `java -jar secondary*.jar`'
    }
  }
  slowtests['failFast'] = true
  parallel slowtests
}

// PARALLEL BUILD STEP
builds['parallelbuild'] = {
  stage 'building'
  build job: 'freestylebuild', parameters: [[$class: 'StringParameterValue', name: 'sample', value: 'val']]
}

builds['failFast'] = true
parallel builds

As you can see, we’re running a slightly simplified version here (only two platforms).

A couple useful points:

  1. The recent Parallelism and Distributed Builds post gives more context on many of these steps
  2. Note that we can define functions for build steps. This makes it easy to construct complex builds without repeating ourselves
  3. We can nest parallel steps within other parallel steps
  4. It’s quite easy to build freestyle jobs from the workflow (if some functionality will stay in freestyle builds)
    1. But given that a full build job can become just a few lines of code in the workflow… why not inline it?
  5. Stash/unstash offers an easy way to pass artifacts to different steps within the workflow. Copy artifact is another option, if you wish to pass to other jobs.
  6. Parallel steps have 'failFast' enabled; this ensures that any failure kills the entire pipeline and all triggered jobs
  7. Stages structure the pipeline, and you can limit how many concurrent builds may occupy a given stage (to throttle resource use)
  8. CloudBees offers a workflow stage view plugin that provides an elegant view of the build pipeline
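On point 7, the stage throttle is declared inline with the stage step. A minimal sketch, assuming a hypothetical deploy stage (the stage name and deploy script are illustrative):

```groovy
// At most one build at a time may occupy this stage; when a newer
// build reaches it, older builds still waiting here are superseded.
stage name: 'deploy', concurrency: 1
node {
    sh './deploy.sh staging'
}
```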

Here’s what the trigger setup for the job looks like:

[Screenshot: Gerrit trigger configuration for the workflow job]

All you have to do to change the branch here is change a line in the fetch_repo function, or the trigger branch. 
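For example, fetch_repo could take the branch as an argument; a hypothetical variant of the function above, using repo init's -b flag to select the branch:

```groovy
// Hypothetical variant: pass the branch in, so one workflow
// script can serve any number of branches
def fetch_repo(String branch) {
    sh "repo init -u http://gerrit:8080/umbrella -b ${branch} -m jenkins.xml"
    sh 'repo sync'
    sh "repo download $GERRIT_PROJECT $GERRIT_CHANGE_NUMBER/$GERRIT_PATCHSET_NUMBER"
}
```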

So there you have it: 10 jobs reduced to 2, and in most cases you can inline the entirety of the pipeline into the same workflow script!  

Samuel Van Oort
​Software Engineer
CloudBees

 
