Cross Team Collaboration (Part 1)

Written by: Michael Cirioli
6 min read

(Part 1 of a two-part blog post. See part 2.)

Are you interested in eliminating or reducing the manual handoffs between teams? In this two-part blog post, learn about Cross Team Collaboration and the benefits of using pipeline triggers to connect jobs from different teams. Part 2 of the blog takes the discussion a step further with JMESPath, which lets users add sophisticated logic when connecting jobs together.

Cross Team Collaboration simplifies the cumbersome and complicated task of triggering downstream jobs by eliminating the need to identify and maintain the full path to every downstream job. Simply put, this proprietary feature connects pipelines, increasing automation and collaboration. Prior to this feature, the details of every downstream job (Jenkins instance ID, full job name, Git branch name) had to be meticulously specified in the upstream job. If a job name changed, the upstream job had to be refactored, creating a maintenance burden and discouraging the adoption of event-based triggers.

Now the upstream job/team can publish a message or event describing what has been produced, and downstream jobs/teams need only subscribe to the relevant event or message. With the addition of JMESPath, you can apply conditional logic so that downstream jobs are executed based on the details of an event. Events are no longer tied to brittle identifiers like job names, and the upstream job no longer cares which jobs are listening. A great use case for this feature: the development team’s upstream build job produces an artifact and publishes an event, which automatically triggers multiple QA pipelines to commence automated testing.

How event-based job triggers work

A pipeline can publish an event based on a simple messaging model. This allows your pipeline job to broadcast messages that other jobs can listen for. When a job receives a notification for a message it is subscribed to, that pipeline is triggered. The content of the message can be a simple text string (e.g. “com.example:supply-chain-lib:1.1-SNAPSHOT:jar”) or a complex JSON object, which can then be queried using a JSON query language called “JMESPath”.
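For example, a richer event might carry structured details about the build as JSON. As a preview of Part 2, here is a one-line sketch using the jsonEvent variant of the publish step; the payload fields shown are purely illustrative:

publishEvent jsonEvent('{"artifact": "supply-chain-lib", "version": "1.1-SNAPSHOT", "type": "jar"}')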

Getting started

To take advantage of this new feature, ensure that the latest versions of the following plugins are installed on each Jenkins controller that wants to publish or subscribe to events:

  • notification-api (required): responsible for publishing events and their messages across teams and jobs.

  • pipeline-event-step (required): provides a “Pipeline step” to add a new event publisher or trigger. This allows a job to publish and/or subscribe to events and use them to trigger custom pipeline steps.

  • operations-center-messaging (optional): provides the event router/bus to wire events/messages across different teams. This is required if you want to implement cross team collaboration in a distributed manner. You can still benefit from the Cross Team Collaboration feature without this plugin by using local-only mode, which allows you to trigger events across different jobs inside the same team, but not across your organization.

Once you have the plugins installed, enable notifications by navigating to “Manage Jenkins” → “Configure Notification”. Click the checkbox to enable notifications, then select either “Local only” if you want events/messages to be broadcast within a single team, or “Operations Center Messaging” to make events/messages available across your organization.

Simple events

Let’s begin by creating a job that publishes a simple event about the build of a software component called supply-chain-lib, in this case a Maven artifact called com.example:supply-chain-lib:1.1-SNAPSHOT:jar.

When this job is built, we would like to broadcast a message to all subscribed jobs that depend on this library, notifying them about the new build/artifact. To do this, we can use a “simpleEvent”, whose value is just a string; in this example, that string is the set of Maven coordinates for the artifact.

The pipeline syntax for publishing a simple event is:

publishEvent simpleEvent('com.example:supply-chain-lib:1.1-SNAPSHOT:jar')

Every time this job is built, it will broadcast the message “com.example:supply-chain-lib:1.1-SNAPSHOT:jar” to all subscribed listeners. A downstream job can listen for this event by creating an event trigger that listens for the string published by the upstream job. The easiest way to do this is with a “simpleMatch”, which fires on any event that exactly matches the string it is looking for. To do this, define a trigger that matches whenever it sees a specific string in the event:

eventTrigger simpleMatch('com.example:supply-chain-lib:1.1-SNAPSHOT:jar')

Putting these together, the declarative pipeline script for publishing the event is:

Declarative pipeline for the “supply-chain-lib” component

pipeline {
    agent any
    stages {
        stage('build') {
            steps {
                echo 'building com.example:supply-chain-lib:1.1-SNAPSHOT:jar'
            }
        }
        stage('notify') {
            steps {
                echo 'a new build of com.example:supply-chain-lib:1.1-SNAPSHOT:jar succeeded!'
                // broadcast the artifact coordinates to all subscribed listeners
                publishEvent simpleEvent('com.example:supply-chain-lib:1.1-SNAPSHOT:jar')
            }
        }
    }
}

The declarative pipeline script for a job that wants to listen for this event (e.g. “supply-chain-portal”) might look like this:

pipeline {
    agent any
    triggers {
        // start this job whenever a matching event is published
        eventTrigger simpleMatch('com.example:supply-chain-lib:1.1-SNAPSHOT:jar')
    }
    stages {
        stage('build') {
            steps {
                echo 'build of com.example:supply-chain-lib:1.1-SNAPSHOT:jar was seen!'
            }
        }
    }
}

There are a few points worth mentioning before trying the above examples:

  • You need to build the listening job manually at least once in order to register the trigger and receive events; this is due to the design of the Pipeline plugin. If you change the value the trigger listens for, you will need to manually build the project once more to ensure the trigger change is registered.

  • Our publishing example uses hard-coded content that would need to be manually updated if, for example, you wanted to reflect a change in the artifact version. This could be enhanced by using a dynamic pipeline variable as the value for your simpleEvent, as shown in the sketch after this list.

  • Similarly, the downstream job in our example can only listen for one specific string. If, for example, we want to trigger on any version of the supply-chain-lib artifact, it would be better to define the trigger with a JMESPath query expression (more on that in Part 2).
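To illustrate the second point above, here is a minimal sketch of a publisher that builds the event message dynamically rather than hard coding it. The LIB_VERSION environment variable is illustrative; a real job might derive the version from the pom or a build parameter:

pipeline {
    agent any
    environment {
        // illustrative value; a real job might read this from the pom or a parameter
        LIB_VERSION = '1.2-SNAPSHOT'
    }
    stages {
        stage('notify') {
            steps {
                // Groovy string interpolation builds the event message from the variable
                publishEvent simpleEvent("com.example:supply-chain-lib:${env.LIB_VERSION}:jar")
            }
        }
    }
}

Keep in mind that a downstream simpleMatch trigger must match the resulting string exactly, which is where the JMESPath queries covered in Part 2 come in.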

Conclusion

In part one of this series, we learned how upstream jobs can publish a message that downstream jobs can use to trigger their builds. Although simple in design, this can be a powerful tool for reducing the overhead of managing dependencies between builds and for reducing manual handoffs between teams.
