A Tour of the CloudBees Jenkins Platform: Cluster-Wide Job Triggers

Written by: Andrew Bayer

I recently started here at CloudBees, and had the chance to really dig into what the CloudBees Jenkins Platform has to offer. I’d been absorbed in what I could do with the open source Jenkins releases and hadn’t really taken the time to look into the CloudBees Jenkins Platform in detail. Well, there’s a lot - honestly, more than you might think! This is the first in a series of posts on features of the CloudBees Jenkins Platform that I think are really exciting, powerful and/or useful. One of my favorite features isn’t a huge one, but it’s a real improvement over what you can do without CloudBees Jenkins Operations Center: cluster-wide job triggers, introduced in the recent 2015.11 release of the CloudBees Jenkins Platform.

CloudBees Jenkins Operations Center allows you to manage multiple Jenkins controllers simultaneously, and cluster-wide job triggers are built on top of that functionality to enable cross-controller orchestration. They allow a job on one controller to trigger a job on another controller, with or without parameters, and then either wait for that remote job to complete or fire and forget. This is an amazingly useful piece of functionality when you’re spreading your jobs across multiple controllers but the projects on those controllers aren’t entirely self-contained. It also works for triggering jobs on the same controller, like the classic Parameterized Trigger plugin, but it does so with a unified UI for triggering jobs on other controllers as well as the controller you’re currently on, and with a much better user experience than the classic plugin. Let me show you how it works.
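To make the cross-controller idea concrete, here’s a minimal sketch of the kind of glue code you’d otherwise write yourself against Jenkins’ standard remote access API. The controller URL, job name, credentials and parameters below are all placeholders, and this is only an illustration of the manual alternative, not how the CloudBees build step works under the hood:

```python
# A minimal sketch (not the CloudBees implementation) of manually triggering
# a parameterized job on another Jenkins controller via the standard remote
# access API. URL, job name, credentials and parameters are placeholders.
import requests

REMOTE_CONTROLLER = "https://jenkins-team-b.example.com"   # the other controller
JOB = "integration-tests"                                  # the job to trigger there
AUTH = ("automation-user", "api-token")                    # a user and its API token

# POSTing to .../buildWithParameters queues a build with the given parameters;
# Jenkins replies 201 and puts the new queue item's URL in the Location header.
resp = requests.post(
    f"{REMOTE_CONTROLLER}/job/{JOB}/buildWithParameters",
    params={"GIT_BRANCH": "main", "RUN_SLOW_TESTS": "true"},
    auth=AUTH,
)
resp.raise_for_status()
print("Queued:", resp.headers.get("Location"))
```

Depending on the remote controller’s CSRF settings, a manual approach like this may also need a crumb header, plus credentials managed for every controller you talk to - bookkeeping you don’t have to worry about with the build step.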

To get started, add the “Trigger builds on remote projects” build step to your job. One thing to note - just because it says “remote projects” doesn’t mean it won’t work for calling jobs on the same controller! Once you’ve added the build step, you first choose the controller the job you want to trigger lives on. Then you choose from the available top-level jobs and folders on that controller, and drill down until you find the job you want to run. I’m a big fan of this UI - it’s a lot easier to navigate to the job you need than with any of the other options in this area.

Once you’ve chosen the job to run, you need to decide whether the remote job will always be triggered, or only in certain situations. This can be handy when the remote job is something like a cleanup job for a test environment or a cluster - even if the main job fails, you probably still want the cleanup to run so the environment can be used by another build or job.
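If you were scripting this yourself, that “only trigger in some situations” setting boils down to a result check before firing the downstream job. Here’s a rough sketch, where every name, URL and the allowed-results set is made up for illustration:

```python
# A rough sketch of conditional triggering: only fire the downstream job for
# certain calling-build results. Names, URL and results are placeholders.
import requests

CONTROLLER = "https://jenkins-team-a.example.com"
CLEANUP_JOB = "test-env-cleanup"
AUTH = ("automation-user", "api-token")

def maybe_trigger_cleanup(calling_build_result):
    # Clean up for nearly any outcome - even a failed main build should free
    # the test environment for whoever needs it next.
    if calling_build_result in {"SUCCESS", "UNSTABLE", "FAILURE"}:
        resp = requests.post(f"{CONTROLLER}/job/{CLEANUP_JOB}/build", auth=AUTH)
        resp.raise_for_status()

maybe_trigger_cleanup("FAILURE")   # cleanup still runs after a failed build
```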

What really makes this functionality fantastic for build pipelines, like the classic Parameterized Trigger plugin before it, is the ability to control how your calling job treats the downstream job. If the downstream job is at the end of your pipeline, or for whatever reason your calling job doesn’t need to wait for the downstream job to finish before continuing, you can use the “Fire and Forget” option. On the other hand, if you need to wait for the downstream job to complete before proceeding, you can block on that, setting the calling job’s status based on the result of the downstream job.

But what’s different, and dramatically more powerful, here are the more detailed “Wait until…” options. These options allow you to configure the calling job to, for example, wait for the downstream job to get scheduled or start, not just wait ’til it finishes. You can ensure that the downstream job has kicked off, even if you don’t want to wait until it completes before proceeding to the next step in the calling job. In those cases, just like when “waiting for completion,” you can set timeouts such that the calling job doesn’t wait too long for a downstream job that hasn’t launched. Additionally, the “Track progress and wait until finished” option combines all three of those forms of waiting, letting you make sure the downstream job gets scheduled, started *and* completed.
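By hand, “wait until started” and “wait until finished” amount to polling loops against the queue item and the build. The sketch below is only an illustration of what those checkboxes take care of for you - the credentials, timeouts and polling intervals are placeholders:

```python
# A rough sketch of "wait until started" / "wait until finished" done by hand
# against Jenkins' remote access API. Credentials, timeouts and polling
# intervals are placeholders.
import time
import requests

AUTH = ("automation-user", "api-token")

def wait_for_start(queue_item_url, timeout=300, poll=5):
    """Poll the queue item until the scheduled build actually starts."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        item = requests.get(f"{queue_item_url.rstrip('/')}/api/json", auth=AUTH).json()
        executable = item.get("executable")   # appears once the build has started
        if executable:
            return executable["url"]          # URL of the running build
        time.sleep(poll)
    raise TimeoutError("downstream build was scheduled but never started")

def wait_for_result(build_url, timeout=3600, poll=15):
    """Poll the running build until it completes, then return its result."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        build = requests.get(f"{build_url.rstrip('/')}/api/json", auth=AUTH).json()
        if build.get("result"):               # "SUCCESS", "FAILURE", "ABORTED", ...
            return build["result"]
        time.sleep(poll)
    raise TimeoutError("downstream build did not finish in time")
```

If you also want the calling job’s status to follow the downstream result, you’d fail the calling build whenever the returned result isn’t “SUCCESS” - which is exactly what the blocking options do for you.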

Of course, this couldn’t be a real successor to the Parameterized Trigger plugin if you weren’t able to specify parameters for your downstream job. Cluster-wide job triggers support exactly that - there are currently five ways to specify your parameters:

  • Boolean parameters

  • String parameters

  • The current build’s parameters

  • An evaluated Groovy expression, using the Token Macro plugin

  • The ability to fan out a set of values for a string parameter, triggering one downstream build for each value (see the sketch below)

The cluster-wide job trigger doesn’t currently support additional parameter types contributed by plugins, but in most cases, these options will do the trick for you nicely.
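To picture the fan-out option, here’s roughly what it amounts to if you scripted it yourself: one triggered build per value of the same string parameter. The controller URL, job name, parameter name and values are all placeholders for illustration:

```python
# A rough sketch of the fan-out idea: trigger one downstream build per value
# of a string parameter. Controller URL, job name and values are placeholders.
import requests

REMOTE_CONTROLLER = "https://jenkins-team-b.example.com"
JOB = "browser-tests"
AUTH = ("automation-user", "api-token")

for browser in ["firefox", "chrome", "safari"]:
    resp = requests.post(
        f"{REMOTE_CONTROLLER}/job/{JOB}/buildWithParameters",
        params={"BROWSER": browser},   # same parameter name, different value per build
        auth=AUTH,
    )
    resp.raise_for_status()
    print(f"Queued {JOB} with BROWSER={browser}")
```

With the build step, that whole loop is a single parameter entry in the job configuration.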

Now, some caveats: cluster-wide job triggers are a new feature, and so they’re not perfect yet. As I mentioned above, they’re limited in terms of the kinds of parameters they can pass between projects, and they may have other limitations I just haven’t run into yet. Nothing’s perfect, after all, and that’s even more true of new software features than most things. But so far, I’ve absolutely loved this feature - it’s a more intuitive, multi-controller-capable Parameterized Trigger plugin. As you may have seen in my “Seven Habits of Highly Effective Jenkins Users” talk, I swear by the power that the Parameterized Trigger plugin gives you. If you’re already using the CloudBees Jenkins Platform, give cluster-wide job triggers a shot - and if you’re not using the CloudBees Jenkins Platform, well, give it a try!

Andrew Bayer
Jenkins Evangelist
CloudBees
