AI, DevOps, and Employment Law: Thinking Ahead

Written by: Electric Bee

Last summer, CloudBees announced CloudBees Flow DevOps Foresight, which, among other things, uses machine learning to help organizations weigh the value in a release (i.e., Jira user stories) against a “release risk score” based on developer and team contribution. This data can be used to assess the strengths and weaknesses of individuals and teams across multiple criteria. The goal is to help customers learn from experience, assign the most appropriate developers to a particular release, and constructively highlight individual skills gaps to justify training.
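CloudBees hasn’t published the internals of the Foresight model, but the basic idea, weighing the value of a release against a risk score derived from contributor history, can be sketched in a few lines of Python. Everything below, from the field names to the weighting, is a hypothetical illustration and not the product’s actual algorithm:

```python
from dataclasses import dataclass

# Hypothetical illustration only -- not CloudBees' actual model.
# Field names, weighting, and numbers are invented for the sketch.
@dataclass
class Contribution:
    story_points: int    # value delivered, e.g. from a Jira user story
    defect_rate: float   # share of this contributor's past changes that caused defects

def release_risk_score(contributions: list[Contribution]) -> float:
    """Toy risk score: value-weighted average of contributor defect rates."""
    total_points = sum(c.story_points for c in contributions)
    if total_points == 0:
        return 0.0
    weighted = sum(c.story_points * c.defect_rate for c in contributions)
    return weighted / total_points

release = [
    Contribution(story_points=8, defect_rate=0.05),
    Contribution(story_points=3, defect_rate=0.20),
]
print(f"release risk score: {release_risk_score(release):.2f}")  # 0.09
```

Even in this toy version, the score describes the release, not any individual; the trouble starts when the same inputs get repurposed as a per-developer report card.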
While everyone was enthusiastic about the potential value of applying AI/ML to DevOps use cases such as release risk scoring, a few people asked whether this data could somehow be used to take punitive action against a particular individual. That was, of course, not our intent, but it’s a great question - one that deserves more discussion!
So, last month we co-hosted a Meetup entitled “AI In DevOps and Associated Employment Law Issues.” Our own CloudBees CTO, Anders Wallgren; Stephen Wu, Shareholder at Silicon Valley Law Group; and Peter Gillespie, Partner at Laner Muchin, Ltd. offered their insights on the intersection of metrics, AI/ML systems, and employment law.
Below are a few highlights from the conversation; a link to the full panel is at the end.

The Relevant Metrics

During the discussion, Wallgren reflected on the fact that there haven’t been any huge breakthroughs in AI over the last 20 years other than faster, more powerful compute resources and vastly more data. Even with data at massive scale, some decisions, like whether to accept the risk of a release, still need to be made by a human. “Software has always been a team sport,” he reiterated. “Using these metrics to measure people is foolish.”
Wallgren reminded folks that metrics need to be relevant and designed so they can’t be gamed, whether they are derived from traditional methods or via AI/ML. He referred to his favorite Dilbert cartoon, in which the pointy-haired manager announces a bug bounty and Wally says, “I’m gonna write me a minivan this afternoon.” The point is to focus on desired outcomes, like lowering the risk of a release, rather than on individual behaviors.
He added that software metrics have been around since software began and are inherently objective, but they have never been effective for managing people. He later noted that employees will find a way to game the system, so be prepared to watch for it and adjust accordingly.
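Wallgren’s bug-bounty cartoon is Goodhart’s law in miniature: pay per bug fixed and you will get more bugs. A hypothetical sketch (the names, rates, and counts are all invented) of why an activity metric is gameable while an outcome metric is harder to game:

```python
# Hypothetical illustration of a gameable metric (Goodhart's law),
# in the spirit of the bug-bounty cartoon. Names and numbers are invented.

def bounty_payout(bugs_fixed: int, rate_per_bug: float = 100.0) -> float:
    """Activity metric: pay per bug fixed, regardless of where the bugs came from."""
    return bugs_fixed * rate_per_bug

developers = {
    "careful dev": {"bugs_introduced": 1, "bugs_fixed": 1},
    "wally":       {"bugs_introduced": 10, "bugs_fixed": 10},
}

for name, record in developers.items():
    net = record["bugs_fixed"] - record["bugs_introduced"]
    print(f"{name}: payout ${bounty_payout(record['bugs_fixed']):.0f}, "
          f"net bugs removed: {net}")
# The activity metric pays Wally 10x for zero net improvement; an outcome
# metric (e.g., escaped defects per release) is much harder to game this way.
```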

What About Bias?

At one point in the discussion, Wu raised the “elephant in the room”: bias in the data or the algorithm. If historical datasets are inherently biased, he suggested, as when the data appears to show that Group A is better than Group B, systems trained on those datasets will continue to favor Group A. He then turned the discussion to what vendors and AI users need to do to minimize that bias. Wallgren said that algorithms themselves don’t have an opinion; they just offer theories. Those algorithms can, however, be constructed with the wrong math or the wrong data, which results in the wrong answer. So long as the data being fed into the system has not been manipulated, intentionally or otherwise, the quality of the system will improve over time. He pointed back to his earlier comment about being clear on your desired outcome: measure releases, not people.
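The feedback loop Wu describes is easy to reproduce with toy numbers. In this hypothetical sketch (the groups, scores, and “model” are all invented), a system that simply learns historical group averages reproduces the historical gap for new hires of equal ability; the data, not the algorithm, carries the opinion:

```python
from statistics import mean

# Hypothetical illustration: biased history produces biased predictions.
# Historical performance reviews, skewed in favor of Group A.
history = [
    {"group": "A", "score": 9}, {"group": "A", "score": 8},
    {"group": "B", "score": 6}, {"group": "B", "score": 5},
]

def predict(group: str) -> float:
    """A naive 'model' that predicts a person's score from their group's history."""
    return mean(r["score"] for r in history if r["group"] == group)

# Two equally capable new hires get different predictions based only on group.
print(predict("A"))  # 8.5
print(predict("B"))  # 5.5
```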

GDPR and AI: Not Just for Customer Data

Gillespie also raised a number of relevant aspects of GDPR that could apply to the use or misuse of this kind of data. For instance, if an employee subject to GDPR jurisdiction has a disciplinary action taken against them because of a decision from an AI/ML system, that employee has a right to understand how the algorithm works and how it made that decision. The “black box” nature of AI/ML systems may therefore create problems for both vendors and employers when those systems are used for HR purposes.
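None of this is legal advice, and GDPR compliance is not a code pattern, but one practical mitigation the panel’s concern points toward is preferring models whose output can be decomposed into per-feature contributions, so an affected employee can be shown why a score came out the way it did. Here is a minimal sketch assuming a linear model, where each feature’s contribution is simply weight times value; the feature names and weights are invented:

```python
# Hypothetical sketch: a decomposable (linear) score with a per-feature explanation.
# Feature names and weights are invented for illustration.
weights = {"missed_deadlines": 2.0, "escaped_defects": 3.0, "reviews_done": -0.5}

def explain(features: dict[str, float]) -> list[tuple[str, float]]:
    """Return each feature's contribution to the score, largest first."""
    contributions = [(name, weights[name] * value) for name, value in features.items()]
    return sorted(contributions, key=lambda c: abs(c[1]), reverse=True)

employee = {"missed_deadlines": 1, "escaped_defects": 2, "reviews_done": 10}
for name, contribution in explain(employee):
    print(f"{name}: {contribution:+.1f}")
# escaped_defects: +6.0
# reviews_done: -5.0
# missed_deadlines: +2.0
```

A genuinely black-box model would need a post-hoc explanation technique instead, which is exactly the kind of limitation Gillespie suggests asking vendors about.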
In closing, Gillespie recommended that organizations adopting any ML-based tool for decision-making have the vendor provide an explanation of how it works, so that employers and employees understand its limitations.

A Final Thought

Whether you use ML in your own company or use software from companies that do, the questions around law and ethics will be important ones to work through. Watch the video now to learn more.
