AI and machine learning hold immense promise for improving lives and DevOps processes. However, they come with pitfalls beyond the challenges of ordinary technology development. Inherent bias in the algorithms themselves, as well as unintended (or even illegal) use of their predictions or decisions, are just two examples.
Modeling AI algorithms on how the human brain works risks replicating human biases in the algorithms themselves, as recent facial recognition systems have shown. Without a solid data model and sound foundational data principles, no amount of AI or ML will solve this problem; it may actually make it worse. Consider the risk of relying on AI/ML predictions as justification for HR actions.
Without forethought and a clear vision, some of these questions may end up being settled for us by regulatory agencies or lawsuits. The time to have this discussion is now. Join this town hall as panelists explore how best to weave ethics into, and bias out of, AI processes.