The imperative for transparency and explainability
Algorithms can be destructive when they produce inaccurate or biased results, a concern amplified by the black-box nature of many AI systems. Leaders are understandably hesitant to hand decisions over to machines without confidence in how those decisions are made and whether they are fair and accurate. This is the AI trust gap. KPMG's new report, Controlling AI: The Imperative for Transparency and Explainability, explains the urgency of closing that gap and describes methods and tools that can help leaders govern their AI programs.