
Explainable AI

Our recent blog post discussing tech trend predictions for 2020 identified Explainable AI as an important focus for the upcoming year. But why is it so important to understand why AI does what it does? Isn't the whole point of AI decision making that humans don't have to think about it because the decisions are automatic?

The truth is, humans still need to understand why AI makes the decisions it does. This understanding not only helps avoid biases in decision making, but also builds greater trust in the automated decisions that AI algorithms make.

For decades, AI projects relied on human expertise to craft algorithms, so the decision making was deeply understood by the people who built it. Over the past decade, as AI decision making has grown more complex with deep learning neural networks, algorithms have increasingly been trained on big data, with less human knowledge encoded directly in the programming, making the decision making process far harder to understand.

For AI and machine learning systems to be successful, users need greater confidence in decision making algorithms, and that requires an explanation of how conclusions are drawn. The concept of explainable AI is, you guessed it, to present the reasoning behind model-based decisions to humans in a readable form rather than in code.

Machine learning experts need to understand how automated systems make decisions in order to properly assess models, remove biases, and build new approaches. Knowing how an AI arrived at a decision increases trust in algorithms and helps ensure that AI systems are safe and secure. Without explainable AI, results cannot be traced back to reveal the factors that influenced the output.
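As a concrete illustration, the sketch below (a minimal example assuming Python with scikit-learn and a generic sample dataset, not any specific tool discussed in this post) shows one widely used explainability technique, permutation feature importance, which translates a black-box model's behavior into a short, readable ranking of the inputs that most influenced its predictions.

```python
# Minimal sketch: explaining a black-box model with permutation feature importance.
# Assumes scikit-learn and uses its bundled breast cancer dataset as a stand-in
# for any tabular data.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Load a sample dataset and train an opaque "black box" model.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Measure how much accuracy drops when each feature is shuffled:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Present the explanation as readable text instead of code or raw weights.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[idx]}: importance {result.importances_mean[idx]:.3f}")
```

The output is a plain-language list of the features the model relied on most, which is exactly the kind of human-readable explanation that lets results be traced back to the factors influencing them.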

For more content regarding technology trends, check out our post: Information Management Trends.