The XAI Paradox: Systems that perform well for the wrong reasons
Explainable AI (XAI) aims to shed light on the inner workings of black-box machine learning models: why did the system make the decision it did? To truly trust such a system, we would like its reasoning to be sound. This study shows that machine learning systems can achieve high performance with an unsound rationale. In other words: the systems perform well, but for the wrong reasons.
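To make the claim concrete, here is a minimal, self-contained sketch (not from the study itself; the dataset and feature names are invented for illustration). A spurious "watermark" feature happens to track the label perfectly in the training data, so a simple learner latches onto it instead of the genuinely informative but noisy "signal" feature. Performance looks excellent until the spurious correlation is removed:

```python
import random

random.seed(0)

def make_data(n, watermark_correlates):
    """Each example is (signal, watermark, label).
    'signal' is the genuinely predictive feature (80% informative);
    'watermark' is a spurious artifact that tracks the label
    only when watermark_correlates is True."""
    data = []
    for _ in range(n):
        label = random.randint(0, 1)
        signal = label if random.random() < 0.8 else 1 - label
        watermark = label if watermark_correlates else random.randint(0, 1)
        data.append((signal, watermark, label))
    return data

def accuracy(data, feature_index):
    """Accuracy of a one-feature classifier: predict the feature's value."""
    return sum(x[feature_index] == x[2] for x in data) / len(data)

train = make_data(1000, watermark_correlates=True)
# The learner picks whichever single feature best fits the training set,
# which here is the watermark (index 1): it is a perfect fit on training data.
best = max([0, 1], key=lambda i: accuracy(train, i))

biased_test = make_data(1000, watermark_correlates=True)
clean_test = make_data(1000, watermark_correlates=False)
print("chosen feature:", "watermark" if best == 1 else "signal")
print("accuracy on biased test:", accuracy(biased_test, best))  # 1.0
print("accuracy on clean test: ", accuracy(clean_test, best))   # ≈ 0.5, chance level
```

On a test set drawn from the same biased distribution, the model looks flawless; as soon as the watermark stops correlating with the label, it collapses to chance. Its high performance rested on an unsound rationale, which is exactly what an explanation method should reveal.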
Alumnus of the University of Groningen, currently employed at Target Holding as a machine learning engineer. Interested in responsible, explainable AI.