Assistant Professor, Utrecht University
Explainable AI: explain what to whom?
One of the ideas underlying explainable AI is to use (new) models that are inherently explainable to replace or complement black-box models in machine learning. Inherently explainable models have existed for quite some time, and different aspects of these models and their outputs can be explained. But are we providing the right explanations to the right people? What do we really want to accomplish with our explanations?
Silja Renooij (Utrecht University) is a member of the Intelligent Systems group and is interested in probabilistic graphical models. Her research focuses on understanding how various precision-complexity tradeoffs in the specification of such models affect model output, with the aim of facilitating the construction and explanation of Bayesian networks.