Interpreting and Explaining Deep Models Visually
Interpretation and explanation of deep models is critical to the wide adoption of systems that rely on them. In this talk, I will present a novel scheme to achieve both.
First, a set of relevant features internally encoded by a model is identified. Model interpretation is achieved through average visualizations of these features.
Model predictions are explained by accompanying the predicted class label with supporting visualizations derived from the identified features.
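The two-step scheme above can be sketched in miniature. The snippet below is a minimal illustration with toy data, not the speaker's actual method: it stands in "relevance" with a simple class-contrast score over unit activations, and builds each unit's "average visualization" by averaging the inputs that activate it most. All names (`identify_relevant_features`, `average_visualization`) and the scoring rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: N inputs, D internal feature units.
# In practice, activations would come from a trained network's hidden layer.
N, D = 200, 16
activations = rng.random((N, D))
images = rng.random((N, 8, 8))           # toy "images"
labels = rng.integers(0, 2, size=N)      # toy binary class labels

def identify_relevant_features(acts, labels, k=3):
    """Score each unit by how differently it activates across classes
    (a simple proxy for relevance); keep the top-k units."""
    gap = np.abs(acts[labels == 0].mean(axis=0) - acts[labels == 1].mean(axis=0))
    return np.argsort(gap)[::-1][:k]

def average_visualization(acts, images, unit, top=10):
    """Average the inputs that most strongly activate a given unit."""
    strongest = np.argsort(acts[:, unit])[::-1][:top]
    return images[strongest].mean(axis=0)

# Step 1: interpretation via average visualizations of relevant units.
relevant = identify_relevant_features(activations, labels)
viz = {u: average_visualization(activations, images, u) for u in relevant}

# Step 2: explain one prediction by pairing the predicted label with the
# visualizations of the relevant units that fired most for this input.
x_act = activations[0]
supporting = sorted(relevant, key=lambda u: -x_act[u])
print("supporting units:", supporting)
```

The design choice here is that interpretation is global (what each unit encodes on average), while explanation is local (which of those units supported this particular prediction).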
José Oramas is an Assistant Professor at IDLab, University of Antwerp. He received his PhD from KU Leuven in 2015. Over the last ten years, he has conducted research on understanding how groups of elements in images interact and how the relationships between them can be exploited to improve several Computer Vision problems.