Céline Budding: Evaluating interpretability methods on structural brain MRI data with synthetic lesions

BCCN Berlin / Technische Universität Berlin

 

Abstract

 

Deep learning models are becoming increasingly popular in many fields, such as medical image classification. However, these models have been criticized as 'black boxes', and a range of interpretability methods has been developed to 'open' the black box. In this project, we evaluate saliency methods, which are often applied to image data and generate a heatmap of the input pixels deemed relevant to the prediction. Although these visualizations seem compelling, their quality has so far only been evaluated qualitatively or using perturbation experiments. We therefore generated artificial data, with either a noise or an MRI background, containing simulated lesions and a known ground truth. For all datasets, a CNN was trained to >85% accuracy, and heatmaps were computed using Gradient, LRP, DTD, Guided Backprop, DeConvNet, PatternNet, and PatternAttribution. The heatmaps were evaluated using ROC-AUC, mean average precision, and precision at 99% specificity. For datasets with a single, informative lesion, the networks and saliency methods seem to act as edge detectors. When uninformative lesions are present, Gradient and LRP-z indicate the relevant lesions, whereas the other methods either reconstruct the background or indicate all lesions. Interestingly, for a location-based classification problem, the data generation mask was highlighted rather than the lesions. Lastly, we investigated transfer learning in a VGG16 with ImageNet weights and found that the quality of the heatmaps increases when additional layers of the network are fine-tuned. Our results are in line with previously identified issues of saliency methods, such as sensitivity to low-level and distractor features, unclear interpretation of negative relevance, and input reconstruction. We suggest that the ground truth for nonlinear data should be formalized mathematically and that different definitions of interpretability should be harmonized, and we hope that this framework will aid in further evaluating and improving interpretability methods.
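The abstract does not name a particular software package. As an illustration only, the sketch below shows how the listed saliency methods could be computed for a trained Keras CNN using the iNNvestigate toolbox, which implements all of the methods mentioned above; `model` (a trained classifier), `x` (a batch of input images), and `x_train` are hypothetical variables assumed to exist.

    # Illustrative sketch: computing saliency heatmaps with the iNNvestigate toolbox.
    # `model`, `x`, and `x_train` are assumed to exist; method identifiers follow the
    # iNNvestigate naming scheme and cover the methods named in the abstract.
    import innvestigate
    import innvestigate.utils as iutils

    # Relevance is usually computed on the pre-softmax output of the classifier.
    model_wo_softmax = iutils.model_wo_softmax(model)

    heatmaps = {}
    for method in ["gradient", "lrp.z", "deep_taylor", "guided_backprop", "deconvnet"]:
        analyzer = innvestigate.create_analyzer(method, model_wo_softmax)
        heatmaps[method] = analyzer.analyze(x)  # one relevance map per input image

    # PatternNet / PatternAttribution additionally require fitting patterns on training data.
    pattern_analyzer = innvestigate.create_analyzer("pattern.attribution", model_wo_softmax)
    pattern_analyzer.fit(x_train, batch_size=32, verbose=0)
    heatmaps["pattern.attribution"] = pattern_analyzer.analyze(x)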
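Because the simulated lesions provide a pixel-level ground truth, the quantitative evaluation can be sketched as below. `heatmap` and `mask` are hypothetical same-shaped arrays, and the precision-at-99%-specificity computation shown is one plausible implementation, not necessarily the one used in the thesis.

    # Illustrative sketch: scoring a saliency heatmap against a ground-truth lesion mask.
    import numpy as np
    from sklearn.metrics import roc_auc_score, average_precision_score

    def evaluate_heatmap(heatmap, mask):
        scores = heatmap.ravel()
        labels = mask.ravel().astype(int)  # 1 = lesion pixel, 0 = background

        auc = roc_auc_score(labels, scores)            # ROC-AUC
        ap = average_precision_score(labels, scores)   # average precision

        # Precision at 99% specificity: pick the threshold that lets through at most
        # 1% of background pixels, then compute precision at that threshold.
        threshold = np.quantile(scores[labels == 0], 0.99)
        predicted_pos = scores >= threshold
        tp = np.sum(predicted_pos & (labels == 1))
        fp = np.sum(predicted_pos & (labels == 0))
        precision_99 = tp / (tp + fp) if (tp + fp) > 0 else 0.0

        return auc, ap, precision_99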
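Finally, the transfer learning experiment can be illustrated by a generic fine-tuning setup: a VGG16 initialized with ImageNet weights, with earlier layers frozen and only the later layers (plus a new classification head) trained. The input shape, head architecture, and number of unfrozen layers are assumptions for illustration; the thesis configuration may differ.

    # Illustrative sketch: fine-tuning a VGG16 pretrained on ImageNet with Keras.
    from tensorflow.keras.applications import VGG16
    from tensorflow.keras import layers, models

    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    for layer in base.layers[:-4]:   # freeze all but the last convolutional block
        layer.trainable = False

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(2, activation="softmax"),  # binary lesion classification head
    ])
    model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])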

Additional Information

Master Thesis Defense

 

Organized by

Dr. rer. nat. Stefan Haufe & Prof. Dr. rer. nat. Kerstin Ritter / Lisa Velenosi

Location: The talk will take place digitally via Zoom. Please send an email to graduateprograms@bccn-berlin.de for access.
