Comparing User Interface Designs for Explainable Artificial Intelligence

Authors
Ionut Danilescu & Chris Baber
Abstract
A well-known approach to Explainable Artificial Intelligence (XAI) presents features from a dataset that are important to the AI system’s recommendation. In this paper, we compare LIME (Local Interpretable Model-agnostic Explanations), which displays the features contributing to a classifier’s output, with a radar plot, which shows the relations between those features. A comparative evaluation (N = 20) shows that LIME yields more correct answers, greater consistency in answers, and higher satisfaction ratings. However, LIME also showed lower sensitivity (measured using signal detection), a slightly more liberal response bias, and higher ratings of subjective workload. Evaluating user interface designs for XAI therefore needs to combine several metrics, and it is time to question the benefit of relying on features alone for XAI.
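For reference, the sensitivity and response-bias measures mentioned above are conventionally computed from the hit rate H and false-alarm rate FA using the standard signal-detection formulas sketched below; this is a generic formulation, not necessarily the exact procedure reported in the paper. Here z(·) denotes the inverse of the standard normal cumulative distribution function.

\[
d' = z(H) - z(\mathrm{FA}), \qquad
c = -\tfrac{1}{2}\bigl[\, z(H) + z(\mathrm{FA}) \,\bigr]
\]

Under this convention, a larger d' indicates greater sensitivity, and a more negative criterion c corresponds to a more liberal response bias (a greater tendency to respond "yes").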