On Selective, Mutable and Dialogic XAI: a Review of What Users Say about Different Types of Interactive Explanations
Abstract
Explainability (XAI) has matured in recent years to provide more human-centered explanations of AI-based decision systems. While static explanations remain predominant, interactive XAI has gained momentum as a way to support the human cognitive process of explaining. However, the evidence regarding the benefits of interactive explanations remains unclear. In this paper, we map existing findings by conducting a detailed scoping review of 48 empirical studies in which interactive explanations are evaluated with human users. We also create a classification of interactive techniques specific to XAI and group the resulting categories according to their role in the cognitive process of explanation: "selective", "mutable" or "dialogic". We identify the effects of interactivity on several user-based metrics. We find that interactive explanations improve the perceived usefulness and performance of the human+AI team, but take longer. We highlight conflicting results regarding cognitive load and overconfidence. Lastly, we describe underexplored areas, including measuring curiosity or learning, and perturbing outcomes.

CCS CONCEPTS
• Human-centered computing → Interaction design theory, concepts and paradigms; • Computing methodologies → Artificial intelligence.