Learning to Land: Counterfactual Explanations in a Computer Game.
Jörg Cassens, Max-Leonard Plack, Rebekah Wegener
Explainable Artificial Intelligence (XAI) has emerged as a crucial field concerned with AI systems that are capable of explaining their decision-making processes or providing insights into their inner workings. A variety of approaches to explanation generation exist, and determining the most useful type of explanation for specific users with specific goals in a specific context remains an active research question within XAI (Adadi and Berrada, 2019; Gunning and Aha, 2019).
Counterfactual explanations have garnered significant attention as a particularly promising type of explanation, drawing motivation from the cognitive sciences (Byrne, 2007; Byrne, 2019; Keane and Smyth, 2019; Kenny et al., 2021). For instance, a decision support system in a bank might explain a rejected credit application counterfactually: had the applicant resided in an area with a lower probability of defaults, the loan would have been granted.
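To make the mechanism concrete, the following minimal sketch (in Python) searches for a single-feature change that flips a toy credit decision, which is the essence of a counterfactual explanation. The decision rule, feature names, thresholds, and candidate changes are purely illustrative assumptions and not taken from any system discussed in this paper.

# Minimal sketch (not the authors' implementation): brute-force search for a
# counterfactual explanation of a rejected credit application.
# All feature names and thresholds below are hypothetical.

def credit_decision(applicant):
    """Toy decision rule: approve if income is high enough and the
    local default rate is low enough (values are assumptions)."""
    return applicant["income"] >= 40_000 and applicant["area_default_rate"] <= 0.05

def counterfactual(applicant, candidate_changes):
    """Return the first single-feature change that flips a rejection
    into an approval, i.e. a minimal counterfactual explanation."""
    for feature, new_value in candidate_changes:
        altered = dict(applicant, **{feature: new_value})
        if credit_decision(altered):
            return feature, new_value
    return None

applicant = {"income": 45_000, "area_default_rate": 0.09}   # rejected: default rate too high
changes = [("income", 60_000), ("area_default_rate", 0.03)]

found = counterfactual(applicant, changes)
if found:
    feature, value = found
    print(f"If {feature} had been {value}, the application would have been approved.")

Note the subjunctive phrasing of the generated sentence ("if ... had been ... would have been"), which is exactly the linguistic pattern discussed below.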
However, from a linguistic perspective, counterfactual explanations exhibit specific patterning that may not be suitable for every context. Counterfactual statements often involve conditional clauses with verbs in the subjunctive mood, indicating hypothetical or contrary-to-fact situations (e.g., "If I had studied harder, I would have passed the exam"). While these structures are commonly used in everyday language, their effectiveness and appropriateness vary across different communicative contexts and change over time (Arús-Hita, 2023).
Certain contexts may be less conducive to the use of counterfactuals by humans, suggesting that counterfactual explanations provided by AI systems might not always be beneficial. For instance, in highly technical domains such as engineering or computer programming, where precision and factual accuracy are paramount, counterfactual explanations may be less preferred. In these contexts, individuals might prioritize concrete and factual information over hypothetical or alternative scenarios.
Moreover, cultural and linguistic variations can influence the acceptance and comprehension of counterfactual explanations. Different languages and cultures exhibit varying degrees of familiarity and frequency in the use of counterfactual constructions. Therefore, AI systems that rely on counterfactual explanations should consider the linguistic and cultural backgrounds of the users to ensure the explanations are meaningful and relevant.
This paper presents recent experiments conducted to investigate the use of counterfactual explanations in situations that involve learning new skills in a safety-critical technical domain. The experiments employ a game-based approach, specifically a simple flight simulation, in which participants first build a mental model of the different aircraft systems (Bansal et al., 2019) and then learn to follow standard operating procedures to ensure a successful landing.
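For illustration only, the sketch below shows how the two explanation conditions could be phrased as templated feedback after a failed landing; the wording, the flight parameter, and the numeric limits are assumptions for this example and do not reproduce the materials used in the study.

# Hypothetical sketch of the two explanation conditions in the landing task.
# Parameter names, values, and wording are illustrative assumptions only.

def factual_explanation(param, actual, limit):
    return f"The landing failed because {param} was {actual}, above the limit of {limit}."

def counterfactual_explanation(param, actual, limit):
    return (f"If {param} had been below {limit} instead of {actual}, "
            f"the landing would have succeeded.")

# Example: touchdown with excessive vertical speed (values are illustrative).
print(factual_explanation("vertical speed", "900 ft/min", "600 ft/min"))
print(counterfactual_explanation("vertical speed", "900 ft/min", "600 ft/min"))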
First results indicate that counterfactual explanations do not perform better than factual explanations in helping participants build appropriate mental models and operational skills. This supports our theoretical argument based on observable linguistic features. The study highlights the importance of considering linguistic perspectives in the development of explainable AI, shedding light on the complex relationship between technology and human cognition within the context of XAI.
Understanding the contextual factors that influence the appropriateness and effectiveness of counterfactual explanations is crucial for developing AI systems that provide explanations tailored to the needs and preferences of users (Lukin et al., 2011; Wegener and Fontaine, in press). By considering linguistic patterns, cultural nuances, and specific communicative contexts, AI systems can deliver explanations that align with human cognitive processes and linguistic expectations. This nuanced approach ensures that counterfactual explanations are utilized in contexts where they are most useful and well-received, enhancing the overall usability and acceptance of explainable AI technologies.
References: