Abstract

Learning to Land: Counterfactual Explanations in a Computer Game.
Jörg Cassens, Max-Leonard Plack, Rebekah Wegener

Explainable Artificial Intelligence (XAI) has emerged as a crucial field concerned with AI systems that can explain their decision-making processes or provide insights into a system's inner workings. A variety of approaches to explanation generation exist, and determining the most useful type of explanation for specific users with specific goals and in a specific context remains an active research question within XAI (Adadi and Berrada, 2018; Gunning and Aha, 2019).

Counterfactual explanations have garnered significant attention as promising candidates, drawing motivation from the cognitive sciences (Byrne, 2007; Byrne, 2019; Keane and Smyth, 2020; Kenny et al., 2021). For instance, a bank's decision support system might explain the rejection of a credit application counterfactually: had the applicant not resided in an area with a higher probability of defaults, the credit would have been granted.
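
To make this concrete, the following minimal Python sketch (purely illustrative; the decision rule, feature names, and thresholds are hypothetical and not taken from any cited system) searches for the smallest change to a single feature that would flip a rejection into an approval, which is the kind of statement a counterfactual explanation puts into words.

    # Minimal sketch of a single-feature counterfactual search for a toy credit decision.
    # The decision rule, feature names, and thresholds are hypothetical.

    def approve_credit(applicant: dict) -> bool:
        """Toy stand-in for a learned decision model."""
        return applicant["income"] >= 40_000 and applicant["area_default_rate"] <= 0.05

    def counterfactual(applicant: dict, feature: str, candidates) -> dict | None:
        """Return the smallest change to `feature` that turns a rejection into an approval."""
        for value in sorted(candidates, key=lambda v: abs(v - applicant[feature])):
            changed = {**applicant, feature: value}
            if approve_credit(changed):
                return changed
        return None

    applicant = {"income": 35_000, "area_default_rate": 0.03}
    cf = counterfactual(applicant, "income", range(30_000, 80_001, 1_000))
    if cf is not None:
        print(f"If your income had been {cf['income']} instead of {applicant['income']}, "
              "the credit would have been approved.")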

However, from a linguistic perspective, counterfactual explanations exhibit specific patterning that may not be suitable for every context. Counterfactual statements often involve conditional clauses with verbs in the subjunctive mood, indicating hypothetical or contrary-to-fact situations (e.g., "If I had studied harder, I would have passed the exam"). While these structures are commonly used in everyday language, their effectiveness and appropriateness vary across different communicative contexts and change over time (Arús-Hita, 2023).

Certain contexts may be less conducive to the use of counterfactuals by humans, suggesting that counterfactual explanations provided by AI systems might not always be beneficial. For instance, in highly technical domains where precision and factual accuracy are paramount, such as engineering or computer programming, counterfactual explanations may be less appropriate. In these contexts, individuals might prioritize concrete and factual information over hypothetical or alternative scenarios.

Moreover, cultural and linguistic variations can influence the acceptance and comprehension of counterfactual explanations. Languages and cultures differ in how familiar counterfactual constructions are and how frequently they are used. AI systems that rely on counterfactual explanations should therefore take the linguistic and cultural backgrounds of their users into account to ensure that the explanations are meaningful and relevant.

This paper presents recent experiments investigating the use of counterfactual explanations in situations that involve learning new skills in a safety-critical technical domain. The experiments employ a game-based approach, specifically a simple flight simulation, in which participants first build a mental model of the different aircraft systems (Bansal et al., 2019) and then learn to follow standard operating procedures to ensure a successful landing.
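
For illustration only, the short sketch below contrasts the two explanation styles as they could be generated from the same simulated landing state; the state variables and the speed threshold are hypothetical and do not correspond to the parameters used in our simulation.

    # Contrast between a factual and a counterfactual explanation of the same
    # (hypothetical) failed landing; the threshold and state values are invented.

    MAX_TOUCHDOWN_SPEED = 140  # knots, hypothetical limit for a safe landing

    def factual_explanation(state: dict) -> str:
        return (f"The landing failed because the touchdown speed was {state['speed']} kt, "
                f"which is above the limit of {MAX_TOUCHDOWN_SPEED} kt.")

    def counterfactual_explanation(state: dict) -> str:
        return (f"If the touchdown speed had been below {MAX_TOUCHDOWN_SPEED} kt "
                f"instead of {state['speed']} kt, the landing would have succeeded.")

    state = {"speed": 155}
    print(factual_explanation(state))
    print(counterfactual_explanation(state))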

Initial results indicate that counterfactual explanations do not outperform factual explanations in helping participants build appropriate mental models and operational skills. This supports our theoretical argument based on observable linguistic features. The study highlights the importance of considering linguistic perspectives in the development of explainable AI, shedding light on the complex relationship between technology and human cognition within the context of XAI.

Understanding the contextual factors that influence the appropriateness and effectiveness of counterfactual explanations is crucial for developing AI systems that provide explanations tailored to the needs and preferences of users (Lukin et al., 2011; Wegener and Fontaine, in press). By considering linguistic patterns, cultural nuances, and specific communicative contexts, AI systems can deliver explanations that align with human cognitive processes and linguistic expectations. This nuanced approach helps ensure that counterfactual explanations are used in contexts where they are most useful and well received, enhancing the overall usability and acceptance of explainable AI technologies.

References:

  • Adadi, Amina, and Mohammed Berrada. "Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)." IEEE Access 6 (2018): 52138-52160.
  • Arús-Hita, Jorge. "‘If they would have gone that path...’: Counterfactual conditionals on the move". Book of Abstracts ESFLC 2023, Vigo, 2023.
  • Bansal, Gagan, Besmira Nushi, Ece Kamar, Walter S. Lasecki, Daniel S. Weld, and Eric Horvitz. "Beyond accuracy: The role of mental models in human-AI team performance." In Proceedings of the AAAI Conference on Human Computation and Crowdsourcing, vol. 7, no. 1, pp. 2-11. 2019.
  • Byrne, Ruth M. J. The rational imagination: How people create alternatives to reality. MIT Press, 2007.
  • Byrne, Ruth MJ. "Counterfactuals in Explainable Artificial Intelligence (XAI): Evidence from Human Reasoning." In IJCAI, pp. 6276-6282. 2019.
  • Cassens, Jörg, and Rebekah Wegener. "Explainable AI: Intrinsic, Dialogic, and Impact Measures of Success." In Proceedings of the ACM CHI Workshop on Operationalizing Human-Centered Perspectives in Explainable AI (HCXAI 2021). 2021.
  • Gunning, David, and David Aha. "DARPA’s explainable artificial intelligence (XAI) program." AI Magazine 40, no. 2 (2019): 44-58.
  • Keane, Mark T., and Barry Smyth. "Good counterfactuals and where to find them: A case-based technique for generating counterfactuals for explainable AI (XAI)." In Case-Based Reasoning Research and Development: 28th International Conference, ICCBR 2020, Salamanca, Spain, June 8–12, 2020, Proceedings 28, pp. 163-178. Springer International Publishing, 2020.
  • Kenny, Eoin M., Courtney Ford, Molly Quinn, and Mark T. Keane. "Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies." Artificial Intelligence 294 (2021): 103459.
  • Lukin, Annabelle, Alison R. Moore, Maria Herke, Rebekah Wegener, and Canzhong Wu. "Halliday's model of register revisited and explored." Linguistics and the Human Sciences 4, no. 2 (2011): 187-213.
  • Wegener, Rebekah, and Lise Fontaine. "A functional approach to context." In Cambridge Handbook of Language and Context. Cambridge University Press, in press.
