
Inclusive Explainable AI


Artificial intelligence (AI) and machine learning methods increasingly rely on complex, black-box models that are not easily understood, especially by everyday users. The goal of explainable AI (XAI) research is to improve the transparency and understandability of the decisions made by AI systems. Typically, however, current state-of-the-art XAI methods treat explanations as a static, one-way interaction without considering the explanation partner's understanding. The Transregional Collaborative Research Centre “Constructing Explainability” (TRR 318) aims to address this by considering explanations from a social perspective, in which explanations are constructed jointly by the human explainee and the machine explainer.

Our sub-project, Co-Constructing Social Signs of Understanding to Adapt Monitoring to Diversity (Project A06), aims to enhance the explanation process by enabling more inclusive monitoring of explainee understanding. In a co-constructive explanation process, a machine explainer should monitor the explainee's understanding of the provided explanation in order to tailor it to the explainee's current level of understanding. One way to do this is to capture and analyze non-verbal signals of understanding expressed by the explainee, such as facial expressions or gaze behavior. These signals, however, differ between individuals and depend on the situation in which the explanation occurs. For example, individuals with social interaction conditions, such as autism or ADHD, or individuals in stressful situations may show reduced facial expressivity. This type of intra- and inter-individual diversity is typically not considered in the development of machine learning methods for social signal analysis. In this project, we address this gap by empirically investigating intra- and inter-individual variation in the signals of understanding expressed during real-world explanations. Using the data collected in our studies, we aim to evaluate biases in models used for monitoring human understanding and to develop novel XAI methods that counter these biases through adaptive monitoring, enabling more accurate monitoring for individuals and situations that are not well represented in social signal analysis research.

We gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft
(DFG, German Research Foundation): TRR 318/1 2021 – 438445824
