The overarching goal of B05 is to achieve context-sensitive explainability in interactive task learning: explainability that should be primarily implicit and not involve pronounced explanations. The project considers the scenario of an AI system in the form of a humanoid robot equipped with a state-of-the-art learning mechanism that learns physical tasks in interaction with human users who have no deep technical knowledge. This scenario is of great importance in many fields, as robots are entering our everyday lives as ubiquitous household helpers, medical assistants, or factory-floor assistants. We therefore need to enable them to learn from their users in everyday life.
For such interactive learning to be successful, the human user must be able to form an accurate mental model (i.e., the internal representation that people build of how something works) of how the system learns. Only then can the user understand the robot and provide good training input for learning. At the same time, an explanation of the learning (our explanandum) by the robot (our explainer) for the human user (our explainee) should not dominate the interaction, but take place alongside it.
B05 goes beyond current human-in-the-loop systems and aims at what we call co-constructive training (CCT). The user's understanding of how the robot learns is co-constructed by combining two mechanisms: (a) monitoring the user's mental model on the basis of the training data and (b) adaptively supporting the user's understanding by visualising internal (architectural) information. The long-term goal of B05 is to algorithmically model the context of a training interaction in a system that can tailor co-constructive training to the individual human user. In the first phase, the project will establish the empirical and conceptual foundations through a series of experimental human-robot interaction studies.

B05 pursues two main research foci in collaboration with project Ö. The first focus examines, from the perspective of the co-constructive approach, the interactions that take place when training a learning robot. Explainability, in this approach, is to be achieved within the co-constructive interaction itself, and B05 tests whether the user can form an accurate mental model of the AI system's learning process in CCT. The second focus, in service of the long-term goal of a context-aware system, investigates the social contextualisation of this training, asking whether the required technology concepts differ for people with different roles in society. Specifically, the project will first consider factors such as gender, age, and prior knowledge, and seek to understand their influence on the formation of users' mental models and on their attitudes towards technology in the CCT setting.
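The interplay of mechanisms (a) and (b) can be read as a simple feedback loop: the quality of the training input informs an estimate of the user's mental model, which in turn governs how much internal information is made visible. The following minimal Python sketch illustrates this reading only; all class names, fields, and the proxy signal are hypothetical assumptions for illustration and do not correspond to any actual B05 implementation.

```python
# Illustrative sketch of a hypothetical co-constructive training (CCT) loop.
# All names below are assumptions, not part of the B05 system.

from dataclasses import dataclass, field


@dataclass
class MentalModelEstimate:
    """Running estimate of the user's mental model of how the system learns."""
    misconception_score: float = 0.5  # 0 = accurate model, 1 = badly mistaken


@dataclass
class CoConstructiveTrainer:
    estimate: MentalModelEstimate = field(default_factory=MentalModelEstimate)

    def monitor(self, training_input: dict) -> None:
        # Mechanism (a): infer the user's mental model from the training data,
        # here via a toy proxy signal for demonstration quality.
        quality = training_input.get("demonstration_quality", 0.5)
        self.estimate.misconception_score = 1.0 - quality

    def support(self) -> str:
        # Mechanism (b): adaptively visualise internal (architectural)
        # information, showing more detail when the mental model seems inaccurate.
        if self.estimate.misconception_score > 0.6:
            return "show detailed visualisation of internal learning state"
        return "show minimal, implicit cues alongside the interaction"


trainer = CoConstructiveTrainer()
trainer.monitor({"demonstration_quality": 0.3})
print(trainer.support())  # low-quality input -> detailed visualisation
```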