  • Interactive Robotics in Medicine and Care

    Pepper in a study
    © Bielefeld University

Co-Constructive Robot Task Teaching and Learning Utilizing Narrative Enabled Episodic Memories (NEEMs) Generated in a Virtual Environment


Prof. Dr.-Ing. Anna-Lisa Vollmer

Professorship for Interactive Robotics in Medicine and Care

Robin Helmert

Research Associate

Project duration

October 2022 – present

Robot Learning
© Universität Bielefeld

This project, developed in collaboration with the Joint Research Center on Cooperative and Cognition-enabled AI, aims to teach a robot tasks interactively in a virtual reality (VR) environment and ultimately to execute them on a real robot. The focus lies both on communication and interaction with the virtual robot and on the ability to learn new tasks based on previously acquired foundational knowledge. The robot should be capable of learning from human scaffolding, in particular by recognizing and exploiting "important" interaction episodes (NEEMs), and of reusing parts of known actions when learning new tasks.

VR Interaction
© Universität Bielefeld

To accomplish this, a digital twin of a real apartment was created in VR. Both in the digital and real apartments, there is a robot that is intended to be taught tasks. VR is specifically used to enable realistic interaction for humans without the need to directly interact with the real robot. Furthermore, with the help of VR, the robot can perceive the environment independently of real sensors.

For task training, tasks are captured and represented as so-called NEEMs. These NEEMs record not only movements but also semantic relationships. This allows tasks to be segmented so that individual subtasks can be reused for faster learning of future tasks. Missing information is to be derived both from previous movements and from an underlying ontology.
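The structure described above, motion data paired with semantic annotations that can be segmented into reusable subtasks, can be illustrated with a minimal sketch. The class and field names below (`Event`, `NEEM`, `participants`) are illustrative assumptions, not the project's actual NEEM schema:

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    """One recorded sub-action within an episode (hypothetical schema)."""
    action: str            # semantic label, e.g. "grasp" or "place"
    start: float           # start time in seconds
    end: float             # end time in seconds
    participants: dict = field(default_factory=dict)  # semantic roles, e.g. {"object": "cup"}

@dataclass
class NEEM:
    """Narrative Enabled Episodic Memory: movements plus semantic relationships."""
    task: str
    events: list

    def segment(self, action: str) -> list:
        """Return the sub-episodes carrying a given semantic label,
        so they can be reused when learning a new task."""
        return [e for e in self.events if e.action == action]

# Example: a short, hand-made "set the table" episode
neem = NEEM(
    task="set-table",
    events=[
        Event("grasp", 0.0, 1.2, {"object": "cup"}),
        Event("place", 1.2, 2.5, {"object": "cup", "target": "table"}),
        Event("grasp", 2.5, 3.6, {"object": "plate"}),
    ],
)

grasps = neem.segment("grasp")
print([e.participants["object"] for e in grasps])  # ['cup', 'plate']
```

The point of the semantic layer is visible in `segment`: because events carry action labels rather than raw trajectories alone, a learner can pull out all grasping sub-episodes and transfer them to a new task.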

Various interaction modes can be used for human-robot interaction and are to be explored within the scope of this project. These include conventional controllers that are familiar to many people, as well as modern gloves capable of accurately transferring real hand movements to virtual hands, enabling a lifelike simulation.

In the further course of the project, the robot will be given the ability to communicate verbally, so that it can ask users for missing information that cannot be derived automatically. Through demonstrations and interactions, users can then fill these knowledge gaps and thereby expand the robot's knowledge. The project Modelling the multimodal dialogue in co-constructive task learning also plays a crucial role in verbal communication and in the detection of knowledge gaps.
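The gap-filling dialogue just described can be sketched as a simple query loop. Everything here is an assumption for illustration (the role table, the `resolve` interface, and the simulated answer function stand in for the project's actual ontology and dialogue system):

```python
# Hypothetical table of semantic roles an action needs before execution.
REQUIRED_ROLES = {"pour": ["source", "target"], "grasp": ["object"]}

def find_gaps(action: str, known_roles: dict) -> list:
    """Return the semantic roles still missing for an action."""
    return [r for r in REQUIRED_ROLES.get(action, []) if r not in known_roles]

def resolve(action: str, known_roles: dict, answer_fn) -> dict:
    """Fill every detected knowledge gap through dialogue.

    answer_fn stands in for the verbal channel: it receives a question
    string and returns the user's answer.
    """
    roles = dict(known_roles)
    for role in find_gaps(action, roles):
        question = f"For '{action}', what should I use as the {role}?"
        roles[role] = answer_fn(question)
    return roles

# Simulated interaction: the ontology only yields the source container,
# so the robot asks the user for the target and the user answers "glass".
roles = resolve("pour", {"source": "bottle"}, lambda q: "glass")
print(roles)  # {'source': 'bottle', 'target': 'glass'}
```

The design point is the separation of concerns: gap detection (`find_gaps`) is driven by what the knowledge base cannot derive, while the dialogue (`answer_fn`) only supplies the missing pieces, so demonstrations or speech can be plugged in interchangeably.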

Research questions

  • Which control medium is best suited for co-constructive robot task teaching?
  • How can the quality of NEEMs be measured and evaluated?
  • How well can a robot learn from NEEMs?

Platforms and systems
