CITEC Lecture Series


Prof. Dr.-Ing. Stefan Kopp

Deputy CITEC Coordinator / Member of the Board

Phone (secretariat)
+49 521 106-12153

In the CITEC Lecture Series, local and invited speakers highlight different topics of Cognitive Interaction Technology from an interdisciplinary perspective. This includes the development and analysis of AI and intelligent systems, as well as how cognitive systems can cooperate with humans in an open world in a socially intelligent, trustworthy, and sustainable manner.

The Lecture Series takes place biweekly on Mondays, 16:00–18:00, in the CITEC lecture hall.

Lectures in the Summer Semester 2024

Date Speaker Topic
Apr. 08 2024 Benjamin Paaßen Knowledge Representation and Machine Learning for Education
Apr. 22 2024 Helge Rhodin Visual AI for Extended Reality
May 06 2024 Sina Zarrieß Modeling context in situated language generation
June 03 2024 Klaus Neumann Magnetic Levitation, Robotics and Imitation Learning in Automation
June 17 2024 Anna-Lisa Vollmer Robots that learn interactively with lay users
July 01 2024 Markus H. Hefter, Simon A. Schriek, Kirsten Berthold Video-based learning and the benefits of self-explanations and prompts
July 15 2024 Joana Cholin t.b.a. (Psycholinguistics)

Abstracts of the Lecture Talks

Benjamin Paaßen: Knowledge Representation and Machine Learning for Education

Since the release of ChatGPT, the relevance of artificial intelligence research for education has become glaringly obvious. However, ChatGPT is not an educational technology as such. It is designed to complete text such that the user is satisfied with the result as quickly as possible, without regard for students' misconceptions, learning, or even factual correctness. By contrast, a system designed for teaching and learning would center students' learning, taking individual student needs, teacher input, and pedagogical theory into account. The talk will present methods developed by the Knowledge Representation and Machine Learning research group at Bielefeld University to support educational goals, from classic educational data mining approaches that infer latent skills from observed exercise performance to the state of the art in integrating large language models into intelligent tutoring systems for computer programming.
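One classic educational data mining technique for inferring a latent skill from observed exercise performance is Bayesian Knowledge Tracing. The abstract does not name a specific model, so the following is only an illustrative sketch of the general idea, with hypothetical parameter values:

```python
# Bayesian Knowledge Tracing (BKT): estimate the hidden probability that a
# student has mastered a skill, from a sequence of correct/incorrect answers.
# All parameter values below are hypothetical, chosen for illustration only.

def bkt_update(p_known, correct,
               p_slip=0.1, p_guess=0.2, p_learn=0.15):
    """One BKT step: Bayesian posterior over 'skill mastered' given a single
    observed answer, followed by a learning transition."""
    if correct:
        # P(correct | mastered) = 1 - slip; P(correct | not mastered) = guess
        num = p_known * (1 - p_slip)
        denom = num + (1 - p_known) * p_guess
    else:
        # P(incorrect | mastered) = slip; P(incorrect | not mastered) = 1 - guess
        num = p_known * p_slip
        denom = num + (1 - p_known) * (1 - p_guess)
    posterior = num / denom
    # Transition: the student may acquire the skill through practice.
    return posterior + (1 - posterior) * p_learn

# Trace the mastery estimate over a sequence of observed answers.
p = 0.3  # prior probability that the skill is already mastered
for answer in [True, False, True, True]:
    p = bkt_update(p, answer)
```

After this sequence the estimate rises well above the prior, since three of four answers were correct; a tutoring system could use such an estimate to select the next exercise.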
 

Helge Rhodin: Visual AI for Extended Reality

I'll start with a general overview of the current state of computer vision and computer graphics and their recent trends. The fields have made tremendous progress in recent years. It is both exciting and scary to imagine what the recent breakthroughs in large generative models will bring about. I'll then introduce my research group, what we have done at UBC, and how the new group at BU fits into these recent developments.

Sina Zarrieß: Modeling context in situated language generation

Our everyday communication takes place in rich physical and social contexts. Linguistic research shows that language use in dialogue is highly sensitive and adaptive to this context. Yet, computational approaches to language modeling and generation still have very limited ways of integrating non-linguistic context. In this talk, I will present our work on natural language generation that aims to extend language models towards multimodal settings and context-sensitive reasoning.

Klaus Neumann: Magnetic Levitation, Robotics and Imitation Learning in Automation

The presentation will be divided into two parts. The first part will explore the emerging field of magnetic levitation, a cutting-edge technology that is transforming inline product transport in contemporary manufacturing systems. This technology enables the individualized transportation of products to any processing station, significantly enhancing machine capacities. It consists of specialized movers equipped with complex permanent magnet structures that are controlled in six dimensions through electromagnetic fields produced by static motor modules. Current developments focus on integrating product transportation and manipulation capabilities, aiming to enable more efficient production systems.

The second part will cover the field of imitation learning in robotics and automation. Small and medium-sized enterprises (SMEs) are facing considerable challenges, such as a shortage of skilled workers. As a result, the need for simple, robust, and cost-effective robotic systems is becoming increasingly clear. Fortunately, some recent studies in the field of AI suggest that the long-standing dream of robots that learn by imitation could become a reality. In this part, we will look at and discuss the relevant concepts, potentials, and challenges for robot-driven automation.

Anna-Lisa Vollmer: Robots that learn interactively with lay users

Robots that assist in medicine and care face particularly diverse users. To meet these users' individual needs, robots should have the capacity to learn in interaction with humans who have no background in machine learning or robotics, and, vice versa, users need to be enabled to teach them. This talk will give insights into research by the Interactive Robotics in Medicine and Care Group, which targets this area from different perspectives: from findings on adult-child interactions, to measuring and improving the user's mental model of the robot, to developing and evaluating interfaces for human-in-the-loop robot learning.

Markus H. Hefter, Simon A. Schriek, Kirsten Berthold: Video-based learning and the benefits of self-explanations and prompts

Video-based learning continues to grow in popularity. In the field of technical apprenticeship, video tutorials are a common way to learn how to operate an industrial machine. To ensure sustainable learning outcomes, learners need to deeply process such videos rather than passively consume them. Our own studies have already revealed the benefits of self-explanation prompts for getting students to deeply process given videos. We have also found that self-explanation quality is more important for learning outcomes than the presentation mode, given that the material is carefully designed to avoid extraneous cognitive load.

Open questions concern transferring these findings into the field of technical apprenticeship. Which prompt type is most effective when learning from technical tutorials? Should we prompt learners to make retrospective notes about the working steps they have just watched in the video tutorial, or should they anticipate what comes next? To answer these questions, we conducted two experiments with university and high school students (N = 159; N = 206), who learned with authentic technical video tutorials enhanced by different prompt types.

Both experiments revealed that retrospective note prompts best supported our learners with little prior knowledge. The learners' note quality mediated the prompt effect on learning outcomes. Our findings provide empirical evidence for how prompts can support learning from technical video tutorials. They also highlight the importance of processing the videos deeply by generating high-quality notes. Building on these results, intelligent tutoring systems might analyze learners' note quality via large language models, which are currently in vogue as the basis of popular chatbots such as ChatGPT. This would allow more individually adjusted instructional support, such as the adequate prompt type or explanations on demand, just when a learner needs them.

