Neurocognition and Action - Biomechanics, Universität Bielefeld


Supervisors

Prof. Dr. Thomas Schack
Prof. Dr. Pia Knoeferle

Responsible Investigator

Katharina Wendler

The influence of recent events in situated language understanding: timing, type of process, and memory

Project duration: 2014/01 - 2017/06

Several eye-tracking experiments have shown that, while listening to sentences, people look towards visual referents in the real world or on a screen. But natural language often refers to absent objects: we can talk not only about what we see right in front of us but also about past events involving scenes we have seen, or about hypothetical situations. In doing so, the human brain must find a way to ground language, relating the incoming sounds to meaning and ensuring understanding. Interestingly, after people have seen an image of a situation, they continue looking towards the positions of a sentence's referents on a blank screen, even when the image has disappeared (e.g., Altmann, 2004; Knoeferle & Crocker, 2007).

In these blank-screen experiments, participants look at a depicted scene, then the computer screen goes blank, and they hear a sentence describing the scene. The eye movements participants make on the blank screen are often remarkably similar to those they would have carried out while looking at the depicted scene, although they may differ in size (see Johansson et al., 2006 for details). A simple illustrative analysis of this comparison is sketched below.
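As an illustration only (this is not the project's actual analysis pipeline), one common way to quantify such blank-screen looks is to map fixations recorded after the image has disappeared onto the regions formerly occupied by the depicted referents and compute the proportion of fixations falling into each region. The sketch below assumes hypothetical referent names, coordinates, and fixation data:

```python
# Minimal sketch, assuming hypothetical regions and fixation coordinates:
# map blank-screen fixations onto the areas formerly occupied by scene referents.

from dataclasses import dataclass

@dataclass
class Region:
    name: str   # referent shown in the (now absent) scene
    x0: float   # bounding box in screen pixels
    y0: float
    x1: float
    y1: float

    def contains(self, x: float, y: float) -> bool:
        return self.x0 <= x <= self.x1 and self.y0 <= y <= self.y1

def fixation_proportions(fixations, regions):
    """Proportion of blank-screen fixations landing in each former referent region."""
    counts = {r.name: 0 for r in regions}
    for x, y in fixations:
        for r in regions:
            if r.contains(x, y):
                counts[r.name] += 1
                break
    total = max(len(fixations), 1)
    return {name: n / total for name, n in counts.items()}

# Hypothetical example: two referents from the previously displayed scene.
regions = [
    Region("waiter", 100, 200, 300, 400),
    Region("guest", 700, 200, 900, 400),
]
blank_screen_fixations = [(150, 250), (820, 310), (810, 330), (500, 500)]
print(fixation_proportions(blank_screen_fixations, regions))
# -> {'waiter': 0.25, 'guest': 0.5}
```

Such proportions can then be compared against the fixation pattern observed while the scene was still visible, or against a mismatching sentence condition.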

My experiments are designed to provide insights into three questions. First, what is the exact time course of blank-screen effects? That is, how long do people need to look at a picture in order to store it temporarily in memory, and how long can the interval between picture and auditory stimulus be for the scene information still to be used for visual grounding? Second, how do saccades and fixations behave when there is a mismatch between the previously seen picture and the sentence? And third, what is the role of working memory in directing eye movements towards recently inspected referents that are absent when mentioned?

Answering these questions is an important step towards a more complete model of language processing. The results of my experiments can contribute to the Coordinated Interplay Account (CIA), a model developed by Knoeferle & Crocker (2007) that attempts to explain situated language comprehension and the integration of scene information.
