2014/01/01 – 2017/06/30
Several eye-tracking experiments have shown that while listening to sentences, people look towards visual referents in the real world or on a screen. But natural language often refers to absent objects: humans can talk not only about what is right in front of them, but also about past events involving visual scenes they once saw, or about hypothetical situations. In doing so, the brain must find a way to ground language, relating the incoming sounds to meaning and ensuring understanding. Interestingly, after people have been shown an image of a situation, they continue to look towards the positions of a sentence's referents on a blank screen, even though the image has disappeared (e.g. Altmann, 2004; Knoeferle & Crocker, 2007).
In these blank-screen experiments, participants look at a depicted scene, the computer screen then goes blank, and they hear a sentence describing the scene. The eye movements participants make on the blank screen are often remarkably similar to those they would have carried out while looking at the depicted scene, although the movements may differ in amplitude (see Johansson et al., 2006 for details).
My experiments are designed to provide insight into three important questions. First, what is the exact time course of blank-screen effects? That is, how long do people need to look at a picture in order to store it temporarily in memory, and how long can the interval between picture and auditory stimulus be for the scene information to remain usable for visual grounding? Second, how do saccades and fixations change when there is a mismatch between the previously seen picture and the sentence? And third, what is the role of working memory in directing eye movements towards recently inspected referents that are absent when mentioned?
Answering these questions is an important step towards a more complete model of language processing. The results of my experiments can contribute to the Coordinated Interplay Account (CIA), a model developed by Knoeferle & Crocker (2007) that attempts to explain situated language comprehension and scene integration.
For more information, see [here].