Project: IP 19
Project duration: 01/2014 - 12/2018
Project funding: 326,000 Euro
How can multimodal brain-machine interfaces improve the quality of human-machine interaction through resource efficiency and adaptivity? To reach this aim, we will introduce a novel analysis approach based on the detection and interpretation of object- and task-related processing difficulties in interaction situations. Building on prior work on brain-machine interfaces (BMIs) in highly controlled, idealized settings, we will pursue a new way of combining eye movement and EEG data in an online classifier tool, and investigate its usability for interactive situations by providing adequate feedback (e.g., verbal or visual cues) adapted to the task relevance that objects have for the human user in richer, more realistic situations. Because such situations lack the discrete stimulus onsets of laboratory paradigms, the eye-tracking data supply the missing temporal reference: fixation onsets serve as time-locking events. Only the integrated analysis of eye movements and EEG data permits reliable online detection of brain responses. EEG potentials averaged relative to fixation onset are called fixation-related potentials (FRPs). By recording eye movements during natural tasks and analyzing FRPs, it is possible to infer which objects or words are of particular relevance to a user, and which are processed easily or cause processing difficulties, without asking the user to report anything.
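As a rough illustration of the FRP idea described above, the following Python sketch averages baseline-corrected EEG epochs time-locked to fixation onsets. The function name, window parameters, and data layout are our own assumptions for illustration, not part of the project's actual toolchain.

```python
import numpy as np

def fixation_related_potentials(eeg, fix_onsets, sfreq, tmin=-0.2, tmax=0.8):
    """Average EEG epochs time-locked to fixation onsets to obtain an FRP.

    eeg        : (n_channels, n_samples) continuous EEG recording
    fix_onsets : fixation onset indices in samples, from co-registered eye tracking
    sfreq      : sampling rate in Hz
    tmin, tmax : epoch window around each fixation onset, in seconds
    """
    pre = int(round(tmin * sfreq))           # negative offset, e.g. -100 samples
    post = int(round(tmax * sfreq))
    n_base = -pre                            # pre-fixation samples used as baseline
    epochs = []
    for onset in fix_onsets:
        if onset + pre < 0 or onset + post > eeg.shape[1]:
            continue                         # skip fixations too close to recording edges
        seg = eeg[:, onset + pre : onset + post].astype(float)
        seg -= seg[:, :n_base].mean(axis=1, keepdims=True)   # baseline correction
        epochs.append(seg)
    return np.mean(epochs, axis=0)           # (n_channels, n_times) averaged FRP
```

In an online setting, the same epoching would be applied to single fixations as they arrive from the eye tracker, rather than averaged offline.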
Interactive exchange with the human will be adapted based on these criteria, providing feedback optimized in terms of modality, level, and frequency (repetitions). To reach these goals, we will investigate the efficiency of different forms of feedback (e.g., task-specific, task-unspecific, imagery-based, subliminal/unconscious) applied in various realistic situations to different populations (e.g., healthy adults, elderly people, patients). More generally, in this second part, we will determine how the BMI tool can be adapted to a user's level of experience or familiarity. Here, mental representation does not just facilitate information selection; it also permits a goal-related, purposeful adaptation of behavioral potential to conditions in the environment. This includes storing the cognitive-perceptual outcomes of learning processes as items in long-term memory. Our multimodal approach allows us to investigate the relationship between electroencephalographic signals (EEG, specifically the P300 component of the event-related potential, ERP), eye movements, and mental representation structures (measured by structural dimensional analysis-motoric, SDA-M) as a measure of learning.
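To make the intended online adaptation concrete, here is a minimal, hypothetical sketch: a linear classifier trained on single-trial FRP features from an assumed P300 window (300-500 ms after fixation onset) decides whether to trigger feedback. The feature choice, sampling rate, threshold, feedback labels, and placeholder calibration data are all our assumptions; the project's actual classifier tool is not specified here.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

SFREQ, TMIN = 500.0, -0.2    # assumed sampling rate (Hz) and epoch start (s)

def p300_features(epoch):
    """Mean amplitude per channel in an assumed P300 window (300-500 ms post-fixation)."""
    i0, i1 = int((0.3 - TMIN) * SFREQ), int((0.5 - TMIN) * SFREQ)
    return epoch[:, i0:i1].mean(axis=1)

# Calibration on labeled epochs: 1 = processing difficulty, 0 = fluent processing.
# Random placeholder data stands in for real single-trial FRP epochs here.
rng = np.random.default_rng(0)
train_epochs = rng.standard_normal((40, 32, 500))   # (trials, channels, samples)
train_labels = rng.integers(0, 2, size=40)
clf = LinearDiscriminantAnalysis()
clf.fit(np.stack([p300_features(e) for e in train_epochs]), train_labels)

def feedback_for(epoch, threshold=0.6):
    """Illustrative feedback policy: intervene only when difficulty seems likely."""
    p_difficulty = clf.predict_proba(p300_features(epoch)[None, :])[0, 1]
    return "verbal_cue" if p_difficulty > threshold else None
```

In practice, the decision rule would also draw on the eye-movement record itself (e.g., fixation durations and refixations), and the feedback modality and frequency would be varied as described above.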
For more information, see [here].