In the Mobile Action Assistance Lab we combine different mobile techniques, such as eye tracking, measurement of mental representation structures, wearable sensor technology, Augmented Reality (AR), and electroencephalography (EEG), with modern diagnostic and corrective intervention techniques. The equipment ranges from high-cost devices, such as a binocular eye tracker, to low-cost solutions, such as the Kinect. We capture multimodal data of people acting in everyday situations, such as assembling a device, doing sport exercises, or learning new skills. Using adaptive algorithms, including machine and deep learning, we design mobile cognitive devices that identify problems in ongoing action processes, react when mistakes are made, and provide situation- and context-dependent assistance in auditory, textual, visual, or avatar-based form, matched to people's mental and physical capabilities. The overall aim is to develop mobile cognitive assistive systems that adapt to the particular user and action context and provide individual action assistance in an unobtrusive way.
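The pipeline described above (multimodal sensor data, an adaptive error detector, and a modality-matched assistance response) can be sketched in a few lines. This is only an illustrative sketch with hypothetical names and thresholds; the actual lab systems use trained machine- and deep-learning models rather than the simple stand-in rule shown here.

```python
from dataclasses import dataclass

@dataclass
class SensorWindow:
    """One window of multimodal features (hypothetical feature set)."""
    gaze_dispersion: float  # from the eye tracker, in degrees
    hand_jitter: float      # from a wearable IMU, in m/s^2
    step_duration: float    # seconds spent on the current assembly step

def detect_error(w: SensorWindow) -> bool:
    # Threshold rule standing in for a trained ML/DL classifier:
    # scattered gaze or an unusually long step suggests the user is stuck.
    return w.gaze_dispersion > 5.0 or w.step_duration > 30.0

def choose_assistance(w: SensorWindow, prefers_visual: bool) -> str:
    # Situation- and context-dependent choice of feedback modality,
    # adapted to the user's (here: visual vs. auditory) preference.
    if not detect_error(w):
        return "none"
    if prefers_visual:
        return "visual-overlay"  # e.g., highlight the next part in the AR view
    return "auditory-hint"
```

In a deployed system, `detect_error` would be replaced by a model trained on recorded assembly sessions, and the returned assistance label would drive the AR display or audio output.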
|Action support for an assembly task. Left: the user wears an AR eye-tracking glass while assembling LEGO parts. Right: situation- and context-dependent assistance is displayed on a transparent virtual plane in the user's field of view.|