The research group Human-centered Artificial Intelligence: Multimodal Behavior Processing, headed by Prof. Dr. Hanna Drimalla, is dedicated to the automatic analysis of social interaction signals (e.g., facial expressions, gaze behavior, and voice) using machine learning as well as speech and image processing. Our research focuses on three aspects: the detection of positive and negative affect, the measurement of stress, and the analysis of social interaction patterns. All three are multimodal and time-dependent phenomena. To address this complexity, we collect innovative training data and develop novel analysis methods.
Join our newsletter or follow us on Twitter, Facebook, Instagram, or LinkedIn to hear about ongoing studies, research results, and job openings.