Advances in AI and machine learning (ML) are enabling new methods for more empathetic human-machine interaction and new tools for digital mental health. These systems typically rely on the analysis of human behavior, yet humans show individual differences in how they express themselves. Because AI-based assessment of human behavior relies on data-driven learning, models trained on datasets that do not capture these individual differences may be biased and untrustworthy.
My research aims to address this challenge through the evaluation and application of explainable AI (XAI) methods to make human behavior assessment more transparent.
Currently, I lead the project Inclusive Explainable AI within the MBP group. Starting January 1st, I will head the newly founded group Human-Centered Explainable AI (HCXAI). Within this group, I will lead the project "Effective Explainable AI for High-stakes Decisions in Mental Health Assessment", evaluating how XAI methods can limit over-trust in biased mental health assessment systems.
|July 2021 - Present||Postdoctoral Researcher, Bielefeld University: Multimodal Behavior Processing|
|2019 - 2021||Postdoctoral Researcher, Fraunhofer Institute for Digital Media Technology (IDMT): Industrial Sound Analysis|
|2014 - 2019||PhD and Research Assistant, University of Victoria: New Interfaces for Musical Expression (NIME) and Music Information Retrieval (MIR)|
Thesis: MusE-XR: Musical Experiences in Extended Reality to Enhance Learning and Performance
|2012 - 2014||MSc, College of Charleston: Computing in the Arts|