The goal of multimodal affective computing is to predict emotional states from data such as facial images, speech, or videos. State-of-the-art methods for affective computing increasingly rely on complex black-box models, making it difficult to comprehend how a model arrived at a particular decision. In this project, you will implement state-of-the-art XAI methods for an affective computing task. You will then use these methods to conduct user studies that evaluate how effective and understandable the methods are at explaining model decisions.
In case the proposal does not attract enough students for a team project, it can be adapted into an individual project or a project for two students (tandem project).
Required skills:
- Python, PyTorch preferred (TensorFlow is also ok)
- Some experience with or knowledge of Human-Computer Interaction and user studies
- Experience implementing Deep Learning models, especially video or multimodal models
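As a starting point for the implementation work, the sketch below shows one of the simplest XAI techniques, a vanilla gradient saliency map, in PyTorch. The tiny classifier and the 7-class output are hypothetical placeholders; in the project they would be replaced by a real (multimodal) affect-recognition model.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier standing in for an affect-recognition model.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 7),  # e.g. 7 basic emotion classes (assumption)
)
model.eval()

def saliency(model, image):
    """Vanilla gradient saliency: |d score / d pixel| for the predicted class."""
    image = image.clone().requires_grad_(True)
    logits = model(image.unsqueeze(0))          # add batch dimension
    pred = logits.argmax(dim=1)                 # predicted class index
    logits[0, pred].backward()                  # gradient of the top score
    # Aggregate absolute gradients over the channel dimension.
    return image.grad.abs().max(dim=0).values

img = torch.rand(3, 32, 32)                     # dummy input image
sal = saliency(model, img)                      # per-pixel relevance map
print(sal.shape)                                # torch.Size([32, 32])
```

The resulting heatmap highlights which pixels most influenced the prediction; in a user study, such maps would be shown to participants to assess their understandability.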
Module | Course | Assessment |
---|---|---|
39-M-Inf-GP Grundlagenprojekt Intelligente Systeme | additional project | ungraded examination |
Study information |
The binding module descriptions contain further information, including on the "assessments" and their requirements. If several forms of assessment are possible, the respective instructors decide which one applies.