The goal of multimodal affective computing is to predict emotional states from multimodal data such as videos, which contain audio, visual, and possibly textual modalities. State-of-the-art methods for affective computing increasingly rely on complex multimodal black-box models, including deep learning methods. While explainable AI (XAI) has received growing attention over the last years, most methods have been developed for single modalities, with only limited research on explaining the importance of, and the dynamics between, different modalities. In this project, you will develop a multimodal affective computing model, and research and implement methods to explain the importance and dynamics of each of the model's modalities.
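As one possible starting point, modality importance can be estimated by ablation: zero out one modality's input and measure how much the loss increases. The sketch below is a minimal, hypothetical PyTorch example; the late-fusion architecture, the feature dimensions, and the `modality_importance` helper are illustrative assumptions, not part of the project specification.

```python
import torch
import torch.nn as nn

class LateFusionAffectModel(nn.Module):
    """Hypothetical late-fusion model: one small encoder per modality,
    fused by concatenation before a shared classification head."""
    def __init__(self, audio_dim=40, visual_dim=64, text_dim=32,
                 hidden=16, n_emotions=6):
        super().__init__()
        self.encoders = nn.ModuleDict({
            "audio": nn.Linear(audio_dim, hidden),
            "visual": nn.Linear(visual_dim, hidden),
            "text": nn.Linear(text_dim, hidden),
        })
        self.head = nn.Linear(hidden * 3, n_emotions)

    def forward(self, inputs, ablate=None):
        # `ablate` optionally names a modality whose features are zeroed out
        feats = []
        for name, enc in self.encoders.items():
            x = inputs[name]
            if name == ablate:
                x = torch.zeros_like(x)
            feats.append(torch.relu(enc(x)))
        return self.head(torch.cat(feats, dim=-1))

def modality_importance(model, inputs, labels):
    """Ablation-based importance: the loss increase when one modality
    is zeroed, relative to the unablated baseline loss."""
    loss_fn = nn.CrossEntropyLoss()
    with torch.no_grad():
        base = loss_fn(model(inputs), labels).item()
        return {m: loss_fn(model(inputs, ablate=m), labels).item() - base
                for m in model.encoders}
```

Ablation is only the simplest baseline; gradient-based attribution or attention analysis would be natural next steps for studying cross-modal dynamics.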
If the proposal does not attract enough students for a team project, it can be adapted into an individual project or a project for two students (tandem project).
Required skills
- Python; PyTorch preferred (TensorFlow is also acceptable)
- Experience implementing Deep Learning models, especially video or multimodal models
Rhythm | Day | Time | Format / Location | Period
---|---|---|---|---
by arrangement | | | | 03.04.–14.07.2023
Module | Course | Requirements |
---|---|---|---
39-M-Inf-GP Grundlagenprojekt Intelligente Systeme | further project | ungraded examination (unbenotete Prüfungsleistung) | Studieninformation
The binding module descriptions contain further information, including details on the "requirements" (Leistungen) and what they entail. Where several forms of assessment are possible, the respective lecturers decide which one applies.