Human Action Recognition aims to automatically understand what people are doing in videos, which is essential for applications such as human–robot interaction, workplace safety, and assistive technologies. In this project, you will annotate time-series video data and apply transfer learning using the resulting labeled dataset.

Tasks:

1. Define a set of action classes for the raw video data.
2. Annotate each action instance using an annotation tool such as Label Studio.
3. Fine-tune a pretrained model on the labeled dataset to recognize human actions or detect transitions between actions in video.

References:

- Torchvision video classification models: https://docs.pytorch.org/vision/0.8/models.html#video-classification
- PyTorchVideo model zoo (video understanding library): https://pytorchvideo.readthedocs.io/en/latest/model_zoo.html
- MMAction2 inference guide: https://mmaction2.readthedocs.io/en/latest/user_guides/inference.html
- Hugging Face video classification notebook: https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb
| Schedule | Day / Time | Format / Location | Period |
|---|---|---|---|
| by arrangement | by arrangement (n.V.) | CITEC 0.114 | 13.04.–24.07.2026 |
| Module | Course | Assessment |
|---|---|---|
| 39-Inf-WP-CIT-x Cognitive Interaction Technology (specialization) | In-depth project | Study information |
| - | graded examination | Study information |
| 39-Inf-WP-KI-x Artificial Intelligence (specialization) | In-depth project | Study information |
| - | graded examination | Study information |
The binding module descriptions contain further information, including on the assessments and their requirements. Where several forms of assessment are possible, the respective lecturers decide which one applies.