Human Action Recognition aims to automatically understand what people are doing in videos, which is essential for applications such as human–robot interaction, workplace safety, and assistive technologies. In this project, you will annotate time-series video data and apply transfer learning using the resulting labeled dataset.

Tasks:

1. Define a set of action classes for the raw video data.
2. Annotate each action instance using an annotation tool such as Label Studio.
3. Fine-tune a pretrained model on the labeled dataset to recognize human actions or detect transitions between actions in video.

References:

- Torchvision video classification models: https://docs.pytorch.org/vision/0.8/models.html#video-classification
- PyTorchVideo model zoo (video understanding library): https://pytorchvideo.readthedocs.io/en/latest/model_zoo.html
- MMAction2 inference guide: https://mmaction2.readthedocs.io/en/latest/user_guides/inference.html
- Hugging Face video classification notebook: https://github.com/huggingface/notebooks/blob/main/examples/video_classification.ipynb
| Frequency | Weekday | Time | Format / Place | Period |
|---|---|---|---|---|
| by appointment | n.V. (by arrangement) | | CITEC 0.114 | 13.04.–24.07.2026 |
| Module | Course | Requirements |
|---|---|---|
| 39-Inf-WP-CIT-x Cognitive Interaction Technology (Focus) / Kognitive Interaktionstechnologie (Schwerpunkt) | Vertiefendes Projekt (advanced project) | Student information |
| | Graded examination | Student information |
| 39-Inf-WP-KI-x Artificial Intelligence (Focus) / Künstliche Intelligenz (Schwerpunkt) | Vertiefendes Projekt (advanced project) | Student information |
| | Graded examination | Student information |
The binding module descriptions contain further information, including specifications on the "types of assignments" students need to complete. Where a module description lists more than one type of assignment, the responsible member of the teaching staff decides which task(s) to assign.