Autonomous robots and vehicles operating in environments shared with people must reliably estimate the 3D position of humans and objects to ensure safe operation. Traditionally, this requires dedicated depth sensors such as stereo cameras or LiDAR, which increase system complexity, cost, and calibration effort. In this project, you will investigate whether object detection models combined with monocular depth estimation models can be used to detect and track people in 3D using a single camera. Using state-of-the-art models (e.g. Depth Anything), you will evaluate depth accuracy, 3D localization error, and inference speed, and compare the results against measurements from physical depth sensors. Experiments will be conducted using a calibrated setup of stereo cameras and LiDARs.
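The core geometric step of the pipeline described above can be sketched in a few lines: given a 2D detection (a bounding box from an object detector) and a per-pixel depth map (as produced by a model such as Depth Anything), the detection is back-projected into a 3D camera-frame point using the pinhole camera intrinsics. The sketch below is a minimal illustration under simplifying assumptions: it assumes the depth map is already in metric units (monocular models often output relative depth that must be scaled first), and all function names and the synthetic example data are hypothetical, not part of any particular library.

```python
import numpy as np

def pixel_to_3d(u, v, z, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth z into a 3D camera-frame point."""
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])

def localize_detection(bbox, depth_map, intrinsics):
    """Estimate the 3D position of a detected person (hypothetical helper).

    bbox: (x1, y1, x2, y2) in pixels; depth_map: metric depth in metres;
    intrinsics: (fx, fy, cx, cy) from camera calibration.
    The median depth inside the box makes the estimate robust to
    background pixels that fall within the bounding box.
    """
    x1, y1, x2, y2 = bbox
    fx, fy, cx, cy = intrinsics
    z = float(np.median(depth_map[y1:y2, x1:x2]))
    u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    return pixel_to_3d(u, v, z, fx, fy, cx, cy)

# Synthetic example: a person 3 m away in a 640x480 frame.
depth = np.full((480, 640), 10.0)      # background at 10 m
depth[100:400, 250:350] = 3.0          # person region at 3 m
pos = localize_detection((250, 100, 350, 400), depth,
                         (600.0, 600.0, 320.0, 240.0))
print(pos)  # -> [-0.1, 0.05, 3.0]
```

In the project, the same back-projection would be compared point-for-point against stereo-camera and LiDAR measurements to quantify 3D localization error.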
| Frequency | Day | Time | Format / Location | Period |
|---|---|---|---|---|
| by arrangement | by arrangement | | | 13.04.–24.07.2026 |
| Module | Course | Assessments |
|---|---|---|
| 39-M-Inf-ASE-app-foc_a Applied Autonomous Systems Engineering (focus) | Applied Autonomous Systems Engineering (focus): Projekt | Studienleistung (ungraded coursework) |
The binding module descriptions contain further information, including on the assessments and their requirements. Where several forms of assessment are possible, the respective lecturers decide which applies.