Autonomous robots and vehicles operating in environments shared with people must reliably estimate the 3D position of humans and objects to ensure safe operation. Traditionally, this requires dedicated depth sensors such as stereo cameras or LiDAR, which increase system complexity, cost, and calibration effort. In this project, you will investigate whether object detection models combined with monocular depth estimation models can be used to detect and track people in 3D using a single camera. Using state-of-the-art models (e.g. Depth Anything), you will evaluate depth accuracy, 3D localization error, and inference speed, and compare the results against measurements from physical depth sensors. Experiments will be conducted using a calibrated setup of stereo cameras and LiDARs.
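To illustrate the core idea of the pipeline: given a 2D person detection and a metric depth map from a monocular model, the detection can be back-projected into 3D camera coordinates with the pinhole camera model. The sketch below shows this step only; the intrinsics, bounding box, and constant depth map are illustrative placeholders, not values from the actual experimental setup.

```python
import numpy as np

def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with metric depth into 3D camera coordinates."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

# Hypothetical pinhole intrinsics for a 640x480 image (illustrative values).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0

# Hypothetical person bounding box (x1, y1, x2, y2) from an object detector.
bbox = (280, 160, 360, 400)

# `depth` stands in for a dense metric depth map (H x W, metres) as predicted
# by a monocular model such as Depth Anything; here it is a constant dummy map.
depth = np.full((480, 640), 3.0)

# Take the median depth inside the box to be robust to background pixels.
x1, y1, x2, y2 = bbox
d = float(np.median(depth[y1:y2, x1:x2]))

# Back-project the box centre to obtain the person's 3D position (metres).
u, v = (x1 + x2) / 2, (y1 + y2) / 2
p = pixel_to_3d(u, v, d, fx, fy, cx, cy)
print(p)
```

In the project, the resulting 3D positions would be compared against reference measurements from the calibrated stereo camera and LiDAR setup to quantify localization error.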
| Frequency | Weekday | Time | Format / Place | Period |
|---|---|---|---|---|
| by appointment | by arrangement | | | 13.04.-24.07.2026 |
| Module | Course | Requirements |
|---|---|---|
| 39-M-Inf-ASE-app-foc_a Applied Autonomous Systems Engineering (focus) | Applied Autonomous Systems Engineering (focus): Project | Study requirement |

Student information
The binding module descriptions contain further information, including specifications on the "types of assignments" students need to complete. In cases where a module description mentions more than one kind of assignment, the respective member of the teaching staff will decide which task(s) to assign.