Within this 5- or 10-ECTS project, we offer interesting research questions at the intersection of robotics and (deep) learning. If you are generally interested in one or, ideally, both of these topics, please approach the organizer in person (CITEC 2.035) or via email (rhaschke@techfak.de) to discuss concrete project ideas tailored to your specific interests and skills.
The following list provides some examples and explicitly doesn't aim to be exhaustive:
• Learning Visual/Tactile Servoing Skills
A key capability of robots is to servo their end-effector into a desired configuration relative to the environment. Classically, this is achieved via offline planning and open-loop execution of the planned trajectories. Obviously, this approach fails in the presence of uncertainties, e.g. inaccurate kinematic models, environment perception errors, or unmodeled environment dynamics. To overcome these limitations, visual and tactile servoing approaches have been proposed [1, 2, 3], which close the control loop via visual or tactile feedback.
Traditionally, visual and tactile features were designed manually. Within this project, you should implement and evaluate modern deep-learning-based approaches to learn suitable features. Methodologically, various options are available:
o Via self-supervised learning, local interaction Jacobians could be learned. A key research question is to find a suitable feature representation that is robust, task-specific, interpretable, and ideally linear w.r.t. control (a minimal sketch of this idea follows after these options).
o Many recent approaches use Deep Reinforcement Learning (DRL) to learn a policy in an end-to-end fashion. While quite successful, these methods are very data-hungry and generalize poorly beyond the training task.
o Alternatively, RL could be used to explore the feature-action manifold, which is learned efficiently via local linear models [4]. The reward function should focus on controllability/predictability of action outcomes, while exploration should ensure coverage of the manifold.
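To make the first option concrete, here is a minimal Python sketch, assuming a feature vector has already been extracted from camera or tactile images (e.g. by a learned network): a local interaction Jacobian is fit by least squares to pairs of exploratory motions and observed feature changes, and its pseudo-inverse yields a classical resolved-rate servoing law. All names are illustrative placeholders, not part of any existing codebase:

```python
# Minimal sketch: self-supervised estimation of a local interaction Jacobian
# and its use in a simple servoing controller. Assumes features are already
# extracted; nothing here is a fixed API.
import numpy as np

def estimate_jacobian(delta_actions, delta_features):
    """Fit a local linear model delta_feature ~ J @ delta_action.

    delta_actions:  (N, dof) small exploratory end-effector motions
    delta_features: (N, d)   observed feature changes
    """
    # Least squares solves delta_actions @ X = delta_features for X = J.T
    J_T, *_ = np.linalg.lstsq(delta_actions, delta_features, rcond=None)
    return J_T.T  # shape (d, dof)

def servo_step(J, feature, target_feature, gain=0.5):
    """One step of resolved-rate servoing: reduce the feature error
    along the pseudo-inverse of the (learned) Jacobian."""
    error = target_feature - feature
    return gain * np.linalg.pinv(J) @ error  # velocity command, shape (dof,)
```

In a self-supervised setting, the exploratory motion/feature-change pairs would be collected by the robot itself, and the Jacobian re-estimated locally as the end-effector moves.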
• Building semantic scene representations
• Learning to grasp or manipulate objects, e.g. performing peg-in-hole tasks, opening jars, loading/unloading a dishwasher, or grasping objects from a fridge
• Learning complex action sequences and distilling the learned policy into behavior trees or hierarchical state machines of basic (servoing) controllers (see the sketch after this list)
• Designing, executing, and evaluating studies to compare motor learning strategies in humans and artificial agents
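As a rough illustration of the behavior-tree distillation idea above, the following hypothetical sketch wraps basic (servoing) controllers in leaf nodes and chains them with a Sequence node. All names are illustrative, not from an existing library:

```python
# Minimal behavior-tree sketch: leaves wrap low-level controllers,
# composites define the control flow of the distilled policy.
from enum import Enum

class Status(Enum):
    SUCCESS = 0
    FAILURE = 1
    RUNNING = 2

class ControllerLeaf:
    """Leaf node: delegates one tick to a low-level controller."""
    def __init__(self, controller):
        # controller: callable mapping the current state to a Status
        self.controller = controller

    def tick(self, state):
        return self.controller(state)

class Sequence:
    """Composite node: runs children in order, failing or pausing early."""
    def __init__(self, children):
        self.children = children

    def tick(self, state):
        for child in self.children:
            status = child.tick(state)
            if status != Status.SUCCESS:
                return status  # propagate FAILURE or RUNNING
        return Status.SUCCESS

# Usage: a pick-and-place skill as a sequence of learned controllers;
# `approach`, `grasp`, and `retract` are hypothetical callables.
# skill = Sequence([ControllerLeaf(approach), ControllerLeaf(grasp),
#                   ControllerLeaf(retract)])
# status = skill.tick(current_state)
```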
Robot experiments will be performed in simulation first (NVIDIA Isaac Sim or MuJoCo) and can be transferred to our real robots (a Franka 7-DoF arm and the RBO3 hand) upon success.
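Since experiments start in simulation, a minimal MuJoCo example may help to get started. The sketch below uses the official `mujoco` Python bindings with a toy single-joint model; an actual robot description (e.g. a Franka arm MJCF) would be loaded the same way:

```python
# Minimal MuJoCo loop: load a toy model, step the physics, read joint state.
import mujoco

XML = """
<mujoco>
  <worldbody>
    <body pos="0 0 1">
      <joint name="hinge" type="hinge" axis="0 1 0"/>
      <geom type="capsule" size="0.02" fromto="0 0 0 0.3 0 0"/>
    </body>
  </worldbody>
</mujoco>
"""

model = mujoco.MjModel.from_xml_string(XML)
data = mujoco.MjData(model)

for _ in range(1000):
    mujoco.mj_step(model, data)  # advance the simulation by one timestep

print(data.qpos)  # joint position after two simulated seconds of swinging
```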
| Frequency | Day | Time | Format / Location | Period |
|---|---|---|---|---|
| by arrangement | by arrangement (n.V.) | | | 13.10.2025-06.02.2026 |
| Module | Course | Requirements |
|---|---|---|
| 39-M-Inf-AI-app-foc_a Applied Artificial Intelligence (focus) | Applied Artificial Intelligence (focus): Project | coursework (Studienleistung); graded examination |
| 39-M-Inf-ASE-app-foc_a Applied Autonomous Systems Engineering (focus) | Applied Autonomous Systems Engineering (focus): Project | coursework (Studienleistung) |
| 39-M-Inf-P Project | Project | ungraded examination |
The binding module descriptions contain further information, including on the "requirements" and their criteria. If several forms of assessment are possible, the respective teachers decide which one applies.