Large Language Models (LLMs) like GPT-3 or GPT-4 have greatly advanced the state of the art in computational linguistics and in AI in general. But since they seem to come so close to human language use, LLMs also receive increasing attention in theoretical and experimental strands of linguistics: researchers are currently debating to what extent LLMs can be seen as models of the human ability to process language and to use it for communication. In this research-oriented seminar, we will first cover some computational basics of LLMs and then dive into this debate: what do LLMs know about human language, and how can we test this? How can we use insights and experimental designs from linguistic research to study LLMs? And perhaps: how can we use LLMs to learn something about human language?
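As a small illustration of what "testing" an LLM's linguistic knowledge can look like, the sketch below compares the probability a model assigns to a grammatical sentence with the probability it assigns to a minimally different ungrammatical one, a design borrowed from psycholinguistic minimal-pair experiments. This is not part of the course materials; it is a hedged example that assumes the Hugging Face transformers library, the small GPT-2 model, and example sentences chosen purely for illustration.

```python
# Minimal-pair probe (illustrative sketch, not seminar material):
# compare the log-probability an LLM assigns to a grammatical sentence
# with that of an ungrammatical counterpart.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sentence_logprob(sentence: str) -> float:
    """Sum of token log-probabilities the model assigns to the sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # Token at position t is predicted from the logits at position t-1.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = ids[0, 1:]
    return log_probs.gather(1, targets.unsqueeze(1)).sum().item()

# Subject-verb agreement minimal pair (hypothetical example items).
good = "The keys to the cabinet are on the table."
bad = "The keys to the cabinet is on the table."
print(sentence_logprob(good) > sentence_logprob(bad))  # ideally True
```

If a model systematically prefers the grammatical member of many such pairs, that is one piece of evidence about what it has learned about, for example, subject-verb agreement.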
Some background knowledge in one of the following areas would be great:
Frequency | Weekday | Time | Format / Place | Period
---|---|---|---|---
weekly | Mon | 14-16 | X-B3-117 | 13.10.2025-06.02.2026
Module | Course | Requirements
---|---|---
23-LIN-Inf Foundations of Computational Linguistics for Computer Science Students | Course from the area of computational linguistics foundations | Study requirement
 | Course from the area of computational linguistics foundations | Study requirement
 | Course from the area of computational linguistics foundations | Study requirement
 | - | Graded examination
23-LIN-MaCL-MethAngewCL Methods of Applied Computational Linguistics | Course 1 | Study requirement
 | Course 2 | Study requirement
The binding module descriptions contain further information, including specifications on the "types of assignments" students need to complete. In cases where a module description mentions more than one kind of assignment, the respective member of the teaching staff decides which task(s) to assign to the students.