Depending on the size of the project (5 or 10 LP, i.e. credit points), different variants of the following topics can be considered.
If you are interested in any of them, please contact Jay (Isaac) Roberts (iroberts[at]techfak.uni-bielefeld.de).
-------------------------------------------------------------------------------------------------------------------------------------------
Shapley Interaction of Concepts
Many existing methods for automatic concept extraction rely on activations from the penultimate layer of deep neural networks. While this has produced promising results in explainable AI (XAI), such approaches most likely overlook complex interactions between concepts, because the final classification layer is linear. Additionally, methods based on Shapley values typically perturb features in the input space. This is sensible for tabular data but becomes more complicated for high-dimensional data such as images and text. This project aims to develop methods that capture and analyze concept interactions using Shapley-based interaction indices, and to investigate how these interactions can provide more nuanced explanations of model behavior, applied specifically to Concept Bottleneck Models (CBMs) or to another layer of a deep network.
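To make the idea concrete, here is a minimal, exact computation of the pairwise Shapley interaction index (Grabisch & Roubens); the value function v below is a hypothetical stand-in, where in the project it would, e.g., score the model with subsets of concepts masked in some chosen layer:

```python
from itertools import combinations
from math import factorial

def shapley_interaction(v, n, i, j):
    """Exact pairwise Shapley interaction index for players i, j over
    {0, ..., n-1}, given a value function v mapping a frozenset of
    players to a real number. Exponential in n, so for toy sizes only."""
    others = set(range(n)) - {i, j}
    total = 0.0
    for size in range(len(others) + 1):
        # Coalition weight for this size (Grabisch & Roubens, 1999).
        w = factorial(size) * factorial(n - size - 2) / factorial(n - 1)
        for S in combinations(others, size):
            S = frozenset(S)
            # Discrete second-order derivative of v at S w.r.t. {i, j}.
            delta = v(S | {i, j}) - v(S | {i}) - v(S | {j}) + v(S)
            total += w * delta
    return total

# Toy value function over 4 "concepts": concepts 0 and 1 are only
# useful together, so their interaction index should be positive.
def v(S):
    return (1.0 if {0, 1} <= S else 0.0) + 0.1 * len(S)

print(shapley_interaction(v, n=4, i=0, j=1))  # 1.0: synergy
print(shapley_interaction(v, n=4, i=2, j=3))  # 0.0: no interaction
```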
Relevant Literature:
1. Shapley Interactions - https://proceedings.neurips.cc/paper_files/paper/2023/file/264f2e10479c9370972847e96107db7f-Paper-Conference.pdf
2. Concepts - https://openaccess.thecvf.com/content/CVPR2023/papers/Fel_CRAFT_Concept_Recursive_Activation_FacTorization_for_Explainability_CVPR_2023_paper.pdf
-------------------------------------------------------------------------------------------------------------------------------------------
Probabilistic Concept Extraction
Variational Autoencoders (VAEs) have shown strong performance in tasks such as out-of-distribution detection and image reconstruction, yet they have rarely been explored for concept extraction. Using a Sparse VAE could allow concepts to be modeled as multivariate Gaussian distributions, potentially offering new insights into model uncertainty. This project will investigate whether a Sparse VAE is suitable for concept extraction and, if successful, explore how it can be used to explain model uncertainty.
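As a possible starting point, the following sketch shows a plain VAE whose latent code is read as a concept vector, with sparsity approximated by an L1 penalty on the posterior means. This is an illustrative assumption; the Sparse Coding VAE in the cited paper uses a different sparsity mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseVAE(nn.Module):
    """Minimal VAE whose latent code doubles as a concept vector.
    Each concept gets a diagonal-Gaussian posterior (mu, logvar);
    sparsity is encouraged via an L1 penalty on the posterior means."""
    def __init__(self, in_dim=784, hidden=256, n_concepts=32):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, n_concepts)
        self.logvar = nn.Linear(hidden, n_concepts)
        self.dec = nn.Sequential(nn.Linear(n_concepts, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return self.dec(z), mu, logvar

def loss_fn(x, x_hat, mu, logvar, beta=1.0, l1=1e-3):
    recon = F.mse_loss(x_hat, x, reduction="mean")
    # KL divergence between q(z|x) and the standard normal prior.
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    sparsity = mu.abs().mean()  # push most concept activations towards zero
    return recon + beta * kl + l1 * sparsity
```

The per-concept (mu, logvar) pairs are what would be analyzed to relate concept uncertainty to model uncertainty.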
Relevant Literature:
1. Concepts - https://openaccess.thecvf.com/content/CVPR2023/papers/Fel_CRAFT_Concept_Recursive_Activation_FacTorization_for_Explainability_CVPR_2023_paper.pdf
2. VAEs - https://arxiv.org/abs/1906.02691
3. Sparse VAEs - https://direct.mit.edu/neco/article/36/12/2571/124821/Sparse-Coding-Variational-Autoencoders
-------------------------------------------------------------------------------------------------------------------------------------------
3D Concept Extraction
Concepts have gained significant attention as an explainable AI (XAI) technique due to their human interpretability. While concept-based explanations are widely studied in image and text domains, much less work has been done in the context of 3D data. This project will explore whether human-interpretable concepts can be identified in 3D datasets, and if so, evaluate their suitability as an explanation technique.
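One open design question is what replaces the random image crops that CRAFT factorizes. A plausible (but unverified) analogue for point clouds is local ball neighborhoods around sampled centers, sketched below; the neighborhoods would then be encoded with a 3D backbone such as a GeoConv network and the activations factorized as in CRAFT.

```python
import numpy as np

def sample_local_patches(points, n_patches=64, radius=0.2, max_pts=256, rng=None):
    """Sample ball neighborhoods from a point cloud (N, 3) as a 3D
    analogue of the random image crops CRAFT factorizes. Each patch is
    re-centered so the downstream encoder sees a local shape."""
    rng = rng or np.random.default_rng()
    centers = points[rng.choice(len(points), size=n_patches, replace=False)]
    patches = []
    for c in centers:
        neigh = points[np.linalg.norm(points - c, axis=1) < radius]
        if len(neigh) > max_pts:  # subsample overly dense neighborhoods
            neigh = neigh[rng.choice(len(neigh), size=max_pts, replace=False)]
        patches.append(neigh - c)  # translate into the local frame
    return patches

cloud = np.random.rand(4096, 3)           # placeholder point cloud
patches = sample_local_patches(cloud, n_patches=8)
print([p.shape for p in patches])
```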
Relevant Literature:
1. Concepts - https://openaccess.thecvf.com/content/CVPR2023/papers/Fel_CRAFT_Concept_Recursive_Activation_FacTorization_for_Explainability_CVPR_2023_paper.pdf
2. GeoConv - https://openreview.net/pdf?id=b0elDO9v31
-------------------------------------------------------------------------------------------------------------------------------------------
Conceptualizing Concept Drift
Concepts have recently gained attention as an explainable AI (XAI) tool because of their human interpretability. While recent work shows that concepts can provide meaningful explanations for high-dimensional data drift, many open questions remain. This topic offers several directions, such as studying the properties of the embeddings required for concept extraction, adapting concept-based drift localization to online settings, extending the current approach from images to different data domains such as text, or developing new methods for detecting drift directly through concepts.
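To illustrate the last direction, here is a minimal sketch (an illustrative assumption, not the method from the cited ESANN paper): project a reference window and a current window of activations onto a fixed concept bank and run a per-concept two-sample test.

```python
import numpy as np
from scipy.stats import ks_2samp

def concept_drift_scores(acts_ref, acts_cur, concept_bank):
    """Compare two windows of network activations concept-by-concept.
    acts_ref, acts_cur: (n, d) activation matrices; concept_bank: (k, d)
    concept directions (e.g. NMF components from a CRAFT-style
    extraction). Returns one KS p-value per concept; small values flag
    drift and localize it to a concept."""
    proj_ref = acts_ref @ concept_bank.T  # (n_ref, k) concept scores
    proj_cur = acts_cur @ concept_bank.T  # (n_cur, k)
    return np.array([ks_2samp(proj_ref[:, j], proj_cur[:, j]).pvalue
                     for j in range(concept_bank.shape[0])])

# Toy example: drift only along concept 0.
rng = np.random.default_rng(0)
bank = np.eye(3, 16)                      # 3 orthogonal "concepts" in d=16
ref = rng.normal(size=(500, 16))
cur = rng.normal(size=(500, 16))
cur[:, 0] += 1.5                          # shift the first direction
print(concept_drift_scores(ref, cur, bank))  # concept 0 gets a tiny p-value
```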
Relevant Literature:
1. Concepts - https://openaccess.thecvf.com/content/CVPR2023/papers/Fel_CRAFT_Concept_Recursive_Activation_FacTorization_for_Explainability_CVPR_2023_paper.pdf
2. Model-based Drift Explanations - https://www.sciencedirect.com/science/article/pii/S0925231223007634
3. Conceptualizing Concept Drift - https://www.esann.org/sites/default/files/proceedings/2025/ES2025-117.pdf
-------------------------------------------------------------------------------------------------------------------------------------------
All topics use some variant of unsupervised concept learning for XAI. Hence, recent papers on that topic are relevant, in particular:
The CRAFT paper: https://openaccess.thecvf.com/content/CVPR2023/papers/Fel_CRAFT_Concept_Recursive_Activation_FacTorization_for_Explainability_CVPR_2023_paper.pdf
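At its core, CRAFT factorizes the (non-negative, post-ReLU) activations of random image crops with NMF. The sketch below shows only that core step with scikit-learn, omitting the recursive refinement and the Sobol-based importance estimation of the full method:

```python
import numpy as np
from sklearn.decomposition import NMF

def extract_concepts(activations, n_concepts=10):
    """Factorize a matrix of non-negative activations A ~ U @ W into
    per-crop concept coefficients U (n, k) and a concept bank W (k, d),
    as in the core step of CRAFT."""
    model = NMF(n_components=n_concepts, init="nndsvda", max_iter=500)
    U = model.fit_transform(activations)  # concept presence per crop
    W = model.components_                 # concept directions in feature space
    return U, W

# In practice the activations would come from, e.g., the penultimate
# layer of a trained network applied to random crops of the inputs.
A = np.random.rand(200, 512)              # placeholder activations
U, W = extract_concepts(A, n_concepts=10)
print(U.shape, W.shape)                   # (200, 10) (10, 512)
```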
| Frequency | Weekday | Time | Format / Place | Period |
|---|---|---|---|---|
| by appointment (n.V.) | | | | 13.10.2025-06.02.2026 |
| Module | Course | Requirements | |
|---|---|---|---|
| 39-M-Inf-INT-app-foc_a Applied Interaction Technology (focus) | Applied Interaction Technology (focus): Projekt | Study requirement | Student information |
The binding module descriptions contain further information, including specifications on the "types of assignments" students need to complete. In cases where a module description mentions more than one kind of assignment, the respective member of the teaching staff will decide which task(s) they assign to the students.