In the Virtual Reality project, the theoretical and practical knowledge gained in the lecture and the tutorial on Virtual Reality is applied and deepened in a practical application.
The specific project title is:
Gaze-based Interaction in Virtual Reality
Language: The project may be held in German or English, depending on the participants.
General Goal of the Project
In this project we want to create a virtual environment in which we can prototype gaze-based human-computer interaction. A central idea is that of a mobile cognitive assistant that is able to recognize everyday situations in which a digital assistant may help us.
An Attentive Shopping Assistant for the Supermarket
The concrete application idea we want to realize in the project is that of a mobile assistant for shopping situations.
When shopping for complex products, such as muesli, it can be very time-consuming to find the optimal product (unless you have already settled on a certain product and are "habituated" in your decision): there are several features along which a decision can be optimized (price, calories, sugar, brand, taste, ingredients, way of production, e.g. organic), and there may be further constraints, such as allergies to nuts. A typical supermarket shelf may contain 50 to 200 products of a kind, which leads to complex decision problems. In addition, some information is not directly accessible or visible, so packages need to be picked up and turned around…
The idea is now to use visual attention to detect which products the user is currently attending to and to build an assistant system on this information that provides the relevant information to the user, e.g. by augmenting the shelves with small information bubbles. There are many possible ways to realize such visualizations, and it is difficult to test them under real conditions in a real supermarket.
To this end, we will create a particular shelf in the supermarket with a rich selection of products of a chosen category (muesli is convenient because many packages are similar in shape and thus easy to model). The user will be able to walk through the supermarket using the CAVE and will be equipped with an eye-tracking system to monitor her gaze. Based on the gaze information and the knowledge about the product placements, we will then be able to identify the product currently in focus in real time. The technology for this is already in place and has been tested in other scenarios; we will thus merely need to adapt it to our application example.
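The mapping from gaze information and product placements to the product currently in focus can be sketched as follows. This is a minimal illustration, not the actual system: the shelf layout, slot dimensions, and the `Product` record are assumptions, and the real CAVE/eye-tracking pipeline (3-D gaze rays, calibration) is not shown.

```python
# Hypothetical sketch: map a 2-D gaze hit point on the shelf front onto a
# grid of product slots to identify the attended product. Slot sizes and
# the shelf layout below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    row: int      # shelf row index (0 = bottom)
    col: int      # slot index within the row

SLOT_W, SLOT_H = 0.25, 0.40   # assumed slot width/height in metres

def product_at_gaze(x: float, y: float, shelf: list[Product]):
    """Return the product whose shelf slot contains the gaze hit point
    (x, y), measured in metres from the shelf's lower-left corner."""
    col, row = int(x // SLOT_W), int(y // SLOT_H)
    for p in shelf:
        if p.row == row and p.col == col:
            return p
    return None

shelf = [Product("Crunchy Muesli", 0, 0), Product("Fruit Muesli", 0, 1)]
hit = product_at_gaze(0.30, 0.10, shelf)   # gaze lands in row 0, column 1
```

In the real application, the gaze hit point would come from intersecting the tracked gaze ray with the virtual shelf geometry rather than being given directly.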
Based on the supermarket and the gaze analysis, we will then design different assistant functions and implement several ways of visualizing information for the user.
Typical tasks we could support are:
- Finding a particular product by, e.g.
○ highlighting it in the shelf, or
○ presenting arrows pointing in the direction of the product, or
○ drawing a line from the current point of regard towards the target, etc…
- Finding a selection of products according to a set of preferences, e.g.
○ showing all products that contain / do not contain nuts
- Comparing two products
○ e.g. by depicting their relevant features on side-by-side information sheets, decoupled from their positions in the shelf
- Retrieving additional information about a product, e.g.
○ retrieving user comments from an online platform
○ finding and depicting test results
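The preference-based selection task above can be sketched as a simple constraint filter over product records. The attributes (`price`, `contains_nuts`) and the constraint set are illustrative assumptions, not the project's actual data model.

```python
# Hypothetical sketch of the preference-based selection task: filter the
# shelf contents by a set of user constraints (e.g. "no nuts", price cap).
products = [
    {"name": "Crunchy Muesli", "price": 2.99, "contains_nuts": True},
    {"name": "Fruit Muesli",   "price": 3.49, "contains_nuts": False},
    {"name": "Organic Muesli", "price": 4.19, "contains_nuts": False},
]

def select(products, nut_free=False, max_price=None):
    """Return the products matching the user's preference constraints."""
    result = products
    if nut_free:
        result = [p for p in result if not p["contains_nuts"]]
    if max_price is not None:
        result = [p for p in result if p["price"] <= max_price]
    return result

# Only "Fruit Muesli" satisfies both constraints here.
matches = select(products, nut_free=True, max_price=3.50)
```

The assistant would then visualize the matching set, e.g. by highlighting the corresponding shelf slots.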
The real-time access to eye-gaze information will be used in several ways:
- To identify the product of relevance
- To detect whether the user is attending to the presented information (and thus perhaps fade it in more strongly or make it persist)
- To organize the visual presentation, so that augmented information displays do not occlude relevant products or other information
- To let the user trigger some actions, such as retrieval processes
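One common way to let gaze trigger actions is a dwell-time criterion: an action fires once the gaze rests on the same target for a threshold duration. The following is a minimal sketch under assumptions (timestamped gaze samples already mapped to target IDs, a 0.5 s threshold); it is not the project's actual trigger mechanism.

```python
# Hypothetical dwell-time trigger: fire an action (e.g. a retrieval process)
# once the gaze stays on one target for at least `threshold` seconds.
DWELL_THRESHOLD = 0.5   # assumed fixation duration needed to trigger

def detect_dwell(samples, threshold=DWELL_THRESHOLD):
    """samples: list of (timestamp_s, target_id) gaze samples in order.
    Return the first target fixated continuously for `threshold` seconds,
    or None if no dwell occurs."""
    start_t, current = None, None
    for t, target in samples:
        if target != current:
            start_t, current = t, target   # gaze moved: restart the clock
        elif t - start_t >= threshold:
            return current                  # dwell long enough: trigger
    return None

gaze = [(0.0, "muesli_a"), (0.2, "muesli_a"), (0.4, "muesli_b"),
        (0.6, "muesli_b"), (0.7, "muesli_b"), (1.0, "muesli_b")]
triggered = detect_dwell(gaze)   # "muesli_b" is fixated for 0.6 s
```

A real system would additionally debounce tracker noise and give visual feedback while the dwell timer runs, to avoid the well-known "Midas touch" problem of unintended gaze triggers.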
Procedure
The procedure of the project follows the process of scientific research.
Phase 1 (April): State of the Art and Planning
In the beginning, all participants will discuss the current state of the art regarding gaze-based interaction and augmented information visualization. Relevant scientific questions will then be formulated and the application design will be developed.
Phase 2 (May-June): Implementation
Depending on the number of participants, groups will be formed, each working on a dedicated aspect of the implementation of the design. The planned interaction methods will be implemented.
Phase 3 (June-July): Evaluation
The implemented system will be evaluated in a small user study.
Remarks:
The project makes it easy for students of different disciplines and of different backgrounds to complement each other. While participants of the lecture on Virtual Reality may focus on the design and implementation of the interaction methods, participants of the lecture "Mensch-Maschine-Interaktion" (Human Machine Interaction) may develop the interaction concept and evaluate the implementation.
As a conclusion of the project, a report will be written. This report will take the form of a scientific paper, which might even be submitted to a real scientific workshop, such as the German Workshop for Virtual and Augmented Reality. Participants of the project, in particular those in a master's program, will learn how to structure and write a scientific article. If the article is accepted, participants will also have the chance to present their work at a real scientific workshop (at their own expense).
This project may lay the foundation for many interesting questions or application problems that could be addressed in a bachelor's or master's thesis.
| Module | Course | Assessment |
|---|---|---|
| 23-MeWi-HM5 Praxis-Umgang mit Medien | Course I | graded examination |
| | Course II | coursework |
| | Course III | coursework |
| | Course IV | coursework |
| 39-Inf-VR Virtuelle Realität | Virtuelle Realität | graded examination |
| | Virtuelle Realität | |
| | Virtuelle Realität | ungraded examination |
The binding module descriptions contain further information, including on the assessments and their requirements. Where several forms of assessment are possible, the respective lecturers decide which applies.
| Degree programme | Validity | Variant | Subdivision | Status | Sem. | CP | Graded |
|---|---|---|---|---|---|---|---|
| Bioinformatik und Genomforschung / Bachelor | (enrolment until summer semester 2011) | | Virtuelle Realität; Mensch-Maschine-Interaktion | compulsory elective | 6. | 4 | ungraded |
| Informatik / Bachelor | (enrolment until summer semester 2011) | minor subject | Virtuelle Realität; Mensch-Maschine-Interaktion | compulsory elective | 6. | 4 | ungraded |
| Kognitive Informatik / Bachelor | (enrolment until summer semester 2011) | | Mensch-Maschine-Interaktion; Virtuelle Realität | compulsory elective | 6. | 4 | ungraded |
| Medieninformatik und Gestaltung / Bachelor | (enrolment until summer semester 2011) | | Mensch-Maschine-Interaktion; Virtuelle Realität | compulsory elective | 6. | 4 | ungraded |
| Medienwissenschaft, interdisziplinäre / Master | (enrolment until summer semester 2014) | | Hauptmodul 4; Hauptmodul 6 | compulsory elective | | 4 | ungraded |
| Naturwissenschaftliche Informatik / Bachelor | (enrolment until summer semester 2011) | | Mensch-Maschine-Interaktion; Virtuelle Realität | compulsory elective | 6. | 4 | ungraded |
| Studieren ab 50 | | | | | | | |