Komplexpraktikum Computergrafik und Visualisierung / CMS Team Project VC (WS 2025/26)
Modules
(see course catalog)
| SWS 0/0/4: | INF-B-510, INF-B-520, INF-B-530, INF-B-540, INF-MA-PR, INF-E-4; approval pending: additionally INF-VMI-8a and INF-VERT7 |
| SWS 0/0/8: | CMS-VC-TEA (Visual Computing Team Project) |
Organisation
Kickoff Meeting
| Time: | Tuesday, Oct. 21st, 1 pm |
| Place: | Room APB 2026 plus live via BigBlueButton |
General Information
This year we offer three topics, which we will present in detail at the combined kickoff meeting. Whether a topic will actually be offered depends on the number of interested students. For now, please enroll in the OPAL course.
If you cannot make it to the kickoff, the topic slides will be available in OPAL. You can choose your topic until Wednesday, October 22nd by enrolling in the specific topic in OPAL; your topic advisor will then coordinate further organization directly with your team.
Topic "Building Interactive Experiences from Generative Models"
Advisor: Lennart Woidtke
The recent advancements in generative AI have unlocked unprecedented possibilities for content creation, enabling the automated generation of images, music, 3D models, and even coherent dialogue. While many artists and developers use these tools to augment their traditional workflows, this project explores the following question: Can a compelling video game be created using exclusively AI-generated assets? In this project, students will attempt to build a complete game from the ground up, navigating the challenges and opportunities of a fully AI-driven content pipeline. Compute resources will be made available to the students for running models locally.
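As a rough illustration of what "running models locally" can look like in such a pipeline, the snippet below generates a single texture asset with the Hugging Face diffusers library; the checkpoint, prompt, and output path are placeholder assumptions rather than prescribed choices:

```python
# Minimal sketch: generating a texture asset with a locally run
# text-to-image model via Hugging Face diffusers. Checkpoint, prompt,
# and output path are placeholder assumptions.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # any locally available checkpoint
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # e.g., on the provided compute resources

image = pipe("seamless mossy stone wall texture, top-down").images[0]
image.save("stone_wall.png")
```

Comparable pipelines exist for audio, 3D models, and dialogue; surveying and selecting them is the first of the tasks below.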
Basic Tasks:
- Research the landscape of generative models for different asset types (audio, textures, 3D models, dialogue, ...) and select a suite of suitable tools / models.
- Define a game concept that leverages the strengths and works within the limitations of the chosen AI models.
- Set up the necessary pipelines to generate all game assets, from textures and 3D models to sound effects and music.
- Integrate the generated assets into a game engine and implement the core game logic and user interface.
- Critically evaluate the final product and the development process, documenting challenges such as maintaining artistic cohesion and managing asset quality.
Required during the project:
- Ability to work with local AI models (Python scripts, environment setup, running inference code)
- Experience with Linux and the command-line interface (ssh) to utilize remote compute resources
- General programming experience
- Proficiency with version control (Git) for managing code and project assets
- At least one team member with experience in a game engine (Godot, Unreal, Unity)
Nice-to-have:
- Familiarity with tools like Gradio, Hugging Face, or ComfyUI
- Prior experience with the HPC system and Slurm
- Experience with Blender or similar for working with 3D models
- Previous game design or development experience
- Technical art skills for asset post-processing (e.g., cleaning 3D meshes, making textures tileable)
Topic "Kinect Azure Gesture Recognition for 'Digitalisation of the Earth'"
Advisor: Lennart Woidtke
The "Digitalisation of the Earth" project is an interactive art installation where visitors influence a global simulation through physical movement. Currently, this interaction is powered by a legacy Kinect v2 sensor, which detects a predefined set of gestures. However, this older technology suffers from limitations in detection accuracy, particularly with varied body types and challenging viewing angles. This project aims to modernize the installation's core interaction by developing a new, robust gesture recognition system based on the much more recent Kinect Azure. The goal is to build a deep learning-powered application that significantly enhances the reliability and responsiveness of the visitor experience.
Basic Tasks:
- Familiarize with the Kinect Azure SDK and its sensor data streams (RGB, Depth, IR, Body Tracking); a minimal capture sketch follows this list.
- Research and select suitable deep learning architectures for real-time gesture recognition from skeletal or depth data.
- Collect a dataset and train a model capable of accurately classifying the existing gestures (e.g., jumps, squats) in real-time.
- Design and implement a pipeline that allows for easy further data collection and model retraining to support future improvements / extensions.
- Integrate the new recognition system with the existing Digital Earth simulation, replacing the legacy communication module.
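As referenced in the first task, sensor access from Python is available through wrappers such as pyk4a; below is a minimal capture sketch, assuming the official SDK, the pyk4a package, and a connected device:

```python
# Minimal sketch: grabbing color and depth frames via pyk4a, a Python
# wrapper around the Kinect Azure SDK. Assumes the SDK and pyk4a are
# installed and a device is connected.
from pyk4a import PyK4A

k4a = PyK4A()  # default device configuration
k4a.start()

capture = k4a.get_capture()
if capture.color is not None:
    print("color frame:", capture.color.shape)  # BGRA image as a NumPy array
if capture.depth is not None:
    print("depth frame:", capture.depth.shape)  # 16-bit depth map

k4a.stop()
```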
Required during the project:
- Solid fundamentals in machine learning and deep learning
- Experience with setting up deep learning environments for both training and inference
- Understanding of 3D human pose representations (e.g., skeletal joints, kinematics)
- Strong programming skills in a language with established ML frameworks (e.g., Python with PyTorch/TensorFlow/ONNX)
Nice-to-have:
- Previous experience with the Kinect Azure SDK or other RGB-D cameras
- Experience with Protobuf and network communication
- Knowledge of data annotation and dataset management
- Experience with real-time application development
Topic "Generate Dynamic 3D Gaussian Splatting (3DGS) Scene'"
Advisor: Julien Fischer, Zihan Zhang
Dynamic 3DGS is an explicit 3D scene representation that models geometry and appearance with anisotropic Gaussian primitives, enabling efficient and high-fidelity reconstruction and rendering of dynamic scenes. Modeling the position, orientation, scale, and color of Gaussians across time enables a realistic depiction of motion, deformation, and complex dynamic geometry. The goal of this project is to explore dynamic 3DGS scene reconstruction. Students will use multi-camera captured data to generate and compare static and dynamic Gaussian scenes, analyzing differences in rendering quality and performance under various strategies.
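To make the representation concrete, the sketch below lays out the per-primitive parameters that a (dynamic) 3DGS scene optimizes as plain NumPy arrays. The field names, the choice of which parameters vary over time, and the sizes are illustrative assumptions rather than the layout of any particular implementation:

```python
# Minimal sketch: parameter buffers for a dynamic 3DGS scene with
# N Gaussians over T frames. Sizes and time-varying fields are
# illustrative assumptions, not a specific codebase's layout.
import numpy as np

N, T = 100_000, 60  # assumed number of Gaussians and captured frames

gaussians = {
    "position": np.zeros((T, N, 3), dtype=np.float32),   # xyz per frame
    "rotation": np.zeros((T, N, 4), dtype=np.float32),   # unit quaternions
    "scale":    np.zeros((T, N, 3), dtype=np.float32),   # anisotropic extents
    "opacity":  np.zeros((N, 1),    dtype=np.float32),   # often time-invariant
    "sh_color": np.zeros((N, 16, 3), dtype=np.float32),  # spherical-harmonics coefficients
}
```

A per-frame static reconstruction instead optimizes a fully independent parameter set for every frame, while dynamic methods share most parameters and model their change over time; contrasting these two regimes is exactly what the tasks below set up.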
Basic Tasks:
- Literature research on suitable existing works for static/dynamic 3DGS; use public datasets and existing implementations to render both static and dynamic scenes
- Design of interesting dynamic scenes
- Capture of these scenes with a multi-camera setup
- Post-processing of the captures such that they can be used for the selected 3DGS methods
- Creation of:
  - per-frame static 3DGS, and
  - dynamic 3DGS, aiming to achieve the highest possible visual quality
- Compare the results qualitatively and quantitatively (a minimal metric sketch follows this list)
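For the quantitative part of the comparison, a common starting point is per-frame image metrics between rendered views and held-out ground-truth captures. Below is a minimal PSNR sketch; the file names are placeholders, and further metrics such as SSIM or LPIPS would slot in the same way:

```python
# Minimal sketch: PSNR between a rendered frame and a ground-truth
# capture. File names are placeholder assumptions.
import numpy as np
from PIL import Image

def psnr(rendered: np.ndarray, reference: np.ndarray) -> float:
    # Mean squared error over all pixels and channels, in float64
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(255.0 ** 2 / mse)  # 8-bit peak signal

rendered = np.asarray(Image.open("render_frame_000.png").convert("RGB"))
reference = np.asarray(Image.open("gt_frame_000.png").convert("RGB"))
print(f"PSNR: {psnr(rendered, reference):.2f} dB")
```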
Required during the project:
- Understanding of 3D geometry and rendering fundamentals (coordinate systems, matrix transformations, lighting models)
Nice-to-have:
- CG knowledge beyond CG1