Komplexpraktikum Computergraphik und Visualisierung (WS 2024/25)
Modules
SWS 0/0/4
INF-B-510, INF-B-520, INF-B-530, INF-B-540,
INF-VERT2, INF-VMI-8, INF-VMI-8a, (for INF-VERT7 talk to Prof. Gumhold),
INF-MA-PR, INF-E-4, INF-04-KP, MINF-04-KP-FG1, IST-05-KP
SWS 0/0/8
CMS-VC-TEA (Visual Computing Team Project)
Organisation
Kickoff Meeting
Time: Wednesday, Oct. 14th, 15:00
Place: Room 2101 and live via BigBlueButton (link is published in OPAL)
General Information
This year we offer three topic options, described below. We will also present these topics in detail at a combined kickoff meeting. Whether a topic is actually offered depends on the number of interested students. For now, please enroll in the OPAL course – you can already choose your preferred topic once you have registered there, and you can change your preference (or withhold it completely) at any time until after the initial kickoff meeting.
If you cannot make it to the kickoff, the topic slides will be accessible in OPAL and we will try to make a recording of the meeting available. You can choose your topic until Sunday, October 13th; after that, your topic advisor will coordinate further organization directly with your team. You can also change your choice after the kickoff meeting, until Tuesday, October 15th.
Topic "Digital Life Project: Autonomous 3D Characters With Social Intelligence"
Advisor: Kristijan Bartol
The metaverse and related applications are becoming increasingly popular with advances in VR and AR technology. Human avatars are a crucial component in creating immersive 3D virtual worlds. An interesting aspect of virtual avatars is creating and simulating worlds in which avatars learn and develop to behave like humans, exhibiting a form of social intelligence. In this project, a state-of-the-art digital life project (https://digital-life-project.com/) is used to develop and analyze different social scenarios between autonomous digital avatars. In addition, instead of using the "default" avatars, the group should create virtual replicas of themselves using 3D scanning and reconstruction models, and animate them.
Basic Tasks
- Run different social scenarios between autonomous virtual avatars based on short descriptions
- Estimate 3D reconstructions of the team members from single images using deep learning models
- Estimate body and garment textures
- Animate the reconstructed avatars
- Replace the original avatars with the animatable, 3D-reconstructed avatars
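Animating the reconstructed avatars hinges on skinning the mesh to a rigged skeleton, most commonly via linear blend skinning. A minimal numpy sketch of that technique (function name and toy data are illustrative, not tied to any specific model):

```python
import numpy as np

def linear_blend_skinning(rest_verts, bone_transforms, weights):
    """Deform rest-pose vertices by a weighted blend of bone transforms.

    rest_verts:      (V, 3) rest-pose vertex positions
    bone_transforms: (B, 4, 4) current world transform of each bone
    weights:         (V, B) skinning weights; each row sums to 1
    """
    v_count = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((v_count, 1))])     # (V, 4)
    # Where each bone alone would carry every vertex: (B, V, 4)
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, homo)
    # Blend the per-bone positions with the skinning weights: (V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)
    return blended[:, :3]
```

Each vertex ends up at the weighted average of where its bones would carry it; production rigs layer corrective shapes on top, but this is the core of most real-time avatar animation.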
Optional Tasks
- Try out different 3D virtual environments
- Combine real 3D scanned environments of the Faculty building with the digital avatars while avoiding major collisions
- Set the camera viewpoint to track the left/right eye of the selected avatar
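The eye-tracking camera task amounts to rebuilding a look-at view matrix every frame from the tracked eye position. A minimal numpy sketch, assuming a right-handed, OpenGL-style convention (the camera looks down its negative z axis; names are illustrative):

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a world-to-view matrix looking from `eye` towards `target`."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    f = target - eye
    f /= np.linalg.norm(f)                         # forward
    r = np.cross(f, up)
    r /= np.linalg.norm(r)                         # right
    u = np.cross(r, f)                             # orthogonal true up
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = r, u, -f
    view[:3, 3] = -view[:3, :3] @ eye              # move world into view space
    return view
```

For the task above, `eye` would be the tracked left/right eye-joint position of the selected avatar and `target` a point along its gaze direction, recomputed each frame.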
Requirements
- Experience with setting up deep-learning-based environments (inference)
- Understand basic 3D scene representations
- Understand texturing
- Experience with rigging 3D characters (essential!)
Nice-to-have
- Adapting and fine-tuning deep learning models
- Understanding of collisions and experience with body-scene collision management
- Experience with different 3D scene representations, e.g., effectively combining meshes with point clouds
- Hands-on experience with 3D transformations (camera, scene, objects)
Topic "Immersive Point2Floorplan: Floorplan generation from 3D point clouds"
Advisor: Tianfang Lin
Reconstructing the floorplan of indoor scenes from raw 3D data is crucial for various applications, including indoor scene rendering, understanding, furnishing, and reproduction. With the availability of large-scale point clouds of rooms and buildings, our goal is to extract 2D and 3D floorplans from these datasets and apply the resulting models in virtual reality environments.
Basic Tasks
- Detect and extract the floor plane points
- Extract boundary points at different height levels
- Approximate the boundary lines and reconstruct the floorplan
- Generate meshes of the walls, ceiling, and floor
- Visualize the floorplan in VR
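The first task, detecting the dominant floor plane, is commonly solved with RANSAC plane fitting. A self-contained numpy sketch (the threshold and iteration count are illustrative defaults, not project requirements):

```python
import numpy as np

def ransac_plane(points, iters=200, threshold=0.02, rng=None):
    """Fit a dominant plane n·p + d = 0 to an (N, 3) point cloud via RANSAC."""
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = None
    for _ in range(iters):
        # Hypothesize a plane from three random points.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:                 # degenerate (collinear) sample
            continue
        n /= norm
        d = -n @ p0
        # Score by how many points lie within the distance threshold.
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers
```

Running this once finds the floor; the inlier points can then be dropped and the boundary extracted from horizontal slices of the remaining cloud at the desired height levels.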
Requirements
- Coding Experience
- Knowledge of Computer Graphics
- Knowledge of Rendering Pipeline
- Knowledge of Point Clouds
Nice-to-have
- Adaptability and Problem Solving
- Project Management
- Experience with VR development
- Strong C++ programming skills
Topic "Energy-Aware Rendering"
Advisor: Mario Henze
This project aims to reduce the energy consumption of a traditional rendering pipeline by dynamically adjusting quality parameters based on real-time GPU power draw. As a first step towards the growing interest in the concept of "sufficiency", a prototypical implementation should explore how to obtain these energy metrics and how to integrate them into an application.
Basic Tasks
- Integrate/Link with NVIDIA Management Library
- Build a simple rendering pipeline (or several) with adjustable quality parameters
- Combine energy metrics with your rendering pipeline(s). Profile the interaction behaviour of your implementation.
- Can a defined power budget be held precisely or are the energy metrics too chaotic?
- How much influence does changing each individual parameter have?
- ...?
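On NVIDIA GPUs, the current board power draw can be polled through NVML (`nvmlDeviceGetPowerUsage`, reported in milliwatts; available from Python via the pynvml bindings). A minimal sketch of the power-budget question above as a closed control loop, with the power reading stubbed by a hypothetical model so the logic is self-contained; in the real pipeline the stub would wrap the NVML query:

```python
def control_quality(read_power_w, budget_w, quality=1.0, steps=50,
                    gain=0.005, lo=0.1, hi=1.0):
    """Proportional controller: lower render quality when over budget,
    raise it when under.

    read_power_w: callable(quality) -> current power draw in watts.
    In a real application this would wrap NVML's nvmlDeviceGetPowerUsage
    (milliwatts) instead of an analytic model.
    """
    for _ in range(steps):
        error = read_power_w(quality) - budget_w   # > 0 means over budget
        quality = min(hi, max(lo, quality - gain * error))
    return quality

# Hypothetical power model: 60 W idle plus 120 W scaled by quality.
fake_gpu = lambda q: 60.0 + 120.0 * q
```

For this toy model and a 120 W budget the loop settles near quality 0.5 (where 60 + 120·q equals the budget); with a real, noisy NVML signal the gain would need tuning and likely some smoothing, which is exactly what the profiling task should reveal.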
Optional Tasks
- Integrate NVIDIA Management Library with the CGV Framework
- Extend the framework's performance overlay with energy statistics
Requirements
- Experience with systems-level programming
- Dynamically linking against a library
- How to interact with a C-API
- Knowledge of the Rendering Pipeline
- Experience with Shader Programming
Nice-to-have
- Experience with Nsight and CUDA