Team Project Computer Graphics and Visualization (WS 2023/24)
Modules
SWS 0/0/4
INF-B-510, INF-B-520, INF-B-530, INF-B-540,
INF-VERT2, INF-VMI-8, INF-VMI-8a, (for INF-VERT7 talk to Prof. Gumhold),
INF-MA-PR, INF-E-4, INF-04-KP, MINF-04-KP-FG1, IST-05-KP
SWS 0/0/8
CMS-VC-TEA (Visual Computing Team Project)
Organisation
This year we offer a maximum of three topic options, described below. Whether a topic will actually be offered depends on the number of interested students. For now, please enroll in the OPAL course – you will be able to choose your preferred topic after we have presented each one in more detail at the initial meeting on t.b.a. @ t.b.a. in room APB 2026.
Topic "Multi-user Gamified Ground-truth Annotation"
Advisor: Marzan Tasnim Oyshi
Machine Learning Guidance: Nishant Kumar, Kristijan Bartol
In the dynamic landscape of the digital era, where technology continues to evolve at an unprecedented pace, three distinct yet interwoven concepts have emerged as pillars shaping user engagement, information organization, and intelligent automation. Gamification, labeling, and machine learning, each a powerhouse in its own right, converge to redefine how we interact with digital systems, process information, and enhance user experiences.
The goal of this project is to develop an interactive labeling tool that connects gamification and machine learning to support efficient ground-truth annotation. Users should be able to participate in labeling tasks in an enjoyable, competitive manner while maintaining both annotation accuracy and speed. An intuitive user interface and data privacy can be considered additional goals of this project.
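To make the accuracy/speed trade-off concrete, a gamified leaderboard could score each annotation from both factors. The rule below is purely illustrative (the weights, the 70/30 split, and the function name are assumptions, not part of the project specification):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Illustrative scoring rule: reward agreement with consensus labels,
// grant a bonus for finishing within a target time, and never let the
// result drop below zero. Assumes seconds_taken > 0.
double annotation_score(double accuracy,       // fraction in [0, 1]
                        double seconds_taken,  // time spent on this item
                        double target_seconds) // expected time per item
{
    // Full speed bonus when at or under the target, shrinking beyond it.
    double speed_bonus = std::min(1.0, target_seconds / seconds_taken);
    // Accuracy dominates (70%); speed contributes the remaining 30%.
    return std::max(0.0, 100.0 * accuracy * (0.7 + 0.3 * speed_bonus));
}
```

Such a score lets players compete on a shared leaderboard while discouraging fast-but-careless labeling, since a zero-accuracy annotation scores zero regardless of speed.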
Requirements
- Data Annotation Skills
- Coding Experience
- Gamification Knowledge
- Machine Learning Fundamentals
Nice-to-have
- Adaptability and Problem Solving
- Project Management
- Experience designing user-friendly interfaces
- Experience with Game Engines (Unity/Unreal/Godot)
Topic "Multi-sensor Calibration, Acquisition and Fusion"
Advisor: Julien Fischer
In recent years, the idea of a metaverse has gained traction across various parts of society. Ranging from applications in entertainment to smart virtual production lines in industry to medical care for patients in a hospiverse, all of these approaches require the creation of a digital twin of the real world. To create such a digital twin with high fidelity, the real world has to be captured as accurately as possible.
The goal of this project is to develop a solution for dynamically capturing a room and its content as accurately as possible in real time. For this, you will plan and realize a capture setup using multiple Azure Kinect depth cameras. You will also write software, based on the Azure Kinect SDK, that synchronously captures and stores all sensor data available from the cameras. Once the capture setup and software are ready for deployment, you may also guide the installation in an experimental operating room at the university hospital in Dresden.
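One core sub-problem of synchronous multi-camera capture is aligning frames from different devices by timestamp. The sketch below shows that matching step in isolation, independent of the actual Azure Kinect SDK (the function name and the tolerance-based nearest-neighbor strategy are assumptions for illustration):

```cpp
#include <cassert>
#include <cstdint>
#include <optional>
#include <vector>

// For one master-device timestamp, find the closest subordinate-device
// timestamp within a given tolerance (all values in microseconds).
// Returns the index into `sub`, or std::nullopt if no frame is close
// enough to be considered the same moment in time.
std::optional<std::size_t> match_frame(std::uint64_t master_ts,
                                       const std::vector<std::uint64_t>& sub,
                                       std::uint64_t tolerance_us)
{
    std::optional<std::size_t> best;
    std::uint64_t best_diff = tolerance_us;
    for (std::size_t i = 0; i < sub.size(); ++i) {
        // Absolute difference on unsigned integers without underflow.
        std::uint64_t diff = sub[i] > master_ts ? sub[i] - master_ts
                                                : master_ts - sub[i];
        if (diff <= best_diff) {
            best_diff = diff;
            best = i;
        }
    }
    return best;
}
```

In a real setup, the Azure Kinect hardware sync cable keeps the exposures themselves aligned; a matching step like this is still useful on the software side to pair up the captures that arrive from each device.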
Requirements
- Fundamental knowledge of (modern) C++ (or an equivalent language) and software architecture
- Basic knowledge of computer vision
- Ability to work in a team
- Problem solving skills
Nice-to-have
- First experience with version control tools like Git
- First experience working with external APIs
- Basic understanding of or first experiences in using depth cameras
- Basic knowledge of computer graphics
- Experience in project management
Topic "Immersive Drone Control Center with Streaming Visualization"
Advisor: Benjamin Russig
The upcoming AI Act of the European Union provides comprehensive rules for the future widespread and ubiquitous application of AI systems in Europe. For autonomous cyber-physical agents like cars or drones, it requires real-time human oversight to prevent disasters and maintain operation in case an agent's AI encounters a situation it cannot handle or safely recover from.
The goal of this team project is to develop an immersive VR control center that interfaces with a previously developed VR drone racing game through a to-be-developed networked interface to supervise the simulated drones. To this end, the drones' internal state needs to be streamed to the control center and visualized in real time. Both immersive visualization techniques and classical methods embedded in VR should be offered to users, who should be able to control the visualization intuitively from inside the virtual control center and to influence the drones' behaviour (e.g. by forcing them to perform scripted emergency landings).
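Streaming internal state over a network requires a wire format that both ends agree on. A minimal sketch of one option, assuming a hypothetical per-drone state packet with a fixed byte layout (the struct, its fields, and the function names are illustrative, not part of the existing racing game):

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical per-drone state; field names are illustrative only.
struct DroneState {
    std::uint32_t id;
    float position[3]; // world-space coordinates
    float velocity[3]; // units per second
    float battery;     // remaining charge in [0, 1]
};

// Serialize with an explicit byte layout (4 + 12 + 12 + 4 = 32 bytes)
// so the control center can decode packets independently of compiler
// struct padding on either side of the connection.
std::vector<std::uint8_t> encode(const DroneState& s)
{
    std::vector<std::uint8_t> buf(32);
    std::uint8_t* p = buf.data();
    std::memcpy(p, &s.id, 4);       p += 4;
    std::memcpy(p, s.position, 12); p += 12;
    std::memcpy(p, s.velocity, 12); p += 12;
    std::memcpy(p, &s.battery, 4);
    return buf;
}

DroneState decode(const std::vector<std::uint8_t>& buf)
{
    DroneState s{};
    const std::uint8_t* p = buf.data();
    std::memcpy(&s.id, p, 4);       p += 4;
    std::memcpy(s.position, p, 12); p += 12;
    std::memcpy(s.velocity, p, 12); p += 12;
    std::memcpy(&s.battery, p, 4);
    return s;
}
```

In practice the project might instead adopt an existing serialization library or an engine-provided networking layer; the point of the sketch is only that state updates must round-trip losslessly at a rate suitable for real-time visualization.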
Requirements
- advanced knowledge of computer graphics
- knowledge about modern graphics APIs
- coding experience with C-derived or other systems programming languages
- willingness to learn new things as you go (see below)
Nice-to-have
- experience using Unity (or similar) game engines
- experience with VR development
- experience with real-time networking