07.08.2025; Talk
AI-powered Semantic and Multimodal Interfaces
As part of the lecture series Dresden Talks on Interaction & Visualization, we cordially invite you to the following talk by Prof. Can Liu (Laboratory of Empirical Research for Future Interfaces, City University of Hong Kong):
Abstract:
As we move toward a future of ubiquitous computing, it is important for interfaces to incorporate multimodal input such as touch, speech, hand gestures, and body movements. Recent advancements in Generative AI have enabled interfaces to combine natural language understanding with direct manipulation. In this talk, I will present my recent research on advancing multimodal interfaces with AI. This includes Large Language Model (LLM)-based speech input for text composition, Natural Language Processing (NLP)-assisted text selection, and a Large Vision Language Model (LVLM)-based VR scene authoring tool. These solutions build on an empirical understanding of input modalities and challenge existing assumptions while leveraging new technologies. I hope they will spark fruitful discussions with the audience about developing future interfaces that leverage AI.
Further information at https://imld.de/research/dresden-talks/.