Jan 12, 2026
Symposium "Inclusive Human-Computer Interaction"
https://tu-dresden.zoom-x.de/j/62167118680?pwd=rVHzNUy9vTpbDdREUKk8HNYTMTpfHa.1
Meeting ID: 621 6711 8680; Passcode: qv4zrC9j!
08:30 - 09:30 Prof. Dr. Ilhan Aslan
Bridging Barriers in Human–Technology Interaction with Proactive AI and Multimodal Design
This talk explores how human-centered AI, combined with multimodal and proactive system design, can help bridge persistent barriers in human–technology interaction. We discuss how grounding AI systems in human values, such as psychological well-being and user agency, enables more inclusive and trustworthy experiences. Integrating multiple modalities (e.g., language, vision, speech, and context) gives users alternative ways to interact and lets AI systems better understand user intent and adapt to diverse needs. We then turn to the emerging topic of proactive design approaches, which allow AI to anticipate and support user goals. Through examples and open challenges, the talk highlights pathways toward systems that meaningfully augment human capabilities and reduce social, cognitive, and accessibility barriers.
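As a toy illustration of the multimodal point, the sketch below fuses per-modality intent estimates by weighted averaging; all intent labels, scores, and weights are invented for this example, and the talk does not prescribe any particular fusion scheme:

    # Minimal late-fusion sketch (illustrative only, not the speaker's system).
    # Each modality contributes a probability distribution over hypothetical
    # user intents; fusing them yields a more robust combined estimate.

    def fuse_intents(modality_scores, weights=None):
        """Weighted average of per-modality intent distributions."""
        if weights is None:
            weights = [1.0] * len(modality_scores)
        labels = set().union(*(s.keys() for s in modality_scores))
        total = sum(weights)
        return {label: sum(w * s.get(label, 0.0)
                           for w, s in zip(weights, modality_scores)) / total
                for label in labels}

    # Hypothetical per-modality estimates:
    speech = {"open_menu": 0.7, "dictate": 0.3}
    gaze = {"open_menu": 0.4, "dictate": 0.6}
    print(fuse_intents([speech, gaze]))  # open_menu: 0.55, dictate: 0.45

Because each modality contributes independently, a user who cannot provide one signal (e.g., speech) still gets a usable intent estimate from the others, which is the inclusion argument in miniature.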
10:45 - 11:45 Dr. Arthur Fleig
A Pathway to Inclusive Simulation-Driven Human-Computer Interaction
As society increasingly demands digital skills from everyone, designing interactive systems that accommodate human diversity is more critical than ever. Traditional user testing has its place but does not scale, and for users with disabilities or chronic conditions such iterative studies can be physically demanding and ethically sensitive. In this talk, I present a simulation-driven perspective on HCI that complements traditional user-centered design by enabling systematic exploration of interaction techniques before human experimentation and physical prototyping, opening the door to a scalable inclusive design process. To this end, I introduce simulation-based user models that couple biomechanical modeling with Reinforcement Learning–based control. These models perform typical interaction tasks such as pointing, tracking, typing, and choice reaction, while reproducing characteristic regularities of human movement such as those captured by Fitts' Law. I discuss lessons learned and design considerations for the critically important reward functions in Reinforcement Learning, helping make such methods more accessible to HCI researchers without prior RL expertise. I then illustrate how simulation plays a central role not only on the human side but also on the machine side, using acoustic levitation displays as an example where simulation and optimization relieve designers of the burden of trial and error when designing accurate mid-air volumetric content. Finally, I turn to recent work on usable privacy and security, where large language models act as interactive intermediaries between complex technical information and users with varying levels of domain knowledge, making privacy policies more understandable and accessible through adaptive interaction. I conclude by outlining how this simulation-based foundation can be extended, collaboratively, toward inclusive intelligent systems that promote digital participation.
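For readers unfamiliar with it, Fitts' Law predicts movement time from target distance D and width W. The sketch below shows the standard Shannon formulation together with one common reward-shaping choice for a simulated pointing task; the coefficients and the reward design are assumptions for illustration, not Dr. Fleig's actual models:

    import math

    def fitts_mt(distance, width, a=0.1, b=0.15):
        """Shannon formulation of Fitts' Law: MT = a + b * log2(D/W + 1).
        a and b are fitted per user and device; the values here are invented."""
        return a + b * math.log2(distance / width + 1)

    def pointing_reward(dist_to_target, hit, step_cost=0.01, hit_bonus=1.0):
        """One common shaping choice (an assumption, not the talk's design):
        penalize remaining distance each step, reward reaching the target."""
        return hit_bonus if hit else -step_cost * dist_to_target

    # A simulated user's completion times can be sanity-checked against MT:
    for d, w in [(100, 20), (400, 20), (400, 5)]:
        print(f"D={d:3d} W={w:2d} -> predicted MT = {fitts_mt(d, w):.2f} s")

A simulated user whose completion times grow with log2(D/W + 1) in this way is reproducing the regularity the abstract refers to.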
13:45 - 14:45 Prof. Dr. Thomas Kosch
Inclusion by Computation: Personalized AI Interfaces for Inclusive Human-Computer Interaction
Human-Computer Interaction is increasingly limited by static accessibility guidelines and "average user" assumptions. In this talk, I argue for AI-personalized inclusion: Interactive systems that adapt to individual abilities and preferences in constantly changing environments. Building on my research vision of AI empowering users, I outline how multimodal AI can transform physiological and behavioral signals into actionable models of user needs, enabling interfaces that support autonomy and participation. I will introduce a research program with three pillars. First, AI-driven user sensing for inclusive interaction: Robust inference of user state and context, paired with interaction techniques that let users understand, correct, and override system assumptions. Second, AI-enabled inclusive interaction techniques in physical-digital environments: Interventions that selectively enhance, simplify, or re-encode information to improve access and participation across hybrid spaces. Third, responsible personalization: Methods to detect and mitigate AI bias, evaluate differential impacts across user groups, and ensure accountability of adaptive interface behavior. I conclude by outlining a research agenda for trustworthy AI-personalized inclusion that empowers users while maintaining their agency.
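A minimal sketch of the first pillar's "understand, correct, and override" idea, with all class names, signals, and thresholds invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class UserStateEstimate:
        label: str          # e.g. "high_cognitive_load"
        confidence: float   # in [0, 1]
        overridden: bool = False

    class InspectableSensing:
        """Keeps the system's assumption visible and user-correctable."""

        def __init__(self):
            self.estimate = UserStateEstimate("neutral", 1.0)

        def update_from_signals(self, features):
            # Placeholder inference; a real system would run a trained model
            # on physiological/behavioral features (gaze, input dynamics, ...).
            if self.estimate.overridden:
                return  # user corrections take precedence over inference
            load = min(1.0, features.get("pupil_dilation", 0.0))
            if load > 0.6:
                self.estimate = UserStateEstimate("high_cognitive_load", load)
            else:
                self.estimate = UserStateEstimate("neutral", 1.0 - load)

        def explain(self):
            e = self.estimate
            return f"Assumed state: {e.label} (confidence {e.confidence:.2f})"

        def override(self, label):
            self.estimate = UserStateEstimate(label, 1.0, overridden=True)

    s = InspectableSensing()
    s.update_from_signals({"pupil_dilation": 0.8})
    print(s.explain())     # the system exposes its assumption...
    s.override("neutral")  # ...and the user can correct or override it

The design choice worth noting is that the override is persistent: adaptation follows the user's correction rather than silently reverting to the model's inference.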
16:00 - 17:00 Dr. Tonja Machulla
Augmented Reality as an Assistive Technology
Contrary to popular belief, most individuals with visual impairments, including those classified as ‘legally blind’, retain and actively use functional vision. Empirical evidence shows that, for a broad set of tasks, these individuals prefer visual information over auditory or haptic alternatives. Thus, recent advances in portable visual displays offer new opportunities for developing assistive technologies that support and augment residual visual function.
My talk will address how head-worn augmented reality devices can be used to selectively enhance or substitute task-relevant information. I will present research that systematically defines requirements, prototypes solutions, and tests their viability across diverse use cases. These include reading, text entry, interacting with household appliances and public displays, object manipulation, street-level navigation, and social interactions in physical and virtual environments. I will situate this work within a conceptual framework that systematizes intra-modal mappings for information substitution and thereby supports the design of user-adaptive assistive technologies. I conclude with an outlook that places these results within a research program for multisensory assistive technology solutions.
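As a concrete, if simplified, example of one intra-modal mapping (enhancement of residual vision rather than substitution), the sketch below stretches the local contrast of a grayscale patch, the kind of transformation a head-worn AR display could render registered over the real object; the mapping and parameters are assumptions for illustration, not results from the talk:

    import numpy as np

    def enhance_contrast(gray, gain=2.0):
        """Linear contrast stretch around the patch mean (values in [0, 1])."""
        mean = gray.mean()
        return np.clip(mean + gain * (gray - mean), 0.0, 1.0)

    patch = np.array([[0.45, 0.50],    # a low-contrast region, e.g. print
                      [0.55, 0.60]])   # that residual vision cannot resolve
    print(enhance_contrast(patch))     # the same region, contrast doubled

Making the gain a user-adjustable parameter is the step that turns such a mapping into the user-adaptive assistive technology the framework aims at.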