December 11, 2025
Status talk in the doctoral proceedings of Mr. Zhan Qu
Titel: "Reasoning and Explainability in Deep Learning across Structured and Unstructured Data"
Abstract:
Advancing the ability of deep learning systems to reason, explain, and generate requires examining how these capabilities emerge and transform across heterogeneous data and representational structures. This dissertation investigates reasoning and explainability in deep learning, progressing from structured temporal systems to multimodal clinical data, geometric perception, and ultimately structured generation. It begins by introducing counterfactual methods for dynamic graphs that provide principled, instance-level interpretability in temporal and text-attributed relational settings. These foundations extend naturally into the biomedical domain, where integrated benchmarks combine structured EHR tables, clinical notes, and ontological knowledge to evaluate and improve factual grounding, temporal coherence, and safety in clinical AI. The same reasoning principles are then adapted to geometric perception, enabling interpretable 3D point-cloud segmentation and counterfactual spatiotemporal analysis in physically grounded systems. Building on these insights, the final component explores contrastive reasoning for structured generation, applying explanation-driven constraints to molecular tasks and concluding with an artistic extension to classical Chinese poetry. Collectively, these contributions trace a trajectory in which counterfactual reasoning and structured representations advance interpretability, reliability, and controlled generation across diverse real-world modalities.
Supervisor: Prof. Michael Färber
Subject referee: Prof. Volker Tresp