Nov 04, 2025
Invitation to the status talk in the PhD procedure of Mr. Tim Lenz
01307 Dresden
Seminar Room Top Floor
Abstract:
In my talk, I will present our research on advanced methods for dimensionality reduction and representation learning in clinical data analysis. Our work aimed to develop frameworks that learn compact, generalizable representations from complex biomedical data that remain robust and clinically interpretable.
We first demonstrated that self-supervised vision transformers trained on liver MRI can predict cardiovascular events directly from imaging data. This study showed that latent representations of hepatic and vascular structures encode valuable prognostic information, positioning liver MRI as a potential source of imaging biomarkers for early cardiovascular risk stratification.
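The downstream step described here, using latent imaging representations for risk prediction, can be sketched as a simple linear probe. This is not the study's actual pipeline: the embeddings below are synthetic stand-ins for ViT-derived liver-MRI representations, and all dimensions and labels are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical stand-in: 256-dim latent embeddings for 200 patients,
# as would be extracted from a self-supervised vision transformer.
emb = rng.normal(size=(200, 256))

# Synthetic cardiovascular-event labels correlated with one latent direction.
w_true = rng.normal(size=256)
y = (emb @ w_true + rng.normal(scale=0.5, size=200) > 0).astype(int)

# Linear probe: logistic regression on frozen embeddings,
# evaluated by cross-validated AUROC.
clf = LogisticRegression(max_iter=1000)
auc = cross_val_score(clf, emb, y, cv=5, scoring="roc_auc").mean()
```

A linear probe on frozen features is a common way to quantify how much prognostic signal a self-supervised representation already encodes, before any task-specific fine-tuning.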
Building on this idea, we transferred a standard computational pathology workflow to radiology and trained a foundation model for 3D CT imaging. Within this framework, called CLEAR, we combined lesion-aware contrastive learning with attention-based aggregation to capture disease-relevant features across entire CT volumes efficiently and without pixel-level annotations.
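The attention-based aggregation step can be illustrated with a minimal sketch: per-patch embeddings of one CT volume are combined into a single volume-level vector via learned attention weights. This is not the CLEAR implementation; the weight matrices below are random stand-ins for learned parameters, and all shapes are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_aggregate(patch_emb, w_hidden, w_score):
    """Pool patch embeddings of one CT volume into one vector.

    Each patch gets a scalar attention score; softmax turns the
    scores into weights that sum to 1, and the volume vector is
    the weighted sum of patch embeddings.
    """
    scores = np.tanh(patch_emb @ w_hidden) @ w_score  # (n_patches,)
    attn = softmax(scores)
    return attn @ patch_emb, attn

# Hypothetical sizes: 64 patches per volume, 128-dim patch embeddings.
patches = rng.normal(size=(64, 128))
w_hidden = rng.normal(size=(128, 32)) * 0.1  # random stand-in for learned weights
w_score = rng.normal(size=32) * 0.1

volume_vec, attn = attention_aggregate(patches, w_hidden, w_score)
```

Because the attention weights are explicit, this style of aggregation also yields a per-patch relevance map, which is one reason it pairs well with the clinical interpretability goal stated above.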
Finally, in histopathology, we developed COBRA, a contrastive self-supervised learning approach that integrates embeddings from multiple foundation models to generate robust, task-agnostic slide-level representations. Despite being trained on a small number of whole-slide images, COBRA achieved state-of-the-art performance across multiple clinical prediction tasks.
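The fusion idea behind this paragraph, combining tile embeddings from several foundation models into one slide-level vector, can be sketched as follows. This is not the COBRA method itself: the projections are random stand-ins for learned ones, the fusion is a plain mean rather than a contrastively trained aggregator, and all model dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical tile embeddings from three foundation models for the
# same 100 tiles of one whole-slide image (feature dims differ per model).
dims = (384, 768, 512)
tile_embs = [rng.normal(size=(100, d)) for d in dims]

# Project each model's embeddings into a shared 256-dim space
# (random projections stand in for learned ones).
projs = [rng.normal(size=(d, 256)) / np.sqrt(d) for d in dims]
shared = [e @ p for e, p in zip(tile_embs, projs)]

# Fuse the models per tile, then pool tiles into one slide-level vector.
fused_tiles = np.mean(shared, axis=0)   # (100, 256)
slide_vec = fused_tiles.mean(axis=0)    # (256,)

# Unit-normalize, as is typical for contrastively trained embeddings.
slide_vec = slide_vec / np.linalg.norm(slide_vec)
```

The resulting slide-level vector is task-agnostic: the same representation can feed many downstream clinical prediction heads without re-encoding the slide.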
Together, these studies illustrate a unified approach to clinically meaningful dimensionality reduction: training and applying foundation models to extract low-dimensional, generalizable features from heterogeneous medical data. Our results highlight how foundation model–driven methods can enhance data efficiency, interpretability, and clinical relevance in medical AI.