CMU-CS-25-102
Computer Science Department
School of Computer Science, Carnegie Mellon University
Uncertainty-Aware AI for Clinical Decision Support
Rohini Banerjee
M.S. Thesis
May 2025
Building interpretable-by-design AI models that intuitively communicate model uncertainty is vital to engendering physician and patient trust. We develop uncertainty-guided deep learning systems for two pertinent healthcare settings. Efficient intravascular access in trauma and critical care is a high-stakes intervention with minimal tolerance for error. Autonomous needle insertion systems can be valuable in austere environments where skilled medical personnel are scarce; however, inaccurate vessel segmentation can result in vessel damage and hemorrhage. This risk can be mitigated through predictive uncertainty estimation that assesses model reliability. We therefore introduce MSU-Net, a novel multistage approach to semantic vessel segmentation in ultrasound images that combines the predictive power of Monte Carlo networks and deep ensembles. We demonstrate a significant improvement of 27.7% over the state of the art, while enhancing model reliability through 20.9% stronger discrimination in epistemic uncertainty between correct and incorrect predictions.

Next, we investigate the robustness of predictive modeling in quantifying the severity of rash manifestations associated with Cutaneous Dermatomyositis (CDM), a rare and currently incurable autoimmune disorder. Given the importance of telemedicine for remote disease monitoring and timely intervention, we address the challenges of data scarcity and patient diversity by integrating a novel BERT-style self-supervised learning framework into CNN-based models. Pretrained via masked image modeling on demographically diverse images, our model achieves more than a 40% improvement in fine-tuning performance on high-resolution in-clinic hand images from a limited cohort of 23 CDM patients. We achieve 83% accuracy on a held-out patient set, surpassing the clinical benchmark of 70–75% accuracy. To our knowledge, this is the first work to integrate uncertainty estimation into such architectures, enabling robustness under distributional shift to skin tones unseen during fine-tuning.

Our contributions lay the groundwork for developing accurate, statistically rigorous, and clinically actionable deep learning models that are aware of their limitations and communicate this awareness to their users. Future work aims to improve model interpretability for equitable clinical decision support.

90 pages
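To make the uncertainty-estimation idea concrete, the sketch below illustrates how per-pixel epistemic uncertainty can be obtained for vessel segmentation by combining Monte Carlo dropout with a small deep ensemble. This is a minimal, hypothetical example under assumed choices (the toy network, sample counts, and the mutual-information/BALD uncertainty measure), not the thesis's MSU-Net architecture.

```python
# Illustrative sketch (not MSU-Net itself): per-pixel epistemic uncertainty
# for binary vessel segmentation by combining Monte Carlo dropout with a
# small deep ensemble of independently initialized networks.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Placeholder segmentation network; dropout stays active at test time."""
    def __init__(self, p: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p),
            nn.Conv2d(16, 1, 1),  # per-pixel vessel logit
        )

    def forward(self, x):
        return self.net(x)

def binary_entropy(p):
    p = p.clamp(1e-8, 1 - 1e-8)
    return -(p * p.log() + (1 - p) * (1 - p).log())

def mc_ensemble_predict(models, x, n_mc: int = 10):
    """Average sigmoid outputs over ensemble members and MC dropout samples;
    report mutual information (BALD) as an epistemic-uncertainty map."""
    samples = []
    with torch.no_grad():
        for model in models:
            model.train()  # keep dropout stochastic (Monte Carlo dropout)
            for _ in range(n_mc):
                samples.append(torch.sigmoid(model(x)))
    probs = torch.stack(samples)            # [samples, batch, 1, H, W]
    mean_prob = probs.mean(dim=0)           # predictive vessel probability
    epistemic = binary_entropy(mean_prob) - binary_entropy(probs).mean(dim=0)
    return mean_prob, epistemic

# Usage: three ensemble members, ten stochastic forward passes each.
ensemble = [TinySegNet() for _ in range(3)]
frame = torch.randn(1, 1, 128, 128)         # dummy ultrasound frame
mean_prob, uncertainty = mc_ensemble_predict(ensemble, frame)
```

In such a scheme, the epistemic-uncertainty map can be thresholded or inspected to flag predictions the model is unlikely to get right, which is the kind of correct-versus-incorrect discrimination the abstract reports.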
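Likewise, the BERT-style self-supervised pretraining described above can be sketched as a masked-image-modeling step for a generic convolutional autoencoder. The architecture, patch size, mask ratio, and pixel-reconstruction loss below are illustrative assumptions and do not reproduce the thesis's actual framework.

```python
# Illustrative sketch: one masked-image-modeling pretraining step for a CNN,
# with a BERT-style objective of reconstructing randomly masked patches.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvAutoencoder(nn.Module):
    """Toy CNN encoder-decoder used as the masked-image-modeling backbone."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def random_patch_mask(images, patch: int = 16, ratio: float = 0.6):
    """Zero out a random subset of non-overlapping patches (assumed mask ratio)."""
    b, _, h, w = images.shape
    mask = (torch.rand(b, 1, h // patch, w // patch) < ratio).float()
    mask = F.interpolate(mask, scale_factor=patch, mode="nearest")
    return images * (1 - mask), mask

model = ConvAutoencoder()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

images = torch.randn(8, 3, 224, 224)          # stand-in for a skin-image batch
masked, mask = random_patch_mask(images)
recon = model(masked)
# Reconstruction loss computed only over masked regions, BERT-style.
loss = (F.mse_loss(recon, images, reduction="none") * mask).sum() / mask.sum().clamp_min(1.0)
loss.backward()
optimizer.step()
```

After pretraining in this manner on large, demographically diverse image collections, the encoder would be fine-tuned on the small labeled CDM cohort, which is the data-scarcity strategy the abstract describes.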
Thesis Committee:
Srinivasan Seshan, Head, Computer Science Department