Exploring Uncertainty Quantification in Deep Learning for Medical Imaging

, Postdoctoral Research Fellow, Mayo Clinic (Rochester, Minnesota)
, Postdoctoral Research Fellow, Mayo Clinic (Rochester, Minnesota)
, Medical Doctor and Research Fellow, Mayo Clinic (Rochester, Minnesota)
, Research Associate, Mayo Clinic (Rochester, Minnesota)

In recent years, deep learning has shown impressive performance in medical imaging analysis. However, for a model to be useful in the real world, it must be reliable as well as valid and interpretable. Uncertainty quantification (UQ) methods determine a calibrated level of confidence in a model's predictions. UQ can also reveal biases caused by overconfidence or underconfidence in those predictions. By enabling UQ in medical deep learning models, users can be alerted when a model does not have enough information to make a decision. A medical expert can then reevaluate the uncertain cases, which ultimately builds more trust in the model.

This lab teaches:

  • Different types and sources of uncertainty in medical imaging
  • Model calibration
  • Different uncertainty quantification techniques

You'll also implement the following using PyTorch and MONAI:

  • Model ensembling
  • Monte Carlo Dropout
  • Evidential deep learning
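To give a flavor of one of the techniques listed above, here is a minimal sketch of Monte Carlo Dropout in plain PyTorch (the same idea applies to MONAI models). The toy network, function names, and hyperparameters are illustrative assumptions, not the lab's actual code: the key point is that dropout is kept active at inference so that repeated stochastic forward passes yield a predictive mean and a spread that serves as an uncertainty estimate.

```python
import torch
import torch.nn as nn


class SmallNet(nn.Module):
    """Toy classifier (illustrative only). The Dropout layer is what
    makes repeated forward passes stochastic for MC Dropout."""

    def __init__(self, in_features=16, n_classes=3, p=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 32),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.net(x)


def mc_dropout_predict(model, x, n_samples=20):
    """Run n_samples stochastic forward passes with dropout enabled.

    Returns the mean softmax probabilities (the prediction) and their
    standard deviation across samples (a per-class uncertainty estimate).
    """
    model.train()  # keeps Dropout active; safe here since this toy net has no BatchNorm
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)


if __name__ == "__main__":
    model = SmallNet()
    x = torch.randn(4, 16)  # batch of 4 dummy inputs
    mean, std = mc_dropout_predict(model, x)
    print(mean.shape, std.shape)  # torch.Size([4, 3]) torch.Size([4, 3])
```

A high standard deviation for a given case flags it as uncertain, which is exactly the signal that would route the case to a medical expert for review. Model ensembling follows the same recipe, except the spread is taken across independently trained models instead of across dropout masks.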

 

Prerequisite(s):  

 

  • Familiarity with PyTorch and computer vision concepts 

 


Event: GTC Digital Spring
Date: March 2023
Topic: Deep Learning - Inference
Industry: Healthcare & Life Sciences
Level: Intermediate Technical
Language: English
Topic: Deep Learning
Location: