In recent years, deep learning has shown impressive performance in medical imaging analysis. However, for a model to be useful in the real world, it needs to be reliable in addition to being accurate and interpretable. Uncertainty quantification (UQ) methods estimate a calibrated level of confidence in a model's predictions. UQ can also reveal biases caused by overconfidence or underconfidence in those predictions. By enabling UQ in medical deep learning models, users can be alerted when a model does not have enough information to make a decision. A medical expert can then reevaluate the uncertain cases, which ultimately builds more trust in the model.

This lab teaches:

- Different types and sources of uncertainty in medical imaging
- Model calibration
- Different uncertainty quantification techniques

You'll also implement the following using PyTorch and MONAI (minimal sketches of each technique follow below):

- Model ensembling
- Monte Carlo Dropout
- Evidential deep learning
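As a taste of the first technique, here is a minimal sketch of model ensembling in plain PyTorch. The architecture, input size, and ensemble size are illustrative assumptions (the lab itself uses MONAI networks and real training loops); the point is that averaging the softmax outputs of independently trained models gives a prediction, and the variance across members gives a simple uncertainty signal.

```python
import torch
import torch.nn as nn

def make_model():
    # Hypothetical small classifier; in the lab this would be a MONAI network.
    return nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 2))

# Train several models from different random initializations (training omitted),
# then combine their predictions at inference time.
ensemble = [make_model() for _ in range(5)]

def ensemble_predict(models, x):
    """Deep ensemble: the mean softmax across members is the prediction;
    the variance across members is a simple measure of uncertainty."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(m(x), dim=-1) for m in models])
    return probs.mean(dim=0), probs.var(dim=0)

x = torch.randn(8, 64)  # a batch of 8 hypothetical feature vectors
mean_probs, variance = ensemble_predict(ensemble, x)
```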
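Monte Carlo Dropout reuses a single trained model: dropout is kept active at inference time, and the spread across several stochastic forward passes acts as a proxy for predictive uncertainty. The sketch below assumes a hypothetical architecture and sample count; note that only the dropout layers are switched back to train mode, so layers such as BatchNorm keep their frozen evaluation statistics.

```python
import torch
import torch.nn as nn

# Hypothetical classifier; any architecture containing nn.Dropout works the same way.
model = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(128, 2),
)

def enable_dropout(model):
    # Keep only the Dropout layers stochastic at inference time.
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

def mc_dropout_predict(model, x, n_samples=20):
    """Average the softmax outputs of several stochastic forward passes;
    the standard deviation across passes signals predictive uncertainty."""
    model.eval()
    enable_dropout(model)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(8, 64)  # a batch of 8 hypothetical feature vectors
mean_probs, uncertainty = mc_dropout_predict(model, x)
```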
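Evidential deep learning changes what the network predicts: instead of class probabilities, it outputs non-negative "evidence" that parameterizes a Dirichlet distribution over probabilities. The sketch below follows one common Dirichlet formulation (after Sensoy et al., 2018) and omits the specialized evidential training loss; the network shape and names are illustrative assumptions, not the lab's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes = 2
# Hypothetical network whose head produces one raw output per class;
# softplus turns these into non-negative evidence values.
model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, num_classes))

def evidential_predict(model, x):
    """Dirichlet-based evidential prediction: evidence e_k >= 0 gives
    Dirichlet parameters alpha_k = e_k + 1, expected class probabilities
    alpha / S, and total uncertainty K / S, where S = sum(alpha)."""
    with torch.no_grad():
        evidence = F.softplus(model(x))         # non-negative evidence per class
    alpha = evidence + 1.0                      # Dirichlet concentration parameters
    strength = alpha.sum(dim=-1, keepdim=True)  # S
    probs = alpha / strength                    # expected class probabilities
    uncertainty = num_classes / strength        # approaches 1 when evidence is scarce
    return probs, uncertainty

x = torch.randn(8, 64)  # a batch of 8 hypothetical feature vectors
probs, uncertainty = evidential_predict(model, x)
```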
Prerequisite(s):