Toward Trustworthy Automated Clinical Q&A: Grounding LLMs in Evidence with Retrieval Augmented Generation and Uncertainty Quantification

, Medical Doctor and Research Fellow, Mayo Clinic
, Research Associate, Mayo Clinic
Large language models (LLMs) have recently captured significant attention in healthcare. Their strong performance across a range of applications highlights their potential for vital tasks like medical question-answering. However, LLMs can produce "hallucination" errors, generating overly confident answers without sufficient factual basis. We'll provide a detailed walkthrough on how to set up, deploy, and enhance the trustworthiness of a retrieval-augmented generation (RAG) pipeline for building clinical question-answering systems. This approach grounds the LLM in a specific evidence set when it answers clinical questions, rather than relying on the model's parametric knowledge alone. We'll guide you through the three primary stages of developing this pipeline.
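The core idea described above can be sketched in a few lines. The snippet below is a minimal, illustrative RAG loop: a toy bag-of-words retriever scores a small evidence set against the question, and a simple uncertainty gate abstains when the best retrieval score falls below a threshold. The evidence snippets, the `retrieve`/`answer` function names, and the similarity threshold are all assumptions for illustration; a production system would use a vector database, a trained retriever, an actual LLM call, and a calibrated uncertainty estimate.

```python
import math
from collections import Counter

# Hypothetical evidence set; a real pipeline would index clinical guidelines.
EVIDENCE = [
    "Metformin is a first-line pharmacologic treatment for type 2 diabetes.",
    "Aspirin irreversibly inhibits cyclooxygenase and reduces platelet aggregation.",
    "Warfarin requires INR monitoring due to its narrow therapeutic window.",
]

def _vec(text):
    # Bag-of-words term counts (stand-in for a learned embedding).
    return Counter(text.lower().split())

def _cosine(a, b):
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(question, k=1):
    # Rank evidence snippets by similarity to the question.
    q = _vec(question)
    scored = sorted(((_cosine(q, _vec(d)), d) for d in EVIDENCE), reverse=True)
    return scored[:k]

def answer(question, threshold=0.2):
    score, doc = retrieve(question)[0]
    # Uncertainty gate: abstain rather than hallucinate when evidence is weak.
    if score < threshold:
        return "I don't have sufficient evidence to answer."
    # A real pipeline would feed `doc` into an LLM prompt as grounding context;
    # here we simply surface the retrieved evidence.
    return f"Based on the evidence: {doc}"
```

The abstention branch is the key design choice: instead of forcing an answer, the system reports low confidence when the retrieved evidence does not support the question, which is the behavior the uncertainty-quantification stage of the pipeline aims to formalize.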
Prerequisite(s):

Familiarity with PyTorch and natural language processing concepts

Event: GTC 24
Date: March 2024
Industry: Healthcare & Life Sciences
Level: Technical – Intermediate
Topic: Large Language Models (LLMs)
NVIDIA Technology: RTX GPU
Language: English
Location: