      Intro to Large Language Models: LLM Tutorial and Disease Diagnosis LLM Lab

      , Principal Data Scientist, Mark III Systems
First we'll discuss what an LLM is, along with the strengths and weaknesses of these models, surveying a handful of models and approaches. We'll cover the difference between pre-training and fine-tuning and walk through input processing, showing how an input string is tokenized into input IDs. We'll present QLoRA as a means of greatly reducing the computational requirements of LLM inference and fine-tuning. We'll wrap up the concepts portion of the session by discussing Hugging Face and their transformers library.
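The tokenization step mentioned above can be sketched in a few lines. This toy example uses a hypothetical whitespace vocabulary purely to illustrate the string-to-IDs mapping; the lab itself uses the real Falcon-7B-Instruct tokenizer from the Hugging Face transformers library, which applies subword (BPE-style) splitting rather than whole words.

```python
# Toy tokenizer: map an input string to integer input IDs.
# The vocabulary below is illustrative, not the real Falcon vocabulary.
vocab = {"<unk>": 0, "the": 1, "patient": 2, "has": 3, "a": 4, "fever": 5}

def tokenize(text: str) -> list[int]:
    """Split on whitespace and look each token up in the vocabulary,
    falling back to the <unk> ID for out-of-vocabulary tokens."""
    return [vocab.get(tok, vocab["<unk>"]) for tok in text.lower().split()]

input_ids = tokenize("The patient has a fever")
print(input_ids)  # [1, 2, 3, 4, 5]
```

A real tokenizer differs mainly in that unseen words are broken into known subword pieces instead of collapsing to `<unk>`, but the output is the same kind of integer ID list that the model consumes.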

      The lab portion starts with performing inference using the Hugging Face transformers library and the Falcon-7B-Instruct model. We'll then move to fine-tuning Falcon-7B-Instruct using the MedText dataset, where the goal is to take a prompt that describes symptoms of a medical issue and generate a diagnosis of the problem, as well as steps to take to treat it.
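The fine-tuning step leans on the low-rank-adapter idea behind QLoRA: the base weights stay frozen, and only two small matrices B (d×r) and A (r×d) are trained, with B·A added to the frozen weight at forward time. The sketch below is a toy NumPy illustration of that parameter saving, with illustrative sizes; it is not the real 4-bit NF4 quantization or the PEFT implementation used in the lab.

```python
import numpy as np

d, r = 1024, 8                      # hidden size and adapter rank (illustrative)
rng = np.random.default_rng(0)

W = rng.standard_normal((d, d))     # frozen base weight: never updated
A = rng.standard_normal((r, d)) * 0.01
B = np.zeros((d, r))                # B starts at zero, so the initial delta is zero

def adapted_forward(x: np.ndarray) -> np.ndarray:
    """Forward pass with the low-rank update applied: (W + B A) x."""
    return W @ x + B @ (A @ x)

full_params = d * d                 # parameters to train in full fine-tuning
lora_params = d * r + r * d         # parameters the adapter actually trains
print(f"trainable params: {lora_params} vs {full_params}")  # 16384 vs 1048576
```

Because B is initialized to zero, the adapted model starts out exactly equal to the base model, and training moves only the ~1.6% of parameters in A and B, which is what makes fine-tuning a 7B model tractable on modest GPU memory.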
      Prerequisite(s):

      Python and/or ML experience is helpful but not required.
Event: GTC 25
Date: March 2025
NVIDIA Technologies: DGX, BioNeMo, NeMo, NVIDIA NIM, NVIDIA AI Enterprise
Level: General
Topics: Generative AI - Biology - Generative AI
Industry: Healthcare & Life Sciences
Language: English
Location: