      Apply Multi-Node Multi-GPU Computing to HPO and Inference

      , Solutions Architect, NVIDIA
      , Sr. Deep Learning Data Scientist, NVIDIA
      , Solutions Architect, NVIDIA
      With ever-increasing amounts of data and limited compute resources, training competitive ML models end-to-end can take hours, days, or even weeks. Parallel computing offers a solution. Join us to learn how to reduce your end-to-end ML pipeline runtime and increase model accuracy by parallelizing and distributing model training, hyperparameter optimization, and inference across GPUs.
      Prerequisite(s):

      Foundational understanding of a data science workflow on tabular data.
      Experience with programming in Python.
      Experience with the pandas and scikit-learn APIs.
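
      The abstract above describes distributing hyperparameter trials across GPUs. Below is a minimal sketch of that pattern, not the session's actual notebook: it assumes a RAPIDS environment with cudf, cuml, and dask-cuda installed, and the synthetic dataset, random-forest estimator, and small parameter grid are illustrative placeholders.

from itertools import product

import numpy as np
import cudf
from cuml.ensemble import RandomForestClassifier
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split


def train_and_score(params, X_train, y_train, X_valid, y_valid):
    # Fit one candidate model on the GPU owned by this Dask worker and
    # return the hyperparameters together with validation accuracy.
    model = RandomForestClassifier(**params)
    model.fit(cudf.DataFrame(X_train), cudf.Series(y_train))
    preds = model.predict(cudf.DataFrame(X_valid))
    accuracy = float((preds.to_numpy() == y_valid).mean())
    return params, accuracy


if __name__ == "__main__":
    # One Dask worker per visible GPU; point Client at a remote scheduler
    # instead to scale the same loop across multiple nodes.
    cluster = LocalCUDACluster()
    client = Client(cluster)

    X, y = make_classification(n_samples=10_000, n_features=20, random_state=0)
    X = X.astype(np.float32)
    y = y.astype(np.int32)
    X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=0)

    grid = [{"n_estimators": n, "max_depth": d}
            for n, d in product([100, 300], [8, 16])]

    # Each trial is an independent task, so trials run concurrently across GPUs.
    futures = [client.submit(train_and_score, params,
                             X_train, y_train, X_valid, y_valid)
               for params in grid]
    results = client.gather(futures)

    best_params, best_acc = max(results, key=lambda r: r[1])
    print(f"best params: {best_params}, validation accuracy: {best_acc:.3f}")

    client.close()
    cluster.close()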
      Event: GTC 25
      Date: March 2025
      Industry: All Industries
      Topic: Data Science - Data Analytics / Processing
      Level: General
      NVIDIA Technology: RAPIDS
      Language: English
      Location: