    Training Deep Learning Models at Scale: How NCCL Enables Best Performance on AI Data Center Networks

    Distinguished Engineer, NVIDIA
    Discover how NCCL uses every capability of all DGX and HGX platforms to accelerate inter-GPU communication and allow deep learning training to scale further. See how Grace Hopper platforms can leverage multi-node NVLink to compute in parallel at unprecedented speeds. Compare different platforms to understand how technology choices can impact your training performance and time-to-completion. Understand how new mechanisms like NVLink SHARP and IB SHARP can be used to accelerate every dimension of the training of a large language model. Learn about new collective algorithms and their performance on many thousands of GPUs, and get a glimpse of how future improvements could push the boundaries even further.
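    The collective at the heart of the data-parallel training the abstract describes is NCCL's all-reduce, which sums gradients across all GPUs. The sketch below is a minimal single-process illustration of that API, not code from the talk; the 8-GPU array bound and the one-million-float buffer size are arbitrary assumptions, and error checking on the CUDA/NCCL calls is omitted for brevity.

```c
/* Minimal single-process NCCL all-reduce sketch.
 * Assumptions: at most 8 visible GPUs, 1M-float buffers (both arbitrary). */
#include <cuda_runtime.h>
#include <nccl.h>

int main(void) {
    int nDev = 0;
    cudaGetDeviceCount(&nDev);
    if (nDev > 8) nDev = 8;             /* stay within the fixed arrays below */

    int devs[8];
    for (int i = 0; i < nDev; ++i) devs[i] = i;

    ncclComm_t comms[8];
    ncclCommInitAll(comms, nDev, devs); /* one communicator per GPU */

    size_t count = 1 << 20;             /* floats per GPU (assumption) */
    float *send[8], *recv[8];
    cudaStream_t streams[8];
    for (int i = 0; i < nDev; ++i) {
        cudaSetDevice(i);
        cudaMalloc((void **)&send[i], count * sizeof(float));
        cudaMalloc((void **)&recv[i], count * sizeof(float));
        cudaStreamCreate(&streams[i]);
    }

    /* Group the per-GPU calls so NCCL launches them as one collective. */
    ncclGroupStart();
    for (int i = 0; i < nDev; ++i)
        ncclAllReduce(send[i], recv[i], count, ncclFloat, ncclSum,
                      comms[i], streams[i]);
    ncclGroupEnd();

    for (int i = 0; i < nDev; ++i) {    /* wait for completion, then clean up */
        cudaSetDevice(i);
        cudaStreamSynchronize(streams[i]);
        cudaFree(send[i]);
        cudaFree(recv[i]);
        cudaStreamDestroy(streams[i]);
        ncclCommDestroy(comms[i]);
    }
    return 0;
}
```

    On a multi-node job the collective call is identical; only communicator setup changes, with each rank calling ncclCommInitRank after a ncclUniqueId is broadcast out of band (for example via MPI).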
    Event: GTC 24
    Date: March 2024
    Topic: Accelerated Computing Libraries
    Industry: HPC / Scientific Computing
    Level: Intermediate Technical
    Language: English
    Location: