    Optimizing Inference Performance and Incorporating New LLM Features in Desktops and Workstations

    , Deep Learning Solution Architect, NVIDIA
    , Product Manager, NVIDIA
    TensorRT has become the preferred choice for independent software vendor applications used in desktop and workstation environments, including those developed by Topaz, Blackmagic, and others. As these applications adapt to embrace the emerging generative AI trend, they seek to incorporate more features driven by large language models (LLMs) and stable diffusion techniques. We'll describe the developer journey of applying TensorRT optimizations to achieve speed-of-light inference performance, and share best practices. We'll also tell stories of how NVIDIA and partners worked together to create new features and improvements supporting the TensorRT release.
    Event: GTC 24
    Date: March 2024
    Topic: AI Inference
    Industry: All Industries
    Level: Intermediate Technical
    NVIDIA Technology: TensorRT
    Language: English
    Location: