    Accelerate PyTorch Inference with TensorRT

    NVIDIA
    NVIDIA
    Highly Rated
    Learn how to accelerate PyTorch inference without leaving the framework with Torch-TensorRT. Torch-TensorRT makes the performance of NVIDIA’s TensorRT GPU optimizations available in PyTorch for any model. You'll learn about the key capabilities of Torch-TensorRT, how to use them, and the performance benefits you can expect. We'll walk you through how to easily transition from a trained model to an inference deployment fine-tuned for your specific hardware, all with just a few lines of familiar code.
    If you want more technical details, the second half of the talk takes a deep dive into how Torch-TensorRT operates, the mechanics of its key features, and a few in-depth examples.
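    A minimal sketch of that "few lines of familiar code" workflow is shown below. It assumes the torch_tensorrt package is installed and uses a torchvision ResNet-50 purely as a stand-in model; the input shape and precision settings are illustrative, not taken from the talk.

```python
import torch
import torchvision
import torch_tensorrt  # Torch-TensorRT: TensorRT optimizations from within PyTorch

# Start from any trained PyTorch model; ResNet-50 is just a stand-in here.
model = torchvision.models.resnet50(pretrained=True).eval().cuda()

# Compile with Torch-TensorRT: describe the expected input shape and which
# precisions TensorRT may use when selecting kernels.
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.float, torch.half},  # allow FP32 and FP16 kernels
)

# The compiled module is used like any other PyTorch module.
x = torch.randn(1, 3, 224, 224, device="cuda")
with torch.no_grad():
    output = trt_model(x)
```

    In the TorchScript-based workflow of that era, the object returned by torch_tensorrt.compile is a TorchScript module, so it can be saved with torch.jit.save and reloaded later for deployment on the target GPU.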
    Event: GTC Digital November
    Date: November 2021
    Level: 101 / Getting Started
    Industry: All Industries
    Topic: Deep Learning - Inference
    Language: English
    Location: