      Scalable, Accelerated Hardware-agnostic ML Inference with NVIDIA Triton and Arm NN

      Speakers: Arcturus; Arcturus; Arm; Artisight
      NVIDIA Triton Inference Server simplifies the deployment of AI models at scale in production. With the extended Arm NN custom backend, we can orchestrate the execution of multiple ML models and enable optimized CPU/GPU/NPU inference configurations on embedded systems, including NVIDIA's Jetson family of devices and Raspberry Pi boards. We'll introduce the architecture of the Triton Inference Server Arm NN backend and present accelerated embedded use cases it enables.
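      As context for how Triton selects a backend and execution device per model, the sketch below shows a minimal model configuration in Triton's standard config.pbtxt schema. The backend name "armnn_tflite" refers to the community Arm NN TFLite backend for Triton; the model name, tensor names, and shapes are illustrative assumptions, not taken from the session.

      ```
      # Hypothetical model config for a Triton model repository entry
      # (model_repository/my_classifier/config.pbtxt).
      name: "my_classifier"            # illustrative model name
      backend: "armnn_tflite"          # Arm NN TFLite custom backend
      max_batch_size: 8
      input [
        {
          name: "input_tensor"         # assumed tensor name
          data_type: TYPE_FP32
          dims: [ 224, 224, 3 ]
        }
      ]
      output [
        {
          name: "output_tensor"        # assumed tensor name
          data_type: TYPE_FP32
          dims: [ 1000 ]
        }
      ]
      # Run inference instances on the CPU; on devices such as Jetson,
      # KIND_GPU would target the GPU instead.
      instance_group [
        {
          kind: KIND_CPU
          count: 1
        }
      ]
      ```

      The same repository can hold multiple models with different instance_group settings, which is how a single Triton server orchestrates mixed CPU/GPU/NPU execution on an embedded device.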
      Event: GTC Digital November
      Date: November 2021
      Industry: All Industries
      Level: Technical - Beginner
      Topic: Deep Learning - Inference
      Language: English
      Topic: Accelerated Computing & Dev Tools - Performance Optimization
      Location: