Simplify model deployment and maximize AI inference performance with NVIDIA Triton Inference Server on Jetson

Product Manager, Deep Learning Software, NVIDIA
Data Scientist for Computer Vision, Video Analytics and Deep Learning, NVIDIA
NVIDIA Triton Inference Server is now available on Jetson! NVIDIA Triton Inference Server is open-source software that simplifies inference serving. Easily deploy and manage models from multiple frameworks on Jetson devices and achieve high inference performance. Learn why Triton is a strong fit for deploying AI models on Jetson devices.

In this webinar, you will learn:
- Key features of Triton for maximizing AI inference performance
- How to deploy Triton and integrate it with your applications on Jetson
- A demo of key features, including running multiple models concurrently, dynamic batching, and Triton's performance analyzer tools
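As a taste of the dynamic batching feature the demo covers, Triton enables it through a `dynamic_batching` block in a model's `config.pbtxt`. The sketch below is illustrative, not from the webinar: the model name, platform, and tensor shapes are assumptions for a generic ONNX image classifier.

```
# config.pbtxt -- hypothetical model; names, platform, and dims are illustrative.
name: "resnet50_onnx"
platform: "onnxruntime_onnx"
max_batch_size: 8
input [
  {
    name: "input"
    data_type: TYPE_FP32
    dims: [ 3, 224, 224 ]
  }
]
output [
  {
    name: "output"
    data_type: TYPE_FP32
    dims: [ 1000 ]
  }
]
# Let Triton combine individual inference requests into server-side batches,
# waiting at most 100 microseconds to fill a preferred batch size.
dynamic_batching {
  preferred_batch_size: [ 4, 8 ]
  max_queue_delay_microseconds: 100
}
```

With a configuration like this in place, Triton's bundled `perf_analyzer` tool (also shown in the demo) can be pointed at the model to measure throughput and latency under different concurrency levels.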
Event: Other
Date: September 2021
Industry: All Industries
Topic: Autonomous Machines
Level: Intermediate Technical
Language: English
Location: