Simplify model deployment and maximize AI inference performance with NVIDIA Triton Inference Server on Jetson
, Product Manager, Deep Learning Software, NVIDIA
, Data Scientist for Computer Vision, Video Analytics and Deep Learning, NVIDIA
NVIDIA Triton Inference Server is now available on Jetson! NVIDIA Triton Inference Server is open-source software that simplifies AI inference serving. Easily deploy and manage models from multiple frameworks on Jetson devices and achieve high inference performance. Learn why Triton is the best solution for deploying AI models on Jetson devices.

In this webinar, you will learn:
- Key features of Triton that maximize AI inference performance
- How to deploy Triton on Jetson and integrate it with your applications
- Demos of key features, including concurrent model execution, dynamic batching, and Triton's performance analyzer tools
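As a taste of the features demoed in the webinar, concurrent model execution and dynamic batching are both enabled per model through a `config.pbtxt` file in the Triton model repository. The sketch below is illustrative, not from the webinar; the model name, batch sizes, and instance count are assumptions you would tune for your own model and Jetson device.

```
# Hypothetical config.pbtxt for one model in a Triton model repository.
# Model name, platform, and sizes are illustrative assumptions.
name: "resnet50_trt"
platform: "tensorrt_plan"
max_batch_size: 8

# Concurrent model execution: run two instances of this model on GPU 0,
# so multiple inference requests can be served in parallel.
instance_group [
  {
    count: 2
    kind: KIND_GPU
    gpus: [ 0 ]
  }
]

# Dynamic batching: Triton groups individual requests into batches on the
# fly, waiting up to 100 microseconds for a preferred batch size to form.
dynamic_batching {
  preferred_batch_size: [ 4, 8 ]
  max_queue_delay_microseconds: 100
}
```

With the server running, Triton's performance analyzer can then measure the effect of these settings, for example `perf_analyzer -m resnet50_trt --concurrency-range 1:4`.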