Best Practices in Networking for AI: Perspectives from Cloud Service Providers
, SVP Networking, NVIDIA
, Principal Software Engineer, Microsoft Azure
, Sr Staff Network Development Engineer, X
, Senior Director, Applied Systems Engineering, NVIDIA
, Director of DC Networking, Meta
, CTO, CoreWeave
The network defines the data center, and a high-performance network is key to successfully training and deploying AI workloads such as foundation models, LLMs, and generative AI at scale. The network must provide high effective bandwidth, low tail latency, and performance isolation, while also supporting the features needed for cloud-native operations and management. In this panel discussion, leading cloud service providers will share the considerations and challenges shaping the build-out of AI data centers, and how they have addressed these challenges with NVIDIA Quantum InfiniBand and NVIDIA Spectrum-X Ethernet.