Apply Multi-Node Multi-GPU Computing to HPO and Inference
, Solutions Architect, NVIDIA
, Sr. Deep Learning Data Scientist, NVIDIA
, Solutions Architect, NVIDIA
With ever-increasing amounts of data and limited compute resources, training competitive ML models end-to-end can take hours, days, or even weeks. Parallel computing offers a solution. Join us to learn how to shorten your end-to-end ML pipeline runtime and increase model accuracy by parallelizing and distributing model training, hyperparameter optimization, and inference across GPUs. Prerequisite(s):
Foundational understanding of a data science workflow on tabular data. Experience with programming in Python. Experience with the Pandas and scikit-learn API.
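As a taste of the kind of technique the session covers, here is a minimal sketch of parallel hyperparameter optimization using scikit-learn (named in the prerequisites). The session itself targets multi-node multi-GPU setups; this sketch uses `n_jobs=-1` to fan candidate configurations out across local CPU cores as a single-machine analogue, and the dataset and parameter grid are illustrative placeholders.

```python
# Sketch: parallel hyperparameter search with scikit-learn.
# n_jobs=-1 evaluates parameter combinations across all local CPU cores;
# the same pattern scales to GPUs/clusters with distributed tooling.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic tabular data standing in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

param_grid = {"n_estimators": [50, 100], "max_depth": [4, 8]}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    cv=3,
    n_jobs=-1,  # run candidate fits in parallel
)
search.fit(X, y)
print(search.best_params_)
```

Each of the four grid points is cross-validated independently, so the search parallelizes naturally; distributing those fits across GPUs or nodes follows the same embarrassingly parallel structure.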