Omniverse Replicator augments costly, labor-intensive human-labeled real-world data, which can be error prone and incomplete, with the ability to create large, diverse, and physically accurate datasets tailored to the needs of AV and robotics developers. It also enables the generation of ground-truth data that is difficult or impossible for humans to label, such as velocity, depth, occluded objects, adverse weather conditions, or the movement of objects tracked across sensors.

In this training lab, participants will learn to generate a synthetic dataset from assets and scenes using Omniverse Replicator in a hands-on GPU environment.

By participating in this training lab, you'll learn the following topics:
- The case for synthetic data
- How to place a target asset in randomized environments to create a dataset (see the sketch after this list)
- How to turn the dataset into annotated data for training your own model

Prerequisite(s):
- Intermediate understanding of Python (including classes, objects, and decorators)
- Basic understanding of machine learning and deep learning concepts and pipelines
- Windows or native Linux computer with the ability to install Omniverse Launcher and the Omniverse Streaming Client application
- Internet bandwidth sufficient to support the Omniverse client/server stream (performance will vary)

Download the OV client for Linux / Windows
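To give a feel for the randomize-and-annotate workflow covered in the lab, the snippet below is a minimal sketch using the omni.replicator.core scripting API (run inside an Omniverse Kit app such as the Script Editor). It is not the lab's actual exercise: the asset path, output directory, pose ranges, and the randomize_target helper are hypothetical placeholders.

```python
import omni.replicator.core as rep

with rep.new_layer():
    # Camera and render product that frame the scene
    camera = rep.create.camera(position=(0, 0, 500), look_at=(0, 0, 0))
    render_product = rep.create.render_product(camera, (1024, 1024))

    # Hypothetical target asset plus a simple plane standing in for the environment
    target = rep.create.from_usd("omniverse://localhost/Library/target_asset.usd")  # placeholder path
    rep.create.plane(scale=100, visible=True)

    # Randomizer: move the target to a new pose within a range on each frame
    def randomize_target():
        with target:
            rep.modify.pose(
                position=rep.distribution.uniform((-200, -200, 0), (200, 200, 0)),
                rotation=rep.distribution.uniform((0, 0, -180), (0, 0, 180)),
            )
        return target.node

    rep.randomizer.register(randomize_target)

    # Trigger the randomizer for a fixed number of generated frames
    with rep.trigger.on_frame(num_frames=100):
        rep.randomizer.randomize_target()

    # Writer emits RGB images plus 2D bounding-box annotations as ground truth
    writer = rep.WriterRegistry.get("BasicWriter")
    writer.initialize(output_dir="_output", rgb=True, bounding_box_2d_tight=True)
    writer.attach([render_product])

rep.orchestrator.run()
```

Under these assumptions, each generated frame pairs an image with annotation files written to the output directory, which is the kind of labeled dataset the lab then uses for model training.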
*Please disregard any reference to "Event Code" for access to training materials. "Event Codes" are only valid during the original live session.
Explore more training options offered by NVIDIA DLI.