      Multimodal-to-Omniverse: An End-to-End Pipeline for 3D Model Generation from Voice or Text, Ninja Edition

      , Senior Solution Architect, NVIDIA
      , Senior AI Solution Architect, NVIDIA
      , Sr. Solution Architect, NVIDIA
Learn how to run a multimodal end-to-end pipeline that generates 3D assets from your own voice. We'll use state-of-the-art NIM Agent LLMs with LangChain, LangGraph, and LangServe to create an agentic pipeline that transforms text to 2D to 3D, generating 3D assets that are imported via CAD Converter and brought to life with the Omniverse Kit SDK. In the end, you'll be able to create a 360-degree beauty-shot video as a trophy to share on social media.
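The staged flow the abstract describes (voice → text → 2D image → 3D asset) can be sketched as a simple chain of steps. This is a minimal illustrative sketch only: the function names and string stand-ins are assumptions, not the session's actual code, which uses NIM Agent LLMs orchestrated with LangChain/LangGraph, Riva for speech, and CAD Converter plus the Omniverse Kit SDK for the final asset.

```python
# Hypothetical sketch of the pipeline stages described above.
# Each function is a placeholder for the real service call named in its comment.

def transcribe(audio_path: str) -> str:
    # Stand-in for Riva speech-to-text: audio in, text prompt out.
    return f"prompt from {audio_path}"

def text_to_2d(prompt: str) -> str:
    # Stand-in for a 2D image-generation step driven by an LLM-refined prompt.
    return f"image({prompt})"

def image_to_3d(image_ref: str) -> str:
    # Stand-in for the 2D-to-3D asset-generation step.
    return f"mesh({image_ref})"

def run_pipeline(audio_path: str) -> str:
    # Chain the stages; the resulting asset would then go through
    # CAD Converter and be rendered in an Omniverse Kit application.
    return image_to_3d(text_to_2d(transcribe(audio_path)))

print(run_pipeline("mic.wav"))
```

In the session's actual implementation, these stages are wired together as nodes in an agentic LangGraph graph rather than a fixed function chain, so an LLM can decide how to route and refine intermediate results.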
      Prerequisite(s):

      Basic understanding of Python.
      An understanding of Langchain and Omniverse is a plus but not mandatory.
Event: GTC 25
Date: March 2025
Industry: All Industries
Topic: Content Creation / Rendering - Virtual Production
Level: General
NVIDIA Technology: RTX GPU, Riva, Omniverse, NVIDIA NIM
Language: English
Location: