      Build a World of Interactive Avatars Based on NVIDIA Omniverse, AIGC, and LLM

      , Solution Architect, NVIDIA
      , Engineering VP, Mobvoi
      With the emergence of large language models (LLMs) and AI-generated content (AIGC) technologies, it has become possible to build more intelligent, immersive, and interactive digital humans. Together with Mobvoi, a company focused mainly on speech AI and generative AI technologies, we'll discuss the challenges of building an interactive digital avatar, including how to replicate a person's voice, expressions, and general behavior, and how to make the avatar react to a human's questions or commands.

      Join this session to:
      • Learn how to drive and render a real-time, realistic digital human via Omniverse Audio2Face.
      • See how to leverage speech and conversational AI capabilities, such as NVIDIA Riva, Mobvoi's TTS service that can clone a voice within three seconds, and ChatGLM, to create an interactive, autonomous avatar (a conceptual sketch of this loop follows the list).
      • See our explorations of combining AIGC/LLM technologies with digital avatars, such as video/motion generation, command understanding, and code generation.
      • Get a behind-the-scenes look at how we combine Audio2Face with Unreal Engine.
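
The bullets above describe one conversational turn of an interactive avatar: recognize the user's speech, generate a reply with an LLM, synthesize the reply in a cloned voice, and drive the facial animation from that audio. The following is a minimal conceptual sketch of that loop in Python; every function here is a hypothetical placeholder (not a real Riva, ChatGLM, Mobvoi, or Audio2Face API), standing in only for the kinds of components the session covers.

# Hypothetical interactive-avatar turn. All helpers below are placeholders,
# not actual NVIDIA or Mobvoi APIs.

def transcribe_speech(audio: bytes) -> str:
    """Placeholder for speech recognition (e.g., a Riva ASR call)."""
    raise NotImplementedError

def generate_reply(question: str) -> str:
    """Placeholder for an LLM call (e.g., ChatGLM) that produces the avatar's answer."""
    raise NotImplementedError

def synthesize_speech(text: str, voice_id: str) -> bytes:
    """Placeholder for TTS with a cloned voice (e.g., a voice-cloning TTS service)."""
    raise NotImplementedError

def animate_face(audio: bytes) -> None:
    """Placeholder for streaming audio to Audio2Face to drive facial animation."""
    raise NotImplementedError

def avatar_turn(user_audio: bytes, voice_id: str) -> None:
    """One conversational turn: hear, think, speak, animate."""
    question = transcribe_speech(user_audio)            # speech -> text
    answer = generate_reply(question)                   # text -> text (LLM)
    reply_audio = synthesize_speech(answer, voice_id)   # text -> cloned-voice audio
    animate_face(reply_audio)                           # audio -> facial animation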
      Event: GTC 24
      Date: March 2024
      NVIDIA Technologies: Aerial, DRIVE AV
      Industry: All Industries
      Topic: Avatar / Animation Generation
      Level: Intermediate Technical
      Language: English
      Location: