Novel Uses for AI to Drive Monetization in Large M&E Archives
, EVP Media, Entertainment, Gaming and Sports, Monks
In 1928, a Norwegian doctor put an original print of Carl Dreyer’s silent movie “The Passion of Joan of Arc” in a janitor’s closet in an insane asylum. The print was rediscovered during renovations 53 years later, changing our understanding of the film. Hopefully, most media companies don’t store their footage in janitor’s closets, but most have hundreds of thousands of hours sitting dormant, stored with little more information than the file name.
Learn how using NIMS-based contextual analysis for metadata ingest within a software-defined broadcast pipeline can make your active media and archive searchable by a frame’s visual contents, and even the greater context around it. These data open up dynamic and novel opportunities for monetization and personalization. We'll discuss how contextual metadata can be produced and refined recursively within a TAMS (Time-Addressable Media Store) framework, adding and amending metadata each time footage is retrieved. These data can then be used to build retrieval-augmented generation (RAG) models for everything from anomaly detection to predictive/suggestive editing and FAST-content generation. Discover how a news organization will use metadata to “Control F” through its archive, and how a racing company will use its contextual data to build models that predict camera switching, improve safety, and generate user-specific narratives. We’ll close with an exploration of how media & entertainment (M&E) contextual data pipelines can be used outside our industry, from collision avoidance in self-driving cars to sonar analysis.
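To make the pipeline concrete, here is a minimal Python sketch of the idea under loose assumptions: frame descriptions are stored against timecodes, amended on each retrieval, and searched by embedding similarity. The names describe_frame, embed_text, FrameRecord, and TimeAddressableIndex are hypothetical placeholders, not the actual TAMS or NIM interfaces, and the hash-based embedding merely stands in for a real model so the sketch runs on its own.

# Illustrative sketch only: every name below is hypothetical and does not
# reflect the actual TAMS or NIM APIs; the placeholder models keep the
# example self-contained and runnable.
from dataclasses import dataclass, field
import hashlib
import math


def describe_frame(frame_id: str) -> str:
    # Placeholder: a real pipeline would call a vision-language model here.
    return f"auto-generated description for {frame_id}"


def embed_text(text: str, dims: int = 32) -> list[float]:
    # Placeholder embedding: a deterministic hash-based unit vector so the
    # sketch runs without a model; swap in a real text-embedding service.
    digest = hashlib.sha256(text.encode()).digest()[:dims]
    vec = [(b - 128) / 128 for b in digest]
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]


@dataclass
class FrameRecord:
    timecode: str                  # time-addressable key, e.g. "01:23:45:10"
    description: str               # contextual metadata for this frame
    embedding: list[float]
    revisions: list[str] = field(default_factory=list)  # refinement history


class TimeAddressableIndex:
    """Keeps frame metadata keyed by timecode and supports text search."""

    def __init__(self) -> None:
        self.records: dict[str, FrameRecord] = {}

    def ingest(self, timecode: str, frame_id: str) -> None:
        # Initial pass: describe the frame and index its embedding.
        description = describe_frame(frame_id)
        self.records[timecode] = FrameRecord(
            timecode, description, embed_text(description)
        )

    def refine(self, timecode: str, new_context: str) -> None:
        # Each retrieval can amend the metadata, so the index gets richer
        # the more the footage is used.
        record = self.records[timecode]
        record.revisions.append(record.description)
        record.description = new_context
        record.embedding = embed_text(new_context)

    def search(self, query: str, top_k: int = 3) -> list[tuple[str, str]]:
        # "Control F" for the archive: rank frames by cosine similarity
        # between the query and each stored description.
        q = embed_text(query)
        ranked = sorted(
            self.records.values(),
            key=lambda r: -sum(a * b for a, b in zip(q, r.embedding)),
        )
        return [(r.timecode, r.description) for r in ranked[:top_k]]


if __name__ == "__main__":
    index = TimeAddressableIndex()
    index.ingest("00:00:01:00", "race_cam_01_frame_000025")
    index.ingest("00:00:02:00", "race_cam_02_frame_000050")
    index.refine("00:00:02:00", "overtake on turn 3, two cars side by side")
    print(index.search("overtake on the track"))

In a production setting the search step would sit in front of a RAG workflow, with the retrieved descriptions handed to a language model as context for editing suggestions or narrative generation.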