Live demo to highlight Viz One MAM integration with Vū Digital for automatic metadata extraction and analysis
Las Vegas, NV (April 11, 2016) – Vizrt and Vū Digital are partnering to provide powerful automated tools that enrich assets in the Viz One media asset management (MAM) system with metadata.
Vizrt will demonstrate the integration between Viz One and Vū Digital at the National Association of Broadcasters 2016 annual show here next week, the world’s largest event focused on the intersection of technology, media and entertainment.
Both companies will demonstrate the solution at the week-long industry conference, which begins on April 18 and is expected to attract more than 103,000 media and entertainment professionals, with more than 1,700 companies showcasing new technologies that are driving the future of digital media and entertainment.
Vizrt and Vū Digital aim to help media companies create more content, and more revenue, from the same media. Assets enriched with metadata help users find, edit, approve and publish content more easily and quickly, and also make automated workflows possible. The combination of Vizrt's MAM and Vū Digital delivers that efficiency.
“Metadata has never been more important for broadcasters to enrich their media assets. As broadcasters look to increase their automated workflows, its importance will continue to grow,” said Ximena Araneda, Executive Vice President of Video Workflows for Vizrt. “For the same reason, metadata has also become increasingly important for most of our products, as the ability to reuse it saves time in production and enables us to automate many of the steps.”
Automatically extracting metadata from video
Vū Digital is a video meta-tagging solution that uses computer vision algorithms, distributed computation and cloud-based processing to analyze videos. It identifies a video’s contents – objects, faces, brands and logos – extracts on-screen text, and creates an audio transcript. All of this information is converted into invaluable, time-stamped metadata, transforming digital collections into indexable and searchable assets.
The end result is data that can be automatically imported and connected to assets in Viz One. With the addition of time-stamped metadata, Vū Digital now powers Viz One’s search tool. Vū eliminates the need to manually tag video content, enabling Viz One users to quickly search, discover, measure and manage their video assets. For example, with Vū’s logo detection, users can quickly quantify how long a logo appeared throughout a video.
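To make the logo example concrete, here is a minimal sketch of how time-stamped detection metadata could be used to total a logo's screen time. Vū's actual metadata schema is not published in this release, so the record format, field names and fixed sampling interval below are all assumptions for illustration only.

```python
# Hypothetical sketch: assumes Vū emits one detection record per sampled
# frame, with "timestamp", "type" and "label" fields. The real schema
# may differ.

def logo_screen_time(detections, logo, frame_interval=1.0):
    """Sum the seconds a given logo was on screen, assuming one
    detection record per sampled frame, frame_interval seconds apart."""
    return sum(frame_interval for d in detections
               if d["type"] == "logo" and d["label"] == logo)

detections = [
    {"timestamp": 0.0, "type": "logo", "label": "AcmeCola"},
    {"timestamp": 1.0, "type": "logo", "label": "AcmeCola"},
    {"timestamp": 2.0, "type": "face", "label": "news anchor"},
    {"timestamp": 3.0, "type": "logo", "label": "AcmeCola"},
]

print(logo_screen_time(detections, "AcmeCola"))  # 3.0
```

Because every record is time-stamped, the same data also answers "when" questions (first appearance, longest stretch) without re-analyzing the video.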
Vū Digital was able to integrate with Viz One in just a few days using the powerful REST APIs available for the MAM system. “We believe that Vū’s granular metadata will become the expectation and not the exception as far as video metadata is concerned,” said Wade Smith, Vice President of Operations for Vū Digital.
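The release names Viz One's REST APIs as the integration point but does not describe the endpoints, so the sketch below shows only the general shape of such an integration. The host, route and payload layout are invented placeholders, not the real Viz One API.

```python
import json

# Placeholder values: the actual Viz One endpoint and payload shape are
# not specified in this release.
VIZ_ONE_BASE = "https://vizone.example.com/api"

def build_metadata_request(asset_id, detections):
    """Assemble the URL and JSON body for attaching time-stamped
    metadata to an existing asset (hypothetical route)."""
    url = f"{VIZ_ONE_BASE}/assets/{asset_id}/metadata"
    body = json.dumps({"timeline": detections})
    return url, body

url, body = build_metadata_request("clip-42", [
    {"timestamp": 12.5, "type": "logo", "label": "AcmeCola"},
])
print(url)  # https://vizone.example.com/api/assets/clip-42/metadata
```

An actual integration would POST that body with authentication; the point is that the enrichment arrives as plain HTTP requests against the MAM's existing API, which is why the hookup took only days.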
About Vizrt
Vizrt provides real-time 3D graphics, studio automation, sports analysis and asset management tools for the media and entertainment industry. This includes interactive and virtual solutions, animations, maps, weather, video editing, compositing, and playout tools. Vizrt has customers in more than 100 countries worldwide, including CNN, CBS, Fox, BBC, BSkyB, Al Jazeera, NDR, ITN, ZDF, Star TV, Network 18, TV Today, CCTV, NHK and the list keeps growing. Vizrt has nearly 600 employees and operates in 40 offices worldwide. Vizrt is privately owned by Nordic Capital Fund VIII.
About Vū Digital
Vū Digital, LLC was formed in 2013 as a technology company committed to delivering new and innovative solutions for digital content. Vū’s Video-to-Data (V2D) product converts video to data using face recognition, object recognition, brand recognition, music identification, and automated speech recognition (ASR). Vū’s algorithms and its advanced architecture for distributing processing jobs are patent pending and one-of-a-kind in the marketplace. The core technology splits a video into two components, audio and video frames. Both components are then processed using ASR, text extraction from images, facial recognition and image recognition. The output is metadata that includes timestamped references to individual frames. This technological approach makes possible video classification and clustering, search engine indexing, and content personalization, including targeted advertising.
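The split-and-merge pipeline described above can be sketched as follows. The recognizers here are stand-in stubs (Vū's real models are proprietary and not described in this release); the sketch only illustrates the stated structure: process the audio track and the sampled frames independently, then merge everything into one time-ordered metadata stream.

```python
# Illustrative sketch of the V2D split-and-merge pipeline. The
# "recognizers" are stubs standing in for Vū's proprietary ASR, face,
# object, brand and on-screen-text models.

def transcribe_audio(audio_track):
    """Stub for automated speech recognition over the audio component."""
    return [{"timestamp": ts, "type": "speech", "text": word}
            for ts, word in audio_track]

def analyze_frame(timestamp, frame_labels):
    """Stub for face/object/brand recognition and text extraction
    applied to one sampled video frame."""
    return [{"timestamp": timestamp, "type": kind, "label": label}
            for kind, label in frame_labels]

def video_to_data(audio_track, frames):
    """Process audio and frames independently, then merge the results
    into a single stream of metadata with timestamped frame references."""
    records = transcribe_audio(audio_track)
    for timestamp, frame_labels in frames:
        records.extend(analyze_frame(timestamp, frame_labels))
    return sorted(records, key=lambda r: r["timestamp"])

metadata = video_to_data(
    audio_track=[(0.4, "welcome"), (0.9, "back")],
    frames=[(0.0, [("face", "news anchor")]),
            (1.0, [("logo", "AcmeCola")])],
)
print([r["timestamp"] for r in metadata])  # [0.0, 0.4, 0.9, 1.0]
```

Because each component is processed independently, the per-frame and per-utterance jobs parallelize naturally, which is where the distributed-processing architecture mentioned above comes in.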