Perifery’s Jason Perr explores the challenges of managing large volumes of content, and shows how AI pre-processing at the edge can improve efficiency, lower costs and speed up delivery.


Artificial intelligence is bringing far-reaching changes to media and entertainment (M&E) workflows, making it easier to store and access the vast amounts of content that companies create and archive today. In this article, Jason Perr, AI Project Consultant at Perifery, explores the challenges M&E companies face in storing and managing content, and shows how AI pre-processing at the edge improves efficiency, reduces costs and speeds up delivery.

A growing number of M&E companies must now deal with footage libraries that represent years and even decades of content. “Generating high-quality content is now easier and more affordable than it was five or 10 years ago and, consequently, the pace of video production is accelerating. Organisations of all sizes are finding new ways to use video to reach their customers and partners,” Jason said.

Defining, Finding and Using Content

His company, Perifery, a new division of DataCore, develops edge devices and software for M&E applications that speed up content monetisation and create more cost-predictable options for storage, AI services and pre-processing.

Metadata is a critical factor in monetising and repurposing content, but if media assets have not been appropriately tagged with relevant metadata, those tasks become very challenging for M&E companies. He said, “Editors and post-production teams don’t have hours to spend searching for specific clips, nor is that a cost-effective approach. While newer content is generally assigned metadata, the usefulness of that metadata can be questionable.

“The production team for a TV series will typically tag original content with basic information such as the season, episode and keywords. But the needs of a marketing and promotional team are quite specific, and the content metadata might not be applicable for them. A promotional team will struggle to find content with hilarious one-liners, for instance, with only basic metadata to work with.”


Exclusive Expertise

Furthermore, the value of an M&E library often hinges on a small set of people. “It’s not uncommon for media companies to rely on one or two production experts who know every piece of content that’s ever been shot. Whenever someone needs footage, they ask that person to find it.

“Let’s say multiple people in the company are working on marketing, advertising and publicity campaigns simultaneously. They all need access to the historic content, which creates a bottleneck in the workflow. Under these circumstances, it could take hours or days for the editor or production expert to find the content that they need.

“Moreover, traditional asset management systems are complex. Only a handful of editors are trained on how to search for content. M&E companies need to simplify access to footage, enabling anyone to perform a simple search and quickly find the exact digital assets they need.”

Close to the Source

Jason noted that M&E companies have recently been embracing cloud-based AI applications and services to find content quickly and efficiently. However, the cloud has an unpredictable cost model. Moreover, working in the cloud requires significant time and effort to upload, download and manage pre-processing services and data coming from various sources.

For those reasons, the industry is shifting toward running pre-processing tasks at the edge. Edge computing moves some of an organisation’s storage and compute resources closer to the data’s actual source. Instead of moving the raw data to a central data centre for processing and analysis, the work is carried out where the data is generated. Only the result of that computing at the edge is sent back to the data centre for review and various applications.
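As a rough illustration of this pattern, the minimal Python sketch below pre-processes a clip on the edge device and forwards only the compact metadata, rather than the raw footage, to the central data centre. Every name in it is an assumption for the example (the endpoint URL, the media path and the stand-in analysis step); it is not Perifery's implementation.

```python
import json
import urllib.request
from pathlib import Path

# Hypothetical edge pre-processing step: analyse footage where it is captured
# and send only the resulting metadata upstream, never the raw video itself.

CENTRAL_METADATA_ENDPOINT = "https://example.org/assets/metadata"  # placeholder URL


def analyse_clip(clip_path: Path) -> dict:
    """Run local analysis over a clip and return compact metadata.

    The tagging logic here is a stand-in; a real edge device would call
    local object-recognition and transcription models instead.
    """
    return {
        "file": clip_path.name,
        "size_bytes": clip_path.stat().st_size,
        "tags": ["placeholder-tag"],      # e.g. detected objects, scenes
        "transcript_excerpt": "",          # e.g. opening lines of auto transcription
    }


def push_metadata(metadata: dict) -> None:
    """Send only the metadata (a few kilobytes) to the central data centre."""
    body = json.dumps(metadata).encode("utf-8")
    request = urllib.request.Request(
        CENTRAL_METADATA_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)


if __name__ == "__main__":
    for clip in Path("/media/incoming").glob("*.mp4"):
        push_metadata(analyse_clip(clip))
```

The point of the pattern is that only the result of the edge computation crosses the network; the multi-gigabyte source files stay where they were generated.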


AI Project Consultant at Perifery, Jason Perr

The number of devices connected to the internet, and the volume of data they produce, are growing faster than many existing data centre infrastructures can handle. Meanwhile, organisations continue to generate more of their data outside of data centres. Moving that much data quickly enough to be practical puts a strain on the global internet, so the focus is shifting from centralised approaches to the logical edge.

Divided Approach

“Using AI-driven pre-processing tools, media companies can avoid the data transfers associated with the cloud, improving efficiency, reducing costs and enabling faster delivery of digital assets,” said Jason. “Processing at the edge can be more cost-effective than the pay-as-you-go business model cloud services offer because users can process as much data as they want without paying per second.”

A further attraction is enhanced security for media and entertainment applications. By processing data at the edge, a company can securely store and access valuable content locally, reducing the risk of data breaches or unauthorised access.

Nevertheless, using AI in the cloud still has certain advantages. For some kinds of tasks, such as general large language model text generation, the cloud remains the faster and more capable option.

Jason said, “By splitting processing between the cloud and the edge, media companies can simplify and speed up their workflows. Running AI at the edge reduces the load on local or cloud storage by enabling users to perform deep analysis and metadata enrichment on content locally when a tight turnaround is needed. In a hybrid environment, media companies can still rely on cloud AI for certain tasks, such as running algorithms that haven’t yet moved out to the edge, or applying them to data that has already been moved off the edge devices without pre-processing, such as a historical archive.”
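One way to picture that split is a simple routing rule: keep latency-sensitive enrichment on the edge device where the footage already sits, and fall back to a cloud service for algorithms that only run there or for archive material that was never pre-processed. The sketch below illustrates that decision logic under stated assumptions; the task fields and both destinations are made up for the example and are not a Perifery API.

```python
from dataclasses import dataclass

# Minimal sketch of hybrid routing: pick edge or cloud per pre-processing task.
# All names here are illustrative assumptions, not a real product interface.


@dataclass
class Task:
    name: str
    asset_is_local: bool    # is the footage still on the edge device?
    runs_on_edge: bool      # is an edge implementation of the model available?
    tight_turnaround: bool  # does the job need results quickly?


def route(task: Task) -> str:
    """Decide where a pre-processing task should run."""
    if task.asset_is_local and task.runs_on_edge and task.tight_turnaround:
        return "edge"   # deep analysis and metadata enrichment stay local
    return "cloud"      # e.g. LLM text generation, or un-pre-processed archives


jobs = [
    Task("object recognition on today's shoot", True, True, True),
    Task("summarise historical archive episodes", False, False, False),
]
for job in jobs:
    print(f"{job.name} -> {route(job)}")
```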


Accuracy and Automation

AIOps (artificial intelligence for IT operations) can be used throughout the media pipeline, from acquisition of new content to delivery. Designed to streamline IT workflows, AIOps employs natural language techniques alongside AI-enhanced predictive data movements, facial recognition, auto transcription, auto summarisation and other functions. Not only can organisations improve the accuracy of content indexing, but AI also automates metadata generation to extract relevant information such as object identification, scene descriptions, location and timestamps.

Tagging files becomes straightforward, as AI creates tags based on the content, metadata, and context to enable faster, more efficient retrieval and better management of massive volumes of media. Ultimately, AIOps improves media workflows by allowing companies to curate and automatically prepare content for the next step of the life cycle based on historic processes.
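As a hedged sketch of what such an automated tagging step might look like, the example below combines model outputs and file context into a single searchable record. The detection and transcription helpers are placeholders for whatever models a given system uses; the field names are assumptions, not a defined schema.

```python
import json
from datetime import datetime, timezone

# Illustrative auto-tagging step: merge model outputs and context into one
# metadata record per asset. The two helper functions are placeholders.


def detect_objects(clip_path: str) -> list:
    return ["presenter", "studio desk"]        # stand-in for a real detector


def transcribe(clip_path: str) -> str:
    return "Welcome back to the show..."       # stand-in for auto transcription


def build_metadata(clip_path: str, season: int, episode: int) -> dict:
    return {
        "file": clip_path,
        "season": season,
        "episode": episode,
        "objects": detect_objects(clip_path),
        "transcript": transcribe(clip_path),
        "indexed_at": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    record = build_metadata("s03e07_interview.mp4", season=3, episode=7)
    print(json.dumps(record, indent=2))
```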

Jason described how Perifery’s own development has resulted in its new Perifery AI+ system for content production workflows. Perifery AI+ offers a suite of AI-enabled pre-processing functions, including object recognition and smart archiving, through a single user interface. Its simple, efficient approach is built to help increase monetisation of digital media assets while reducing costs and speeding delivery.

www.perifery.com