Digital Media World Awards Celebrate Winners

Showcased here are the Gold winners of the 2019 Digital Media World Awards. Many thanks to everyone who participated. Awards events give us a snapshot of our industry – what has been achieved and what media and broadcast clients are aiming for in the coming year.

The number of entrants and the quality of products and services were higher than ever this time. As a result, several Special Merit Award winners have also been selected – see them listed here, just below the Gold Winners.

Remote production capabilities including the cloud, flexibility and customisation during IP migration, artificial intelligence, better compression and other ways to optimise streaming – and more – are trends you will see in the descriptions below, as well as in the information supplied for all entries, here.

Awards events like Digital Media World are exciting for the winners, but in fact every company, manufacturer and developer that enters gains a chance to tell the industry what they do best and what they are working on. Look out for the 2020 Digital Media World Awards call for entries and don’t miss the opportunity to be a part of it.

Zixi Software-Defined Video Platform
IP Broadcast Solutions - Distribution

The Zixi cloud-based and on-premise video platform supports reliable broadcast-quality live delivery over all-IP and hybrid IP networks. It is used to source, manage, localise and distribute live video streams to thousands of channels in over 100 countries, economically and securely.

The platform is software-defined, based on a tight integration of four elements that allow users to deploy, manage and monitor live video workflows using software and integrated devices, regardless of the underlying network infrastructure.

The four elements are ZEN Master, a cloud-based orchestration interface that gives control over large, complex networks of live broadcast channels; the Video Solutions Stack, a set of software tools and core media processing functions used to transport live video over any IP network, correcting for packet loss and jitter; the Zixi Protocol, which detects and dynamically adjusts to varying network conditions and employs error correction during video transport over IP; and the Zixi Enabled Network of about 100 integrated technology and service provider partners, which gives users access to an ecosystem of resources for live video.
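
As an illustration of the general technique behind such protocols – detecting lost packets by sequence number, requesting retransmission, and smoothing arrival times in a jitter buffer – here is a minimal Python sketch. It is not Zixi's implementation; all names, sizes and timings are hypothetical.

# Rough sketch of ARQ-style recovery as used by reliable video transport
# protocols in general; this is NOT Zixi's protocol, and all names, sizes
# and timings are hypothetical.
import time

JITTER_BUFFER_MS = 200  # hold packets briefly so retransmissions can arrive in order

class ReliableReceiver:
    def __init__(self, request_retransmit):
        self.request_retransmit = request_retransmit  # callback towards the sender
        self.expected_seq = 0
        self.buffer = {}  # seq -> (arrival_time, payload)

    def on_packet(self, seq, payload):
        if seq < self.expected_seq:
            return  # late duplicate of a packet already released
        # Detect a gap: every missing sequence number is asked for again (ARQ)
        for missing in range(self.expected_seq, seq):
            if missing not in self.buffer:
                self.request_retransmit(missing)
        self.buffer[seq] = (time.monotonic(), payload)

    def drain(self):
        # Release packets in order once they have aged past the jitter window
        released, now = [], time.monotonic()
        while self.expected_seq in self.buffer:
            arrived, payload = self.buffer[self.expected_seq]
            if (now - arrived) * 1000 < JITTER_BUFFER_MS:
                break
            released.append(payload)
            del self.buffer[self.expected_seq]
            self.expected_seq += 1
        return released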

SCISYS OpenMedia
IP Broadcast Solutions - Production

OpenMedia is a customisable, scalable newsroom system that some 50,000 users globally employ for cross-media research, planning and creation of daily news, magazines and sports publications.

In the newsrooms of today, many journalists work independently around the world with much larger amounts of information. OpenMedia is an integrable system used to structure information in advance, focussing on the story to allow TV, radio and online journalists to share topics and contributions across platforms and work more efficiently together despite time and distance. OpenMedia encompasses an NRCS tuned for TV and digital media, a dashboard system for collaboration and coordination, Personal, Team and Public boards to organise data from many sources - and other functions.

Vizrt Group NDI 4
IP Broadcast Solutions - Switching, routing, monitoring

NewTek developed the Network Device Interface (NDI) standard to enable video-compatible products to communicate, deliver and receive broadcast quality video at low latency, suitable for switching in a live production environment. NDI can encode, transmit and receive many streams of frame-accurate video and audio in real time.

In 2019, NDI 4 was released and includes the ability to record unlimited NDI video channels with any video or audio format, including resolutions 4K and higher. Recordings are time-stamped and time base corrected, keeping any number of channels on any number of machines accurately synchronised and making multi-cam editing possible. Video from different sources from the same event can be edited simultaneously.

NDI connects NDI devices with each other on a network, allowing automatic discovery. NDI 4 also supports a 16-bit video path and more input and output colour formats, and improves video quality without an increase in bit rate. NDI recording supports the alpha channel and multiple audio channels, and works with 16 bits-per-pixel sources at full precision.


NDI 4 supports a discovery server, which means a server or the cloud can be used to coordinate and connect all sources, and includes NDI | HX (low bandwidth, H.264 compression) integrated into the NDI Embedded SDK. Support for Epic Games Unreal Engine in NDI 4 makes it possible for apps created in Unreal to appear as NDI sources.
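
Conceptually, a discovery server replaces per-subnet auto-discovery with a central registry that sources register with and receivers query, which is what allows a server or the cloud to coordinate all sources. A minimal sketch of that idea follows; it is not the NDI SDK, and the class, names and values are hypothetical.

# Minimal sketch of the discovery-server idea: sources register with a known
# host, receivers query it, so discovery works across subnets and in the cloud.
# This illustrates the concept only; it is not NewTek's NDI SDK or protocol.

class DiscoveryServer:
    def __init__(self):
        self.sources = {}  # name -> (host, port)

    def register(self, name, host, port):
        self.sources[name] = (host, port)

    def unregister(self, name):
        self.sources.pop(name, None)

    def list_sources(self):
        return dict(self.sources)

# A graphics app registers itself; a switcher asks the server what is available.
server = DiscoveryServer()
server.register("UNREAL-PC (Scoreboard)", "10.0.1.25", 5961)
server.register("CAM-1 (Wide)", "10.0.1.31", 5961)
print(server.list_sources())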

Edgeware Virtual Channel Creation
Virtualisation - Channel Virtualisation

Virtual Channel Creation is used to create alternative OTT versions of linear channels that are tailored and adapted to suit different audiences, as a way for broadcasters to increase the value of their existing content and compete with online TV services. Fans of a certain sports team could see a tailored version of a programme that shows more content related to their team in the build-up to a match. Once the match starts, all fans would revert back to watching the same programme. Having this level of customisation appeals to viewers’ interests and helps meet the demand for tailored programming.

Broadcasters can also stitch together content from different sources to create virtual channels. For channels that share some content, that shared content only needs to be stored and encoded once, reducing encoder and storage capacity needs. For example, regional content can be stitched into the national channel to create each full regional channel. Edgeware can also be used to manage restrictions in content distribution rights by inserting replacement segments or programs, or by providing a blackout system, even for time-shifted content.
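
Manifest stitching is what makes the shared content reusable: a regional playlist simply points at the nationally encoded segments and substitutes regional ones only where needed. A simplified, hypothetical HLS-style illustration (segment names are made up):

# Simplified illustration of manifest stitching: a regional channel playlist
# reuses the nationally encoded segments and swaps in regional ones only where
# needed, so shared content is stored and encoded once. Names are hypothetical.

NATIONAL = ["nat_0001.ts", "nat_0002.ts", "nat_0003.ts", "nat_0004.ts"]
REGIONAL_OPT_OUT = {2: "north_news_0003.ts"}  # replace the third segment only

def build_regional_playlist(national, overrides, target_duration=6):
    lines = ["#EXTM3U", f"#EXT-X-TARGETDURATION:{target_duration}"]
    for i, segment in enumerate(national):
        lines.append(f"#EXTINF:{target_duration:.1f},")
        lines.append(overrides.get(i, segment))
    lines.append("#EXT-X-ENDLIST")
    return "\n".join(lines)

print(build_regional_playlist(NATIONAL, REGIONAL_OPT_OUT))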

Signiant Jet
Cloud Infrastructures - Delivery

Jet is a cloud-native SaaS solution for automated high-speed transfers of large media files between locations, either internally within a business or with external partners, customers and suppliers.  Jet has a visual interface for deploying and monitoring transfers and configuring alerts. It handles any size file and does not impose limits on the amount of data transferred or put constraints on bandwidth use. Instead it uses an intelligent transport mechanism that adapts to network conditions in real-time to optimise throughput. Interrupted transfers are automatically restarted from the point of failure, to support automation of large file transfers.
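
Resuming from the point of failure rather than from zero is the key behaviour for very large media files. The general pattern – persisting the byte offset after each confirmed chunk so a retry picks up where the last attempt stopped – can be sketched as below; this is a generic illustration, not Signiant's transport, and the paths and sizes are hypothetical.

# Generic sketch of checkpointed, resumable file transfer: the byte offset of
# the last confirmed chunk is persisted, so a retry resumes from the point of
# failure instead of resending the whole file. Not Signiant's implementation.
import json, os

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MB chunks, hypothetical

def send_file(path, send_chunk, checkpoint_path="transfer.state"):
    offset = 0
    if os.path.exists(checkpoint_path):
        with open(checkpoint_path) as state:
            offset = json.load(state).get(path, 0)
    with open(path, "rb") as source:
        source.seek(offset)
        while True:
            chunk = source.read(CHUNK_SIZE)
            if not chunk:
                break
            send_chunk(offset, chunk)  # raises on network failure
            offset += len(chunk)
            with open(checkpoint_path, "w") as state:
                json.dump({path: offset}, state)  # checkpoint after each chunk
    os.remove(checkpoint_path)  # whole file confirmed; clear the state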

Jet addresses the need to replace scripted FTP, rsync and other legacy transfer tools with an easier, faster, more reliable alternative, and incorporates multiple layers of security controls. Jet’s dashboard gives visibility into transfer activity and generates automated alerts on job status. Two parties can agree to regularly exchange files with one another through a secure handshake mechanism to save time.

LAWO - vm_dmv Distributed Multiviewer
Production - Processing

Lawo’s vm_dmv Distributed Multiviewer app applies the cluster concept to broadcast applications. Unlike apps that live on single processing blades, the vm_dmv is more bandwidth-efficient, scales with the size of the cluster, and separates multiviewer processing tasks into two distinct input and output stages.

The input stage handles ST2110/2022 IP streams or SDI inputs, monitors them and outputs downscaled versions with their monitoring information to the network. The output stage receives the downscaled signals, generates and lays out the multiviewer head, handles alarming based on the monitoring data, and outputs multiviewer head signals as ST2110/2022 IP streams or SDI signals.
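
The effect of the split is that each source is downscaled and probed once, while output stages only lay out the small proxies. A rough conceptual sketch follows; it is not Lawo's implementation, and frames are reduced to plain dictionaries purely for illustration.

# Conceptual sketch of the two-stage multiviewer split: input nodes downscale
# and probe each source once and publish small proxies; output nodes only lay
# the proxies out into a mosaic. Not Lawo's implementation.

def input_stage(source):
    """Runs once per input, on an input node: downscale and monitor."""
    proxy = {"name": source["name"], "width": 480, "height": 270}
    alarms = [] if source["has_signal"] else ["no signal"]
    return {"proxy": proxy, "alarms": alarms}

def output_stage(published, grid=(4, 4)):
    """Runs on an output node: lay out already-downscaled proxies."""
    head = []
    for index, item in enumerate(published):
        row, col = divmod(index, grid[1])
        head.append({"tile": (row, col), **item})
    return head

sources = [{"name": f"CAM {i + 1}", "has_signal": i != 2} for i in range(8)]
mosaic = output_stage([input_stage(s) for s in sources])
print(mosaic[2])  # the tile carrying the 'no signal' alarm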

This very low latency, high performance cluster architecture can potentially handle live production. The point of difference is that the vm_dmv’s tasks are broken down into smaller assignments, which can be processed by a group of C100 processing blades in a cluster, each blade acting as a node. One node acts as master, the single point of contact for systems used to control the multiviewer.

Videon EdgeCaster
Streaming - Compression

EdgeCaster delivers 4K HEVC- and H.264-encoded signals as part of an HLS, DASH and CMAF workflow, while simultaneously creating six different encoded output versions. By performing computationally intensive, expensive functions on the encoder, rather than in the cloud, the EdgeCaster achieves lower-than-broadcast latency while also reducing the cost of streaming.

With the processing power of the Snapdragon chip, the EdgeCaster streams at resolutions up to 4K at 30 fps using either H.264 or H.265/HEVC compression. As an AWS Elemental Technology Partner, Videon’s EdgeCaster interfaces directly to AWS Elemental MediaStore and Amazon CloudFront, enabling less than three seconds of latency, at scale, over public internet connections using standard HTTP-compatible applications for playback. As a result, EdgeCaster users can launch and scale up services, including live and interactive ones. Videon’s support for SRT also allows users to stream from building to building or across campuses while achieving latency of less than half a second.
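
The six encoded output versions amount to an adaptive bitrate ladder produced at the edge rather than in a cloud transcoder. A hypothetical example of such a ladder follows; the resolutions and bitrates are illustrative only, not Videon's shipped presets.

# Hypothetical example of a six-rung adaptive bitrate ladder of the kind an
# edge encoder can produce in parallel; values are illustrative only.
LADDER = [
    {"name": "2160p", "resolution": (3840, 2160), "codec": "hevc", "kbps": 15000},
    {"name": "1080p", "resolution": (1920, 1080), "codec": "h264", "kbps": 6000},
    {"name": "720p",  "resolution": (1280, 720),  "codec": "h264", "kbps": 3500},
    {"name": "540p",  "resolution": (960, 540),   "codec": "h264", "kbps": 2000},
    {"name": "360p",  "resolution": (640, 360),   "codec": "h264", "kbps": 1000},
    {"name": "240p",  "resolution": (426, 240),   "codec": "h264", "kbps": 500},
]
total_mbps = sum(r["kbps"] for r in LADDER) / 1000
print(f"{len(LADDER)} renditions, ~{total_mbps:.1f} Mbps total upstream")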

Matrox Monarch EDGE 4K/Multi-HD Encoder
Streaming - OTT/IPTV systems

The Monarch EDGE remote production encoder is designed to produce live, multi-camera events while lowering on-site expenses by keeping most of the production staff in-house. It synchronises and delivers remote camera feeds to a master control room or the cloud for production. Web or OTT content producers can buy the 4:2:0 8-bit version, while the 4:2:2 10-bit version suits more demanding, broadcast-quality productions.

The Monarch EDGE encoder can deliver a live stream of up to 3840x2160 at 60 fps to multiple destinations using varied streaming protocols. Each input can also be streamed at resolutions up to 1080p60 using the higher 4:2:2 H.264 encoding profile. Furthermore, multiple processes can be performed on each input by scaling and de-interlacing engines, enabling each input to be streamed at multiple resolutions and bitrates simultaneously, which is useful for remote monitoring.

EVS Overcam
Production - Cameras

Deployed through robotic cameras triggered by artificial intelligence, the Overcam system automates key camera positions around the field of play to help live sports productions capture more content, more efficiently. Overcam is powered by VIA Mind, EVS’ AI engine integrated with machine learning methods, and can use smart tracking to analyse the game’s key parameters, such as player and ball positions, and steer the cameras to frame the scene correctly. Trained on real customer footage, it can mimic traditional camera framing for each position.
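
The framing behaviour itself is learned from footage, but the underlying control idea – derive a point of interest from tracked player and ball positions and move pan and tilt a fraction of the way towards it each frame – can be sketched roughly as follows. This is a generic illustration, not EVS' VIA Mind engine; the weights and damping factor are hypothetical.

# Rough sketch of tracking-driven camera steering: weight the ball heavily,
# average in the players, then move pan/tilt a fraction of the way towards the
# target each frame. Generic illustration only, not EVS' VIA Mind engine.

BALL_WEIGHT = 3.0  # hypothetical: the ball dominates the framing
DAMPING = 0.15     # fraction of the remaining error corrected per frame

def point_of_interest(ball, players):
    xs = [ball[0]] * int(BALL_WEIGHT) + [p[0] for p in players]
    ys = [ball[1]] * int(BALL_WEIGHT) + [p[1] for p in players]
    return sum(xs) / len(xs), sum(ys) / len(ys)

def steer(camera, ball, players):
    target_x, target_y = point_of_interest(ball, players)
    camera["pan"] += DAMPING * (target_x - camera["pan"])
    camera["tilt"] += DAMPING * (target_y - camera["tilt"])
    return camera

camera = {"pan": 0.0, "tilt": 0.0}
for _ in range(30):  # 30 frames of play drifting towards the right wing
    camera = steer(camera, ball=(60.0, 20.0), players=[(55, 18), (62, 25), (58, 22)])
print(camera)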

Due to its real-time processing, the system can integrate with existing multi-camera live production workflows in which robotic cameras are mixed with manually operated cameras. This provides live sports productions with extra content to work with for more varied storytelling options. It also creates opportunities to cover smaller-budget live sports that have not usually been broadcast. Overcam runs on an EVS 2RU COTS server, either on-site or in a remote operation centre, and cameras are calibrated automatically.

Virtuoso JPEG XS media function
Production - Video Processing

Nevion’s software-defined media node, Virtuoso, now has a JPEG XS encoding and decoding media function. JPEG XS (ISO/IEC 21122) is a new encoding standard for video that achieves visually lossless output at the original quality, both for one-time and multiple concatenated compression, at a latency of less than a video frame. Typical compression ratios range from 6:1 for visually lossless output, to 10:1 and higher for high quality visual monitoring. This makes JPEG XS a good match for low latency real-time transport of HD and 4K/UHD video over wide-area networks (WANs), and for bandwidth-constrained campus and facility local area networks (LANs) when uncompressed transport is not viable.
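
To put those ratios in context, the approximate arithmetic for 10-bit 4:2:2 feeds at 60 frames per second works out as follows (active video only, for illustration):

# Back-of-the-envelope bandwidth for the JPEG XS compression ratios quoted above.
# 4:2:2 10-bit video carries 20 bits per pixel (10 for luma plus shared chroma).

def video_gbps(width, height, fps, bits_per_pixel=20):
    return width * height * fps * bits_per_pixel / 1e9

for name, (w, h) in {"HD 1080p60": (1920, 1080), "UHD 2160p60": (3840, 2160)}.items():
    raw = video_gbps(w, h, 60)
    print(f"{name}: uncompressed ~{raw:.2f} Gbps, "
          f"6:1 ~{raw / 6 * 1000:.0f} Mbps, 10:1 ~{raw / 10 * 1000:.0f} Mbps")

# HD 1080p60: uncompressed ~2.49 Gbps, 6:1 ~415 Mbps, 10:1 ~249 Mbps
# UHD 2160p60: uncompressed ~9.95 Gbps, 6:1 ~1659 Mbps, 10:1 ~995 Mbps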

The Nevion Virtuoso JPEG XS media function is powered by TICO-XS from intoPIX, supplying encoding/decoding capabilities and multi-channel support. It takes uncompressed video in either SDI or IP (SMPTE ST 2110-20) format, and outputs JPEG XS encoded video for transport over IP (SMPTE ST 2110-22).

Masstech Kumulate
Production - Storage

Kumulate automates and optimises storage and lifecycle management of video assets across production, processing, packaging and distribution. Industry segments range from news and sports to broadcast, archive and cinema. As assets evolve through their lifecycle, Kumulate builds a storage environment for each user’s operations and workflows. Kumulate ensures that assets and associated files are stored in the most cost-efficient storage tier - cloud, on-premise, disk or hybrid.

Kumulate is containerised and fully virtualised, allowing customers to deploy it on-premise or entirely in the cloud. It is modular, comprising storage, transcode and workflow orchestration, and focusses on automated workflows that replace manual processes for data migrations between storage tiers, disaster recovery, and integrated transcode, packaging and publishing to social media and other new formats.
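
A tiering policy of this kind generally reduces to rules that map an asset's age and activity to the cheapest tier that still meets the workflow's needs. A simplified, hypothetical illustration of such rules follows; it is not Kumulate's actual policy engine, and the thresholds and tier names are made up.

# Simplified, hypothetical illustration of lifecycle tiering rules: recently
# used assets stay on fast storage, colder assets migrate to cheaper tiers.
# Not Kumulate's policy engine; thresholds and tier names are invented.

def choose_tier(days_since_last_access, in_active_project):
    if in_active_project or days_since_last_access < 7:
        return "online-ssd"
    if days_since_last_access < 30:
        return "nearline-disk"
    if days_since_last_access < 365:
        return "object-cloud"
    return "deep-archive"

print(choose_tier(3, in_active_project=False))    # online-ssd
print(choose_tier(200, in_active_project=False))  # object-cloud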

Kumulate is an open system, with deep API and application level integrations into MAM, PAM, NLEs, News Room Computer Systems and custom built management and production systems, allowing control of content via existing infrastructure or Kumulate’s interface. It is hardware agnostic and also supports integration into cloud storage providers including AWS, Microsoft, Google, Wasabi and all native S3 platforms.

Integrations with AI engines form a layer on top of Kumulate’s search and metadata enrichment. Assets can be re-monetised with facial, object and location recognition services, or speech-to-text for unstructured content, to enable greater search and sharing across disparate global systems and groups.

V-Nova P+
Distribution - Compression

P+ is a highly optimised software library for encoding and decoding video streams enhanced with MPEG-5 Part 2, Low-Complexity Enhancement Video Coding (LCEVC), a standard developed by MPEG with contributions from V-Nova. When enhanced with LCEVC, existing encoding pipelines – using codecs such as AVC/H.264, HEVC, VP9 or AV1 and, in the future, VVC – can match or exceed the quality of an upcoming codec at the same bitrate, at a computational complexity as much as 4x lower.
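
Conceptually, LCEVC runs the base codec at a lower resolution and carries the remaining detail as thin enhancement layers of residuals. A toy numerical sketch of that idea follows; it is purely illustrative and is not V-Nova's codec.

# Toy sketch of the LCEVC idea: encode a downscaled base with an existing codec,
# then transmit the residual between the upscaled base and the original as a
# lightweight enhancement layer. Purely illustrative, not V-Nova's codec.

def downscale(samples):  # average pairs of samples (2:1)
    return [(samples[i] + samples[i + 1]) / 2 for i in range(0, len(samples), 2)]

def upscale(samples):    # duplicate samples back to full size
    return [s for s in samples for _ in range(2)]

original = [10, 12, 13, 15, 40, 42, 41, 39]  # one "line" of pixels
base = downscale(original)                   # what the base codec carries
residual = [o - u for o, u in zip(original, upscale(base))]  # enhancement layer

# Decoder side: upscale the base and add the residual to recover the original.
reconstructed = [u + r for u, r in zip(upscale(base), residual)]
assert reconstructed == original
print(base, residual)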

The purpose of P+ is to deliver better video quality and resolution at the same bitrates, in order to improve QoE and customer satisfaction, increasing viewing times for ad monetisation and reducing subscriber churn. P+ enables stable video delivery at lower bitrates, still reaching viewers over very limited networks. Storage and CDN delivery costs can be reduced as well - applying P+ to a codec reduces its complexity and can improve encoding density.

Primestream Elastic Data Viewer
Virtualisation - Media Processing as a Service

Primestream’s comprehensive MAM + AI system aims to optimise media creativity and make working with media as straightforward as using one’s native language. To achieve this, the Elastic Data Viewer is a centralised, web-accessible media asset management platform for video and media assets, allowing organisations to keep track of large libraries of media content generated by multiple departments at dispersed locations.

The interface is designed with AI data in mind, to make sense of huge amounts of information. Its Data Viewers review and modify AI metadata using an integrated AI workflow that supports AI classification results including face, topic and keyword recognition. A workflow automation engine simplifies and automates repetitive tasks, including AI processing, while keeping users notified via email on the progress of workflows in use. For example, it uses an automated multi-language transcription workflow that removes the need to send multiple copies of the same file to extract different languages within media.

Blackmagic DaVinci Resolve Studio 16
Post Production - Editing, Finishing, Colour

DaVinci Resolve Studio combines professional offline and online editing, colour correction, audio post, multi-user collaboration and visual effects together in one application. DaVinci Resolve 16 is a major update that adds a new cut page for editors working on fast turnaround work such as television commercials and news cutting, for broadcast or YouTube.

Its dedicated interface and tools help import, edit, trim and add transitions and titles at speed, and automatically match colour and mix audio. The regular edit page is still available so that users can switch between edit and cut pages to change editing style in the middle of a job. Two new tools that are drawn from the past are the source tape mode and an A/B trim tool.

The DaVinci Neural Engine uses GPU-accelerated deep neural networks and machine learning to power new functionality such as speed warp motion estimation for retiming, up-scaling footage, auto colour and colour matching. The engine is built with simple tools to solve complex, repetitive and time consuming problems, such as using facial recognition to automatically sort and organise clips into bins based on the people in shots.

EditShare EFS 2020
Post Production - Storage

The EFS 2020 file system and management console for creative collaboration improves speed and security, and helps manage high-bandwidth, multi-stream 4K and other high-res workflows. Its fast new collaborative storage space brings an increase in throughput performance of up to 20%, within a single namespace. The system has a programmable REST API used to automate mundane administrative tasks such as account and project creation and user privileges, and no single point of failure exists.

EFS 2020 also automates tiered storage migration from on-line to nearline, moving content between parking storage and high speed media storage. EditShare has written its own efficient drivers for EFS, and manages the entire EFS 2020 technology stack, aiming for stability, prevention of IP packet loss through standard protocols, and faster video file transfers.

A new security and accountability measure is EFS 2020 File Auditing, a real-time purpose-built content auditing platform for production workflows that tracks all content movement on the server by generating an activity report with a detailed trail back to the instigator.

Singular.live Interactive & Adaptive Digital Overlays
Broadcast – Motion Graphics

Singular.live is a cloud-based platform for adding animated digital overlays to livestreams, with an authoring environment, built-in control applications, integration with streaming software and devices, and an open API and SDKs for additional integration and customisation. Singular is free to use and runs via any computer with an internet browser. It can also be used in SDI or NDI environments to accommodate traditional and new broadcast workflows.

Singular’s overlays are HTML-based and, with Singular’s local rendering on the viewing device, each viewer has a personalised experience with targeted ads, local date and time, custom colour themes, and even unique overlays or information. While capitalising on the power of the internet to customise the viewing experience, local rendering also precludes the need for render engines or licenses.

Singular supports interactive overlays, allowing viewers to participate directly with the stream. Users can vote on polls and see live updating results or click to see specific stats and overlays during a sports stream. Overlays can be authored in adaptive mode to automatically resize based on viewing device.

To see the SPECIAL MERIT AWARD winners, find them listed here.