Finding the Sweet Spot for Virtualized Playout
Virtualising playout operations is now considered a primary means of adding agility and scalability to broadcast and video distribution. As part of IP migration, relying more heavily on cloud resources generally makes it easier to adjust capacity and efficiency over time, satisfy consumers and respond faster to changes in the market.
Nevertheless, adding a virtualised playout infrastructure to a broadcast facility introduces extra complexity and specific new requirements into its operations. According to Ian Cockett, CTO at Pebble Beach Systems, which specialises in broadcast automation, virtualisation and channel-in-a-box systems, “Implementing a full-scale virtualised platform is likely to change your organization’s business model and its relationships with application providers and integrators.
“Beyond making the associated changes to processes and equipment, you will need wide-ranging in-house expertise because, from early on in the process, you will be building relationships with an entirely different set of suppliers. From start to finish, it’s important to understand all facets of the virtual, cloud environment.”
The Need to Virtualize
Ian has implemented numerous virtualised playout systems that are in use today and, with his team, has gained insight into the main requirements and potential pitfalls organisations encounter when considering a move to virtualised infrastructure and cloud-based channel playout. A number of factors must be balanced, and continuously re-balanced.
“You need to be certain of what you are trying to achieve through virtualisation. Diversity and utilisation, mobility and security are the main objectives of cloud-based operations. But a virtualised playout infrastructure comes with an overhead – ultimately, it will deliver flexibility and agility, but the path to those benefits may not be short, straightforward or inexpensive,” he said.
“However, a virtualised playout environment does enable you to isolate the application from the generic systems in the datacentre. Drives and CPUs can theoretically be upgraded simply, by non-broadcast specialists. Operatives will have fewer ‘boxes’ to maintain because the same systems will most likely be in use across multiple platforms. Virtualisation makes it simple to clone and redeploy applications within a common environment across the whole facility, making expanding a system or migrating to new generation hardware a much simpler exercise.”
Understanding Hypervisors
In other words, the process is not necessarily about being cheaper, or getting more out of the resources, but about flexibility, portability and ease of maintenance. To those ends, hypervisors play a central role in virtualisation because they add a layer of management and control over the data centre and enterprise. Staff members not only need to understand how their hypervisor works, but also how to operate supporting functionality such as virtual machine configuration, migration and snapshots.
A hypervisor is a software layer that abstracts or isolates operating systems and applications from the underlying computer hardware, allowing the host machine to operate one or more virtual machines (VMs) as guests. Guest VMs share the system's processor cycles, memory space, network bandwidth and other physical compute resources, while the hypervisor acts as a virtual machine monitor, allocating those resources among them.
Being able to run multiple guest VMs can improve the utilisation and diversity of the underlying hardware by a huge margin. A physical server might only host one operating system and application, but a virtualised server can host multiple VM instances, each running an independent operating system and application on the same physical system, using much more of the system's available compute resources.
The abstracted VM’s independence from the underlying hardware makes it mobile as well. Although traditional software may need to be closely tied to the underlying server hardware, VMs may be moved or migrated between any local or remote virtualised servers that have enough computing resources, with almost no disruption.
VMs remain logically isolated from each other even though they run on the same physical machine. Consequently, an error, crash or malware attack on one VM does not affect the other VMs on the same or other machines. Also, instead of stopping operations to take backups, snapshot tools can capture the contents of each VM's memory space as a point-in-time image and save it to disk very quickly.
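As a rough sketch of what migration and snapshot operations look like in practice, the fragment below uses the libvirt Python bindings against KVM hosts. This is one possible toolset rather than anything specific to the systems described here, and the host URIs and VM name are illustrative assumptions only.

```python
# Sketch only: snapshot a playout VM, then live-migrate it to another host,
# using the libvirt Python bindings. URIs and the VM name are illustrative.
import libvirt

src = libvirt.open("qemu+ssh://host-a/system")   # source hypervisor
dst = libvirt.open("qemu+ssh://host-b/system")   # destination hypervisor

dom = src.lookupByName("playout-ch1")            # hypothetical channel VM

# Capture a point-in-time snapshot before touching the running guest.
snapshot_xml = "<domainsnapshot><name>pre-migration</name></domainsnapshot>"
dom.snapshotCreateXML(snapshot_xml, 0)

# Live-migrate the running guest to the other host with minimal disruption.
dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)
```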
Proof-of-Concept
It’s important to tell your vendor everything that your channels need to support your transport streams, because whatever functionality the vendor incorporates to accommodate the streams will affect the host environment and overall performance. Examples of the relevant requirements are the file formats, audio tracks, graphics and so on that need to be built in, and whether you are going to use GPU- or CPU-based encoding. The vendor also needs to know the end destination for the transport streams, the bitrate you will use for final delivery, and whether you will use MPEG-2 or H.264.
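One practical way to capture those requirements for the vendor conversation is as a structured, per-channel checklist. The sketch below is a hypothetical example in Python; the field names and values are assumptions for illustration, not any vendor's actual schema.

```python
# Hypothetical per-channel requirements checklist to share with a vendor.
# Field names and example values are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class ChannelRequirements:
    name: str
    file_formats: list          # wrappers/codecs the channel must play out
    audio_tracks: int           # audio tracks to decode and map
    graphics_layers: int        # simultaneous graphics layers to render
    gpu_encoding: bool          # GPU-based (True) or CPU-based (False) encoding
    output_codec: str           # "MPEG-2" or "H.264"
    output_bitrate_mbps: float  # final delivery bitrate
    destination: str            # where the transport stream is sent

ch1 = ChannelRequirements(
    name="Channel 1",
    file_formats=["MXF OP1a / XDCAM HD", "QuickTime / ProRes"],
    audio_tracks=8,
    graphics_layers=2,
    gpu_encoding=True,
    output_codec="H.264",
    output_bitrate_mbps=12.0,
    destination="udp://239.10.0.1:5000",  # illustrative multicast address
)
```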
Once you have chosen which virtualised platform to use, and depending on its type and the hypervisor in use, you will need to understand how you are going to measure performance within your environment. The playout software may perform differently with different hypervisors, for instance. While a vendor might be willing to provide a small-scale virtualised system for a proof-of-concept, a full-scale production environment is different: vendors may be less willing to take responsibility for the design and control of your infrastructure.
In any case, a more meaningful proof-of-concept may be one that shows whether your organisation is able to design, build, manage and maintain a scalable virtualised infrastructure, and whether that system will meet your ongoing demands. Remember, you won’t just need to measure the behaviour of the playout software application. You also need to monitor the behaviour of the entire infrastructure. For example, the sleep/wake time of the processors under certain hypervisors may not be good enough for real-time playout. Latencies and behaviour will vary depending on the hypervisor you test.
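As an illustration of measuring the infrastructure rather than only the application, the sketch below times how late a guest wakes up against a nominal 25 fps frame period. The frame rate and the 5 ms threshold are assumptions chosen for the example; a real test would repeat this across hypervisors and power-management settings.

```python
# Sketch: measure wake-up latency against a nominal 40 ms (25 fps) frame
# period inside a guest VM. Frame rate and threshold are illustrative.
import time

FRAME_PERIOD = 0.040   # 25 fps
SAMPLES = 250

worst = 0.0
deadline = time.perf_counter() + FRAME_PERIOD
for _ in range(SAMPLES):
    # Sleep until the next frame boundary, then record how late we woke up.
    time.sleep(max(0.0, deadline - time.perf_counter()))
    worst = max(worst, time.perf_counter() - deadline)
    deadline += FRAME_PERIOD

print(f"Worst wake-up latency over {SAMPLES} frames: {worst * 1000:.2f} ms")
if worst > 0.005:   # arbitrary 5 ms threshold for illustration
    print("Processor sleep/wake behaviour may be unsuitable for real-time playout")
```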
Diagnostics for IP Systems
Operational monitoring once a system is in place is important, especially in public cloud scenarios, but Ian Cockett advises that diagnostics may be more challenging to set up for IP systems. You’ll need to investigate what tools are at your disposal, and make sure you have staff who can interpret the results. It is worth finding out where your IP streams actually go, and how closely you can test them.
As well as monitoring latencies and considering how and where your operators will monitor the playout, other diagnostic considerations are any control latencies that will have to be added, failure scenarios and failover contingencies, and who or what will be switching IP streams. If a VM fails and you lose the transport stream altogether, you need to check whether your downstream distribution can deal with no stream at all, or whether it needs at least a null stream to maintain the bitrate.
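To illustrate the kind of check involved, the sketch below joins a multicast group and looks for MPEG-TS sync bytes, distinguishing a healthy stream, a stream that has lost sync, and no stream at all. The multicast address, port and timeout are illustrative assumptions rather than values from any particular facility.

```python
# Sketch: probe a UDP multicast transport stream and distinguish "healthy",
# "out of sync" and "no stream at all". Address, port and timeout are
# illustrative assumptions.
import socket
import struct

GROUP, PORT = "239.10.0.1", 5000   # hypothetical multicast destination
TS_PACKET, SYNC_BYTE = 188, 0x47

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
sock.settimeout(2.0)

try:
    data, _ = sock.recvfrom(7 * TS_PACKET)   # typical 7-packet UDP payload
except socket.timeout:
    # The failure scenario downstream must tolerate: no stream at all.
    print("No transport stream received within 2 s")
else:
    in_sync = all(data[i] == SYNC_BYTE for i in range(0, len(data), TS_PACKET))
    print("Stream present,", "in sync" if in_sync else "sync bytes missing")
```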
Over-provisioning
You should also make sure that your chosen hypervisor supports the disk drives and storage you want to use with your COTS hardware, and check the effect that over-provisioning has on its performance. Over-provisioning, that is, adding more hardware to your virtual environment or buying more cloud instances, is the traditional response to uncertainty about what an application is capable of or its exact requirements. But in certain cases it will rapidly increase latency, rendering the architecture unsuitable for playout.
A mix of workloads of different formats and sizes can make servers appear to be full, and that is another reason why having diagnostics in place is important. Careful analysis may show that you can go further with your existing server capacity without increasing risk, simply by organising workloads so that they don't collide.
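One simple way to reason about organising workloads so they don't collide is a packing exercise over measured headroom. The sketch below is a first-fit placement of channels onto hosts by free CPU cores; the server capacities and per-channel loads are invented purely for illustration.

```python
# Sketch: first-fit placement of channel workloads onto existing servers by
# free CPU cores. Capacities and per-channel loads are invented figures.
servers = {"srv-1": 16.0, "srv-2": 16.0, "srv-3": 16.0}    # free cores per host
channels = {"HD ch A": 6.0, "HD ch B": 6.0, "SD ch C": 2.5,
            "SD ch D": 2.5, "4K ch E": 12.0}               # cores each needs

placement = {}
for name, need in sorted(channels.items(), key=lambda kv: -kv[1]):
    # Place the heaviest workloads first so mixed sizes don't strand capacity.
    for host, free in servers.items():
        if free >= need:
            placement[name] = host
            servers[host] = free - need
            break
    else:
        placement[name] = "UNPLACED - add capacity or re-balance"

for name, host in placement.items():
    print(f"{name} -> {host}")
```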
Because the transport streams your playout infrastructure generates will go through the enterprise network switches, it isn't surprising that they can overload the network bandwidth and affect on-air performance, despite the fact that your playout software application may be running on a completely separate network.
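A back-of-the-envelope way to assess that risk is to compare the summed transport-stream bitrates against the capacity of the uplink they share. All figures in the sketch below are invented for illustration.

```python
# Sketch: compare aggregate transport-stream bitrate against a shared uplink.
# Channel counts, bitrates and the headroom margin are invented figures.
channel_bitrates_mbps = [12.0] * 50 + [8.0] * 30      # 50 HD + 30 SD channels
uplink_capacity_mbps = 1000.0                         # 1 GbE enterprise uplink
safety_margin = 0.7                                   # keep 30% headroom

total = sum(channel_bitrates_mbps)
print(f"Aggregate playout traffic: {total:.0f} Mb/s "
      f"of {uplink_capacity_mbps:.0f} Mb/s uplink")
if total > uplink_capacity_mbps * safety_margin:
    print("Streams risk overloading the enterprise switches - segregate the traffic")
```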
Also, multiple virtual machines on the same physical host can impact each other’s performance, in spite of their logical isolation. Should you find the playout application is sharing a hypervisor with your MAM system or an email server in the data centre, you will need to closely manage permissions for deploying channels.

www.pebble.tv