Standards Conversion in the Cloud

To take advantage of new distribution opportunities and fully monetize their content libraries, media organisations need to be able to format content to reach the widest possible audience.

Converting source material, or even the final master, to comply with geographic, platform or device requirements comes under the umbrella term of standards conversion. The most important standards being converted concern picture (or frame) size, dynamic range, colour space and frame rate.

Conversion of the frame rate is often a neglected consideration, but it does govern how the viewer will perceive motion. The reality is that good frame rate conversion will not be noticed but bad frame rate conversion can ruin a programme.

Until recently, content production, post and distribution teams only needed to worry about a handful of frame rate standards: traditionally 23.98, 24, 25, 29.97, 50, 59.94 and 60 Hz. Even this collection presents a challenge, whether in handling mixed content within a project or simply in formatting the end product for delivery via cinema, linear TV, VOD/OTT, web and a myriad of mobile devices.

The introduction of high frame rate content (>60 Hz) and even user-generated content (UGC) has in many ways made the situation even more complicated. Today, standards conversion is no longer limited to bridging the differences between film and TV frame rates.

As you might expect, more complex conversions require more sophisticated techniques to preserve both picture quality and smooth motion playback. In the context of frame rate conversion, complexity is largely determined by the numerical relationship between the source and output frame rates, as well as the inherent motion within a sequence. For these more demanding cases, motion compensated interpolation is widely regarded as providing the best results.
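To make the point about numerical relationships concrete, the minimal sketch below (Python, purely illustrative; the choice of 25 to 50 fps and 59.94 to 50 fps as examples, and the function names, are our own assumptions) computes where each output frame falls relative to the source frames. A 25 to 50 fps conversion is a clean 1:2 ratio, so every output frame either coincides with a source frame or sits exactly half way between two of them, whereas for 59.94 to 50 fps almost every output frame lands at a different fractional position, which is exactly where interpolation quality starts to matter.

```python
from fractions import Fraction

def output_frame_phases(src_fps: Fraction, dst_fps: Fraction, n_frames: int = 6):
    """For each output frame, report where it falls on the source timeline:
    a phase of 0.0 means it coincides with a source frame; anything else
    means it sits between two source frames and has to be synthesised."""
    rows = []
    for i in range(n_frames):
        t = Fraction(i) / dst_fps            # output frame time in seconds
        src_index = t * src_fps              # same instant measured in source frames
        phase = src_index - int(src_index)   # fractional part = interpolation phase
        rows.append((i, float(t), float(phase)))
    return rows

# Simple 1:2 relationship: phases alternate between 0.0 and 0.5.
for i, t, phase in output_frame_phases(Fraction(25), Fraction(50)):
    print(f"25 -> 50    frame {i}: t={t:.4f}s  phase={phase:.3f}")

# Awkward relationship: 59.94 fps is really 60000/1001, so almost no output
# frame coincides with a source frame and drop/duplicate produces judder.
for i, t, phase in output_frame_phases(Fraction(60000, 1001), Fraction(50)):
    print(f"59.94 -> 50 frame {i}: t={t:.4f}s  phase={phase:.3f}")
```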

Why motion compensation matters

The only way to ensure smooth motion and a crisp, clear picture through frame rate conversion is to analyze adjacent frames, detect and calculate the movement of objects within the scene, and create new frames with appropriate object placement and display timing. This is how motion compensation places a moving object in the right place, at the right time, and on the right path while converting content from one frame rate to another. This approach overcomes the temporal and spatial differences by generating new frames — still images — that very closely represent what the viewer would see if the content had been captured at that rate.

Figure 1 illustrates motion compensated frame rate conversion from 50 frames per second (fps) to 60 fps in simple terms. The top image represents five frames (0.1 sec) of 50 fps video, and the moving football appears in each of these five frames. As the bottom image demonstrates, conversion to 60 fps results in six frames, or six appearances of the ball within the same period.

So somehow, we have gained a frame? If we achieved this by duplicating any single frame, the position of the ball would not move in the new frame. This, combined with the shorter timespan between frames, would result in jerky, uneven motion.

Motion compensation makes it possible to insert new frames in which the ball appears in a predicted location at the time when the new frame occurs. Original frames can also be preserved where they fall at the same temporal position in both standards, or at scene changes.

Figure 1: Motion Compensated Frame Rate Conversion 50 to 60 fps
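The following sketch (Python, purely illustrative: the ball positions, the linear motion model and the function names are assumptions for demonstration, not a description of how FrameFormer itself works) expresses the idea behind Figure 1 in code. Each of the six output frame times is mapped back onto the 50 fps timeline, original frames are preserved where the two timelines coincide, and elsewhere the ball is placed on its motion path at the new instant rather than copied from the nearest source frame.

```python
# A toy illustration of the idea behind Figure 1. A real motion compensated
# converter estimates motion vectors from the pictures themselves; here the
# "motion" is simply the known x-position of the ball in each source frame.

SRC_FPS, DST_FPS = 50, 60

# Hypothetical x-positions (in pixels) of the ball in five consecutive 50 fps frames.
src_positions = [100, 120, 140, 160, 180]

def ball_position_at(t: float) -> float:
    """Place the ball on its motion path at output time t (seconds) by
    interpolating between the two source frames that bracket t."""
    s = t * SRC_FPS                      # the same instant measured in source frames
    i = int(s)
    frac = s - i
    if frac == 0 or i + 1 >= len(src_positions):
        # Output frame coincides with a source frame (or we are at the end
        # of the clip), so the original frame is preserved unchanged.
        return src_positions[min(i, len(src_positions) - 1)]
    return (1 - frac) * src_positions[i] + frac * src_positions[i + 1]

# Six output frames span the same 0.1 s that five source frames cover.
for n in range(6):
    t = n / DST_FPS
    print(f"output frame {n}: t={t:.4f}s  ball_x={ball_position_at(t):.1f}")
```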

Conversion in the cloud

The availability of cloud services has released broadcasters from the shackles of inflexible and costly capex-based infrastructures. They can now choose to host their own operations in the cloud, or to access third-party media services.

Most media organisations are doing both, as costs can be better managed by paying for what you actually use, and only when you need to use it. There are also economies of scale to be had, as cloud providers offer attractive pricing models for high-demand consumers.

Until recently, high-quality motion compensated frame rate conversion of file-based or live content was one of the few remaining gaps in cloud services offerings. The availability of software-based conversion using CPU-only resources has changed that. With access to virtually unlimited CPU processing power, media organisations can quickly and cost-effectively scale content processing.

InSync takes a partner approach to cloud services. Our mission is to make some or all of our unique conversion capabilities available within leading cloud services platforms. This is where our flagship FrameFormer software engine comes in!

Real-world implementations

InSync FrameFormer provides the best in motion compensated frame rate conversion. It can be easily integrated via its open API into any part of the media production chain, but most commonly sits within file-based transcoding platforms. FrameFormer uses only CPU resources and so is ideally suited to cloud deployment, with the hyper-scalability benefits explained earlier.

InSync has engaged with prominent trailblazers in the media cloud services space, and FrameFormer conversion is currently available from three partners: AWS Elemental, Dalet and Hiscale.

AWS Elemental MediaConvert

The most recent launch made FrameFormer available as a premium-tier add-on service via AWS Elemental MediaConvert. MediaConvert is a file-based video transcoding service with broadcast-grade features that make it easy for users to create video-on-demand (VOD) content for broadcast and multiscreen delivery at scale.

https://aws.amazon.com/mediaconvert/
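As a rough sketch of what this looks like in practice, the fragment below submits a MediaConvert job via boto3 and requests FrameFormer through the FramerateConversionAlgorithm setting in the output codec settings. The field names follow the MediaConvert job settings schema as we understand it, but the role ARN, bucket paths, region and bitrate are placeholders, and the exact schema and premium-tier pricing should be checked against the current AWS documentation.

```python
import boto3

# MediaConvert uses an account-specific endpoint, discovered at run time.
meta = boto3.client("mediaconvert", region_name="us-east-1")
endpoint = meta.describe_endpoints()["Endpoints"][0]["Url"]
mc = boto3.client("mediaconvert", region_name="us-east-1", endpoint_url=endpoint)

# Placeholder role, buckets and bitrate; the relevant part is the
# FramerateControl / FramerateConversionAlgorithm block in H264Settings.
job = mc.create_job(
    Role="arn:aws:iam::123456789012:role/MediaConvertRole",
    Settings={
        "Inputs": [{"FileInput": "s3://example-bucket/source-25fps.mxf"}],
        "OutputGroups": [{
            "Name": "File Group",
            "OutputGroupSettings": {
                "Type": "FILE_GROUP_SETTINGS",
                "FileGroupSettings": {"Destination": "s3://example-bucket/out/"},
            },
            "Outputs": [{
                "ContainerSettings": {"Container": "MP4"},
                "VideoDescription": {
                    "CodecSettings": {
                        "Codec": "H_264",
                        "H264Settings": {
                            # Convert to 59.94 fps using the FrameFormer algorithm.
                            "FramerateControl": "SPECIFIED",
                            "FramerateConversionAlgorithm": "FRAMEFORMER",
                            "FramerateNumerator": 60000,
                            "FramerateDenominator": 1001,
                            "RateControlMode": "CBR",
                            "Bitrate": 8000000,
                        },
                    },
                },
            }],
        }],
    },
)
print(job["Job"]["Id"])
```

Setting FramerateControl to SPECIFIED tells MediaConvert to convert to the stated output rate rather than follow the source frame rate; the numerator/denominator pair here describes 59.94 fps and is only an example.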

Hiscale FLICS

Hiscale GmbH, a leading provider of hyper-scalable and cost-efficient video processing, offer FrameFormer conversion via their FLICS online SaaS as well as their hybrid cloud clustered solutions. Hiscale are also in the process of onboarding InSync’s new live video frame rate conversion capability, and this service will be available in early 2021.

“With InSync’s expertise in motion compensated conversions, we are able to offer a huge benefit to cloud encoding workflows for our customers, who were not able to match the quality of traditional solutions with the flexibility and scalability of cloud transcoding before,” says Christoph Jurkuhn, Head of Product and Presales at Hiscale.

https://hiscale.com/

Dalet AmberFin

Dalet AmberFin is an established transcoder and workflow engine which has recently also been launched as an on-demand cloud service. FrameFormer is available via this new service as well as in on-premise and VPC configurations.

“Dalet is pleased to offer the high quality of InSync’s motion-compensated, CPU-only standards conversion engine, FrameFormer, within the AmberFin Cloud Transcoder Service, natively in our multi-tenant SaaS environment, as well as to our customers utilizing AmberFin in a VPC or on-premise,” says Eric Carson, product manager for AmberFin.

https://www.dalet.com/platforms/amberfin

What next?

InSync will expand its base of hosting partners for FrameFormer file-based and live stream conversion services. Following the same integration model and applying the same high standards, InSync will soon also introduce a new conversion engine named PixFormer, focused on providing the industry’s most advanced SDR and HDR conversion tools.