
Quality of Experience (QoE) in Video Technology [2022 Guide]

Welcome to our comprehensive guide to Quality of Experience in video technology.
If you’re looking for the following:

  • Why QoE in Video Matters Today
  • QoE and Revenue Loss Statistics
  • Video Encoders and QoE
  • Quality Assessment Scoring Methods
  • Encoding Choices and QoE

Then you are in the right place.
Let’s get started!

Why Quality of Experience in Video Matters Today

Content owners are investing large sums of money in their premium content, so it is more important than ever that they also invest in maximizing quality upon delivery.
Content needs to be prepared and streamed in impeccable quality in order to satisfy viewer expectations, and those expectations are getting increasingly demanding. Recent events have shown us that viewers are not shy to air their grievances when things don’t look or stream right.
From major sporting events to big-budget series, more of us expect to experience the fully immersive deep colors and vivid images promised by HDR and the 4k experience. When it isn’t forthcoming, users are quick to voice their displeasure.
Game of Thrones fans may recall the final season’s quality fiascos. Social media lit up with posts regarding the HBO streaming experience. Rather than mentions of epic battle scenes or plot twists, viewers took to social media to complain of difficult-to-see scenes, washed-out colors, and image artifacts — especially apparent on large television screens.

The key take-away: Great content is no longer enough to keep audiences engaged!


QoE and Revenue Loss Statistics

Industry studies show that as many as 33% of users leave a stream due to poor streaming quality. Verizon estimates that OTT video services delivering average or poor-quality experiences see as much as a 25% loss in revenue.
With the explosion of VOD platforms competing for our attention, it is vital that quality is not a reason for your audience to churn or tune out.

So, what’s changed?

Only a decade ago, viewing high-definition video online was a luxury; now high video quality has become a commodity and continues to evolve.
High Definition and 4K resolutions are currently the industry standards (with 8K becoming more widespread).
In addition to higher pixel counts, the quality of the pixels themselves has consistently improved over the years. Today, pixels carry more bits, which translates to more vibrant colors and finer detail, enabling HDR technologies.
Audiences agree that these technologies offer a superior viewing experience over past media formats. However, implementing cutting-edge media technology is not always seamless. 
Online streaming has proved to be one of the most challenging applications for new media formats.
Some of the challenges that content providers face include:  

Lack of standardization

Device targeting with a given codec is necessary to optimize quality, and it becomes increasingly difficult with fragmented software and hardware codec support. Encoders must prepare content in a number of HDR formats (HLG for broadcast, HDR10 for streaming, Dolby Vision for streaming, etc.) to reach viewers across devices.

Authentic 4K/8K experiences start with video production 

Unless the entire upstream pipeline consists of a native 4K environment and downstream devices support 4K, image scaling is required.

Last-mile bandwidth limitations

Even with faster home and mobile networks and more efficient codecs, compressing 4K, 8K and HDR video into a size which can be efficiently streamed at the last mile is difficult without compromising quality. 
Delivering a high-definition stream free of network interruptions (rebuffering) is no longer enough to satisfy viewers and subscribers. Today’s viewers expect immersive video experiences.
They want to take advantage of the rich features their screens and devices support — 4K, 8K, HDR, and next-generation audio. 

Video Encoders' Responsibility

Modern-day Adaptive Bitrate (ABR) video players are resilient to bandwidth fluctuations, but video encoders must be able to produce the clearest possible images at the most efficient bitrates to satisfy Quality of Experience (QoE) expectations.
Whether a talking head in a newsroom, an action-packed war scene, or a close-up of the game-winning goal, encoders must be flexible enough to maintain pristine quality in all content scenarios.
But exactly how much does quality matter and how should content owners identify and address quality problems?
Many still believe the key to high-quality video is higher bitrates.
That approach neither leverages the power of modern codecs nor considers the limited bandwidth many audiences face (nor is it cost-effective).
Whether through increased delivery costs or quality-related churn, wrongly equating bitrate with quality will have a negative impact on the bottom line.
The most effective approach to optimizing encoding profiles will ensure visual perceptual quality is not compromised (improved QoE), bitrates are reduced for better delivery and availability performance (improved Quality of Service), and storage and delivery costs go down (improved economics).

(State-of-the-Art) Quality Assessment Scoring Methods

A viewer’s rating of quality is important for experience metrics, however, it’s not a reliable measurement of quality for production and distribution stakeholders such as service providers or network operators.
A standard consumer can provide valuable insight into subjective quality assessment, but subjective measurements often lack scientific objectivity and scalability.
For this reason we focus on repeatable and objective quality measurement methods.
Although there are many measurements, we have identified three primary methods, plus an extension, that are ideal for objective quality assessment:

Video Multi-Method Assessment Fusion (VMAF):

One of the latest metrics adopted by the streaming community is Netflix's Video Multi-method Assessment Fusion (VMAF). It predicts subjective video quality based on a reference and a distorted video sequence.
The metric can be used to evaluate the quality of different video codecs, encoders, encoding settings, or transmission variants.
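To make this concrete, here is a minimal sketch of how VMAF is often computed in practice. It assumes an ffmpeg build compiled with libvmaf (not something stated in this post), and the file names are placeholders; both inputs should share the same resolution and frame rate.

import subprocess

# Compare a distorted (encoded) rendition against its pristine reference using
# ffmpeg's libvmaf filter; per-frame and aggregate scores are written to vmaf.json.
cmd = [
    "ffmpeg",
    "-i", "encoded.mp4",      # distorted/encoded rendition (placeholder)
    "-i", "reference.mp4",    # pristine reference source (placeholder)
    "-lavfi", "libvmaf=log_path=vmaf.json:log_fmt=json",
    "-f", "null", "-",        # discard the decoded output; we only want the scores
]
subprocess.run(cmd, check=True)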

Structural Similarity (SSIM):

The structural similarity index is a method for predicting the perceived quality of digital television and cinematic pictures, as well as other kinds of digital images and videos.
As its name indicates: SSIM is used for measuring the similarity between two images.
The SSIM index is a full reference metric; in other words, the measurement or prediction of image quality is based on an initial uncompressed or distortion-free image as a reference.
SSIM is designed to improve on traditional methods such as peak signal-to-noise ratio (PSNR) and mean squared error (MSE).
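For illustration only, a small sketch of computing SSIM between two still frames with scikit-image; the file names are placeholders, and this compares individual images rather than full video sequences.

from skimage import io
from skimage.metrics import structural_similarity

# Load the pristine reference frame and the corresponding decoded/compressed frame
# (placeholder file names), converted to grayscale for a single-channel comparison.
ref = io.imread("reference_frame.png", as_gray=True)
dist = io.imread("encoded_frame.png", as_gray=True)

# SSIM is a full-reference metric: it needs the undistorted source as the baseline.
score = structural_similarity(ref, dist, data_range=dist.max() - dist.min())
print(f"SSIM: {score:.4f}")  # 1.0 means the two frames are identical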

SSIMPLUS:

The SSIMPLUS Viewer Score is a family of algorithms invented by the team that built SSIM, based on two decades of research and development since SSIM's invention, and is a commercially available tool.
SSIMPLUS measures video experience from content creation to consumption in an apples-to-apples fashion and assigns scores between 0–100, linearly matched to human subjective ratings.
SSIMPLUS also adapts the assigned scores to the intended viewing device and to viewer types, such as Studio, Expert, and Typical viewers, thereby allowing video to be compared across different resolutions, frame rates, dynamic ranges, and contents.
According to its authors, SSIMPLUS achieves higher accuracy, thanks to its ability to assess all commonly occurring impairments, as well as higher speed than other image and video quality metrics. The Viewer Score is validated using publicly available and commonly used subject-rated datasets. A visual comparison with other objective quality assessment approaches is available here.

Peak Signal-to-Noise Ratio (PSNR):

The Peak Signal-to-Noise Ratio is a classic full-reference fidelity metric: the ratio between the maximum possible power of a signal and the power of the distortion (noise) introduced by processing it, usually expressed in decibels.
PSNR is most commonly used to measure the quality of reconstruction of lossy compression codecs (e.g., for image compression).
When comparing codecs, PSNR serves as an approximation of the human perception of reconstruction quality.
Generally speaking, a higher PSNR indicates a higher-quality reconstruction, although in some cases it may not.
However, in cases where the codecs and/or content are different, the validity of this metric can vary greatly; therefore, you should be extremely careful when comparing results.
PSNR is a well-established method, especially in research, and is also used within the industry when measuring at scale, mainly because of its simplicity.
However, it is also known not to correlate well with human perception. Other metrics, such as SSIM or VMAF, correlate better with subjective scores but can be quite expensive and time-consuming to compute.
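To make the definition concrete, a minimal sketch of PSNR computed from the mean squared error between a reference frame and its reconstruction (8-bit pixel values assumed):

import numpy as np

def psnr(reference: np.ndarray, reconstructed: np.ndarray, max_value: float = 255.0) -> float:
    # PSNR = 10 * log10(MAX^2 / MSE), expressed in decibels.
    mse = np.mean((reference.astype(np.float64) - reconstructed.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # the two frames are identical
    return 10.0 * np.log10((max_value ** 2) / mse)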
In the end, it is up to the individual encoding professional to decide which method to use when testing video quality.
There are additional quality metric scores, such as MOS and DMOS, and the industry is still in the process of determining the gold standard of objective quality measurement.

Encoding Choices

With objective quality assessment tools in hand, it is now easier for you and encoding professionals to evaluate and select which encoding technology will best suit your content delivery (and quality) needs.
So which technologies currently exist in the market, and what are the ways to encode (especially for quality)?

Codec Support Selection – it’s a multi-codec world! 


Video Codecs:

H.264/AVC

The industry standard for video compression – designed and maintained by the Moving Picture Experts Group (MPEG) since 2003. According to our 2019 Video Developer Report, it is currently used by over 90% of video developers.

H.265/HEVC

MPEG's successor to H.264/AVC – designed and maintained since 2013 – offers up to 50% higher compression efficiency than its predecessor.

VP9

Google’s royalty-free video compression standard – designed and maintained since 2013. Mostly used on YouTube, but otherwise does not offer full device reach. Great for saving on CDN/Bandwidth costs!
AV1

Another open-source codec, designed by the Alliance for Open Media (AOMedia), a consortium of video tech giants including Google, Facebook, Netflix, Amazon, Microsoft, and Bitmovin.
AV1 offers up to 70% better compression rates than H.264 but is currently limited in browser and device support. However, this is slated to change within the coming two years.

Next generation codecs:

VVC

Versatile Video Coding (VVC), also known as H.266, ISO/IEC 23090-3 and MPEG-I Part 3, is a video compression standard finalized on 6 July 2020 by the Joint Video Experts Team (JVET), a partnership of the VCEG working group of ITU-T Study Group 16 and the MPEG working group of ISO/IEC JTC 1/SC 29.

EVC

MPEG-5 Essential Video Coding (EVC) is a video compression standard completed in April 2020 by decision of MPEG Working Group 11 at its 130th meeting. The standard consists of a royalty-free subset and individually switchable enhancements.

LCEVC 

Low Complexity Enhancement Video Coding (LCEVC) is an ISO/IEC video coding standard developed by the Moving Picture Experts Group (MPEG) under the project name MPEG-5 Part 2 LCEVC.

Benefits of single codec usage vs Multi-codec support

Even though H.264 is ubiquitous and widely supported at the hardware level, it is much less efficient than next-generation codecs in terms of compression rate.
By encoding your videos using a multi-codec approach you can aim to double the quality while still reducing your bandwidth consumption without compromising device reach.
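As a simple illustration of that multi-codec approach (assuming an ffmpeg build with libx264 and libvpx-vp9; file names and bitrates below are placeholders, not recommendations from this post), the same mezzanine source can be encoded into both an H.264 and a VP9 rendition, so newer devices receive the more efficient codec while older ones fall back to H.264:

import subprocess

SOURCE = "mezzanine.mov"  # placeholder source file

# H.264 rendition for maximum device compatibility
subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264", "-b:v", "4000k",
                "-c:a", "aac", "h264_1080p.mp4"], check=True)

# VP9 rendition for devices and browsers that support it, at a lower bitrate for similar quality
subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libvpx-vp9", "-b:v", "2500k",
                "-c:a", "libopus", "vp9_1080p.webm"], check=True)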

How 3-Pass Encoding Works – 3-Pass Encoding Framework (diagram)

Now that you’ve selected which codecs you’ll be using, the next step is to determine how you’ll encode your video content. To retain the best quality during conversion, we’ve determined that multi-pass encoding is the best option; below you’ll find the types of multi-pass encodes:

2-Pass Encoding

A file is analyzed thoroughly in the first pass and an intermediate file is created. In the second pass, the encoder looks up the intermediate file and appropriately allocates bits, therefore, the actual encoding takes place during the second pass.
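A minimal sketch of the idea using ffmpeg's two-pass mode (file names and the target bitrate are placeholders): the first pass only analyzes the source and writes a log file, and the actual encode happens in the second pass.

import subprocess

SOURCE, TARGET_BITRATE = "input.mp4", "3000k"  # placeholders

# Pass 1: analysis only; the encoded video is discarded (-f null) and audio is disabled (-an).
# Use "NUL" instead of "/dev/null" on Windows.
subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264", "-b:v", TARGET_BITRATE,
                "-pass", "1", "-an", "-f", "null", "/dev/null"], check=True)

# Pass 2: the encoder reads the pass-1 log and allocates bits accordingly.
subprocess.run(["ffmpeg", "-y", "-i", SOURCE, "-c:v", "libx264", "-b:v", TARGET_BITRATE,
                "-pass", "2", "-c:a", "aac", "output_2pass.mp4"], check=True)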

3-Pass Encoding

Similar to 2-Pass encoding, 3-Pass encoding analyzes the video three times from beginning to end before the encoding process begins. While scanning the file, the encoder writes information about the original video to its own log file and uses that log to determine the best possible way to fit the video within the bitrate limits the user has set for the encoding process.

Per-Title Encoding

A form of encoding optimization that customizes the bitrate ladder of each video based on the complexity of the video file. The ultimate goal is to optimize towards a bitrate that provides just enough room for the codec to encapsulate information to present a perfect viewing experience.
Another way to consider it is that the optimized adaptive package is reduced to contain exactly the information needed for optimal viewing quality. Anything beyond the human eye's ability to perceive is stripped out.
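As a highly simplified, hypothetical illustration of the idea (not Bitmovin's actual Per-Title algorithm), a content complexity score can be used to scale a default bitrate ladder up or down:

# Toy example only: real per-title encoding derives complexity from encoder statistics
# or trial encodes and optimizes each rung against a perceptual quality target.
DEFAULT_LADDER = [(1920, 1080, 4500), (1280, 720, 2800), (854, 480, 1400), (640, 360, 700)]  # (w, h, kbps)

def per_title_ladder(complexity: float) -> list:
    # complexity ~0.5 for static talking heads, ~1.5 for high-motion sports (hypothetical scale)
    return [(w, h, int(kbps * complexity)) for (w, h, kbps) in DEFAULT_LADDER]

print(per_title_ladder(0.6))  # simple content: lower bitrates for the same perceived quality
print(per_title_ladder(1.4))  # complex content: more bits to avoid visible artifacts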
Some of the true magic behind encoders is their ability to choose how to implement or tune a given codec.
In some cases, encoders allow users to configure and optimize codec compression settings, like motion estimation or GOP size and structure. 
It goes without saying, but the best method to ensure top quality, even through an encode, is by supplying high-quality sources; starting with pristine quality videos and using best practices for signal acquisition/contribution. 
More Readings:

Follow Bitmovin on Twitter: @bitmovin

Did you know?

Bitmovin has a range of video streaming services that can help you deliver content to your customers effectively.
Its variety of features allows you to create content tailored to your specific audience, without the stress of setting everything up yourself. Built-in analytics also help you make technical decisions to deliver the optimal user experience.
Why not try Bitmovin for free and see what it can do for you?
We hope you found this guide useful! If you did, please don’t be afraid to share it on your social networks!

Raising the Bar on Quality: How to Upconvert SDR to HDR

HDR support is critical to video workflows at nearly every stage, from encoding to playback. When an organization makes the inevitable switch from SDR to HDR, it becomes clear that the video quality of experience (QoE) improves significantly. As HDR support gradually becomes ubiquitous in the general streaming space, it's clear that OTT services need to provide HDR content to remain competitive as consumer technology reaches resolution saturation and begins to focus on pixel depth instead.
The reality, though, is that HDR has only been commercially available since 2014, when Dolby launched their Dolby Vision format. Even so, most content wasn't being produced with end-to-end HDR distribution in mind until fairly recently, meaning the majority of videos out there are still stuck in the past with the limitations of SDR.

SDR vs HDR – to Convert or not to Convert, that is the question

So what do you do with all of your existing SDR content? Can it be converted to HDR? In a few ways, yes, though results will vary depending on the chosen approach. There's also the old adage "quality in, quality out", or the less optimistic "garbage in, garbage out", meaning that even with the best solutions, the quality of the source material is going to impact the quality of the final product. Existing compression artifacts or camera noise have the potential to be magnified and become more noticeable when upconverted, requiring extra "clean-up" pre- or post-processing steps. Thus one must ask: should SDR content be converted to HDR in the scenarios where it can be? Even if an SDR source file has been successfully converted to the HDR specifications, it might still look exactly the same, depending on the technique applied, so let's take a closer look at what needs to be done and the available options to create noticeable and worthwhile upconversions.

How does SDR become HDR?  

There are two things that need to happen to transform SDR video into the HDR domain. First, the color gamut has to be widened to the desired color space, usually from the standard Rec. 709 to the wider Rec. 2020, which represents a spectrum of color possibilities much closer to the capabilities of what the human eye can detect. 

SDR vs HDR Range Color Gamut Comparison: Rec.709 vs Rec.2020

In addition to better matching the color detection potential of the human visual system, HDR video also aims to more thoroughly mimic and take advantage of our brightness and contrast-detection capabilities. The human eye has evolved to have greater sensitivity for noticing differences in darker tones than it does for lighter ones, creating a non-linear relationship between actual increases in brightness and the increases we perceive. 
Human Visual Brightness Perception Chart (Source: Pixelsham)

If that disparity is not taken into account, bandwidth will be wasted on areas with bright highlights where we are unable to notice any difference, while the darkest regions will be underprovisioned, resulting in loss of detail and visual quality. The solution is to apply what’s known as a gamma function or gamma correction process. This begins when an image or video is captured, with gamma encoding being applied by the camera in order to prioritize and preserve the details and light levels humans can appreciate. Gamma encoding was originally developed to compensate for the non-linear light intensity produced by CRT monitors but is still in use today as monitors with different underlying technologies employ different gamma decoding or gamma expansion functions to reproduce the captured source or artist’s intent as accurately as possible. 
The chart below shows a typical camera gamma correction curve and corresponding transfer function used by a CRT monitor to compensate and display a scene that matches what the human visual system would have produced. 
Standard Camera Gamma Encoding Capabilities (Source: Pixelsham)

In 2011 the ITU adopted BT.1886 as the recommended gamma function for flat panel displays used in HDTV studio production. This standard models the response of CRT monitors better than previous functions, enabling a more consistent viewing experience and also takes into account the black level capabilities of the device, allowing clearer distinctions in the darkest areas for SDR video. 
In order to complete the conversion from SDR to HDR, the gamma function needs to be translated, usually from the discussed BT.1886 to either Perceptual Quantizer (PQ) or Hybrid Log-Gamma (HLG), depending on the desired final format. PQ was developed by Dolby and is the gamma function used by the Dolby Vision, HDR10 and HDR10+ formats. The HLG HDR format shares its name with the gamma function it employs and was jointly developed by the BBC and NHK for broadcast applications. As the name suggests, HLG uses a hybrid approach for its gamma function, combining the SDR gamma curve with a logarithmic function for the HDR range, allowing HLG content to be viewed as intended on both SDR- and HDR-capable monitors.
SDR and Hybrid log-gamma curves (Source: Pixelsham)
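To make the "hybrid" nature of HLG concrete, here is a small sketch of the HLG opto-electrical transfer function (OETF) as specified in ITU-R BT.2100, alongside a conventional power-law gamma for comparison; this is an illustrative addition rather than something from the original post, and the gamma value of 2.4 is an assumption.

import numpy as np

def sdr_gamma_oetf(e: np.ndarray, gamma: float = 2.4) -> np.ndarray:
    # Conventional power-law encoding: dark tones receive more code values than bright ones.
    return np.power(np.clip(e, 0.0, 1.0), 1.0 / gamma)

def hlg_oetf(e: np.ndarray) -> np.ndarray:
    # BT.2100 HLG: square-root (SDR-like) curve for the lower range,
    # logarithmic curve for highlights above 1/12 of peak scene light.
    a = 0.17883277
    b = 1.0 - 4.0 * a              # 0.28466892
    c = 0.5 - a * np.log(4.0 * a)  # 0.55991073
    e = np.clip(e, 0.0, 1.0)
    low = np.sqrt(3.0 * e)
    high = a * np.log(np.maximum(12.0 * e - b, 1e-12)) + c
    return np.where(e <= 1.0 / 12.0, low, high)

scene_light = np.linspace(0.0, 1.0, 5)
print(hlg_oetf(scene_light))        # HLG-encoded signal values
print(sdr_gamma_oetf(scene_light))  # SDR gamma-encoded values for comparison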

Direct Mapping vs Up-Mapping 

The simplest approach to converting SDR to HDR involves directly mapping the equivalent color and brightness values, essentially encapsulating the SDR signal in an HDR container, along with associated metadata. This is the most computationally and cost-effective method to complete the technical conversion to HDR, but the video being identified as HDR will look the same as the SDR source which can lead to an inconsistent and disappointing end-user experience. 
The preferred method from a quality perspective is known as up-mapping or inverse-tone mapping and creates an “HDR-look”, allowing SDR clips or shows to blend into an HDR production or platform. This is achieved through the use of tone-mapping filters or Lookup tables (LUTs) that have been extensively tested and calibrated to accurately recreate and enhance SDR content in the HDR domain. Up-mapping is more computationally complex, and thus more expensive than direct mapping, but the visual difference is significant and necessary for a true SDR to HDR workflow. 
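As a highly simplified, hypothetical illustration of the difference (toy numbers, not a production-grade inverse-tone-mapping operator): direct mapping leaves the signal's brightness intent untouched, while up-mapping expands it into the wider HDR range, typically via calibrated curves or LUTs.

import numpy as np

SDR_PEAK_NITS = 100.0     # nominal SDR reference peak
HDR_TARGET_NITS = 1000.0  # assumed HDR mastering peak for this toy example

def direct_map(sdr_nits: np.ndarray) -> np.ndarray:
    # Same brightness values, simply carried inside an HDR container.
    return sdr_nits

def naive_up_map(sdr_nits: np.ndarray) -> np.ndarray:
    # Toy "inverse tone mapping": expand the range non-linearly so dark tones stay put
    # while highlights are pushed toward the HDR peak (real systems use calibrated
    # tone curves or 3D LUTs instead of this simple power function).
    normalized = np.clip(sdr_nits / SDR_PEAK_NITS, 0.0, 1.0)
    return HDR_TARGET_NITS * np.power(normalized, 2.0)

highlights = np.array([10.0, 50.0, 90.0, 100.0])
print(direct_map(highlights))    # [ 10.  50.  90. 100.]
print(naive_up_map(highlights))  # [  10.  250.  810. 1000.]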

Improving Quality AND Experience

Streaming services with a mixed catalog of old and new, SDR and HDR, should strongly consider up-mapping their SDR content to match the look and format(s) of their HDR content. It’s the best way to ensure the highest quality, most consistent and enjoyable viewing experiences across their entire library. In a crowded market, it will elevate and differentiate their service. 

Planet Earth (Image credit: BBC)

When switching between displaying SDR and HDR content, there may be flickering or flashing as monitors auto-detect and recalibrate, so at the very least, providers with blended SDR/HDR content should direct-map SDR to HDR to maintain consistent technical specifications for the display. This is an important consideration for services that pair HDR content with ad-supported subscription tiers: they should confirm their ad inventory is conditioned to match the HDR specs of their programming so they are not amplifying the disruption the ads already create.

The Future is in HDR 

We are living through a transitional period of mixed content and no doubt will eventually reach a point where everything is being produced in HDR, there’s universal playback support, and all of the legacy SDR videos deemed worthy will have been upconverted. Streaming industry analyst Dan Rayburn sees HDR as a key advantage and predicts we’ll see the fastest rates of adoption in streaming services over the next 2 years. It’s really not a matter of if, but when, so any content that has long-term appeal should be upconverted to HDR as soon as possible to maximize the lifetime value of that process. Keep an eye out for our upcoming post about how to streamline your HDR encoding workflows with Bitmovin, including examples for upconverting SDR to HDR with the Bitmovin encoding API. 
In the meantime, if you want to find out more about SDR vs HDR and the best upconversion approach for your service, get in touch today or check out one of these other great resources:
[Blog Post] HDR vs SDR: Why HDR Should Be Part of Your Video Workflow
[On-Demand Webinar] Deploying HDR for Streaming Services: Challenges and Opportunities
[On-Demand Webinar] How to Enable and Optimize HDR Encoding

Quality of Experience Issues: Levelling the Content Creation Playing Field (ft. Dalet)

This article was co-authored by Dalet’s Solutions Architect, Brett Chambers, and Bitmovin’s Solution Director APAC, Adrian Britton
Not all content is created equal, especially when you’re a publishing house and your content arrives from multiple sources. But that doesn’t mean that your viewer’s quality of experience (QoE) needs to suffer as a result. 
In this blog post, we discuss some of the typical failure modes that we see in mezzanine content, and how the joint solution of Dalet's Ooyala Flex Media Platform and Bitmovin Encoding can help mitigate them with technical metadata, black bar removal, deinterlacing, colour correction, and more! We cover some of the top issues that affect a viewer's quality of experience and how our solutions can help your organization resolve them.

Quality Matters – Factors that affect viewer experience

There are countless factors that can negatively affect your subscribers’ experience – luckily most of them can be resolved with a simple combination of accurate meta-data and specific dashboard inputs. For your convenience we’ve organized the top six factors that will make the most positive impact on your workflow:

Black Bar removal

Perhaps one of the most noticeable factors that may affect a viewer's Quality of Experience is the appearance of those pesky black bars on the sides of a video player during playback. Bars or letterbox artifacts occur when an asset with a non-conforming aspect ratio is introduced somewhere in the workflow. A mezzanine asset typically would not see this, but where content has moved through upstream systems, the likelihood increases. Typically, Bitmovin's tools initiate black bar removal if either an asset's technical metadata requires it or the Ooyala Flex ingest path determines it necessary.
For correction, Bitmovin Encoding contains a cropping filter, which can be controlled through the Ooyala Flex Media Platform to remove the required pixels or frame percentage, thus correcting the image. You can see this process below:
Bitmovin API Reference – CropFilter

  • id (string, read-only) – Id of the resource (example: cb90b80c-8867-4e3b-8479-174aa2843f62)
  • name (string) – Name of the resource. Can be freely chosen by the user.
  • description (string) – Description of the resource. Can be freely chosen by the user.
  • createdAt (string, date-time, read-only) – Creation timestamp formatted in UTC: YYYY-MM-DDThh:mm:ssZ
  • modifiedAt (string, date-time, read-only) – Modified timestamp formatted in UTC: YYYY-MM-DDThh:mm:ssZ
  • customData (object, write-only) – User-specific metadata. This can hold anything.
  • left (integer) – Amount of pixels which will be cropped off the input video from the left side.
  • right (integer) – Amount of pixels which will be cropped off the input video from the right side.
  • top (integer) – Amount of pixels which will be cropped off the input video from the top.
  • bottom (integer) – Amount of pixels which will be cropped off the input video from the bottom.
  • unit (PositionUnit enum, default: PIXELS) – Unit used for the crop values.
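For illustration, a minimal sketch of creating such a crop filter through the Bitmovin REST API using Python's requests library. The endpoint path, header name, and field values below are assumptions made for this sketch rather than details taken from this post; consult the current Bitmovin API documentation for the authoritative reference.

import requests

API_KEY = "YOUR_BITMOVIN_API_KEY"          # placeholder
BASE_URL = "https://api.bitmovin.com/v1"   # assumed base URL

# Crop 64 pixels of black bar from the top and bottom of the input video (example values)
payload = {
    "name": "remove-letterbox-bars",
    "left": 0,
    "right": 0,
    "top": 64,
    "bottom": 64,
    "unit": "PIXELS",
}

response = requests.post(
    f"{BASE_URL}/encoding/filters/crop",   # assumed endpoint for CropFilter resources
    headers={"X-Api-Key": API_KEY, "Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
print(response.json())  # the created filter, including its generated id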
Some aspect ratio issues may also arise around the Pixel Aspect Ratio (PAR), Storage Aspect Ratio (SAR), and/or Display Aspect Ratio (DAR). For a 576i source, the expected values would be as follows:

  • PAR: the aspect ratio of the video pixels themselves. For 576i, it’s 59:54 or 1.093.
  • SAR: the dimensions of the video frame, expressed as a ratio. For a 576i video, this is 5:4 (720×576).
  • DAR: the aspect ratio the video should be played back at. For SD video, this is typically 4:3 or 16:9 (anamorphic).

Technical metadata detection within Ooyala Flex will, in most cases, correctly determine the aspect ratio characteristics, allowing the encoding profile in Bitmovin's dashboard to be adjusted automatically. These are all characteristics of a video asset; getting them wrong, whether in detection, encoding, or playback, results in squashed or stretched playback.

Colour-Correction

Bitmovin’s encoder contains a powerful set of color space, color range, and color primary manipulation logic. While not a full color-grading solution, the encoding workflow can easily be modified to correct for color issues commonly found in mezzanine formats.
Bitmovin API Reference – ColorConfig

  • copyChromaLocationFlag (boolean) – Copy the chroma location setting from the input source
  • copyColorSpaceFlag (boolean) – Copy the color space setting from the input source
  • copyColorPrimariesFlag (boolean) – Copy the color primaries setting from the input source
  • copyColorRangeFlag (boolean) – Copy the color range setting from the input source
  • copyColorTransferFlag (boolean) – Copy the color transfer setting from the input source
  • chromaLocation (ChromaLocation enum) – The chroma location to be applied
  • colorSpace (ColorSpace enum) – The color space to be applied. If used on a Dolby Vision stream, this value must be set to UNSPECIFIED.
  • colorPrimaries (ColorPrimaries enum) – The color primaries to be applied. If used on a Dolby Vision stream, this value must be set to UNSPECIFIED.
  • colorRange (ColorRange enum) – The color range to be applied. If used on a Dolby Vision stream, this value must be set to JPEG.
  • colorTransfer (ColorTransfer enum) – The color transfer to be applied. If used on a Dolby Vision stream, this value must be set to UNSPECIFIED.
  • inputColorSpace (InputColorSpace enum) – Override the color space detected in the input file. If not set, the input color space will be automatically detected if possible.
  • inputColorRange (InputColorRange enum) – Override the color range detected in the input file. If not set, the input color range will be automatically detected if possible.

Deinterlacing

Deinterlacing can be used to drastically improve the visual performance of fast-moving content. When manipulating this aspect of the encode, it's important to have a full view of the input asset when deciding whether to trigger these filters: for content from a particular source, based on a particular technical metadata characteristic, or as part of a human-driven QA process.
Bitmovin API Reference – DeinterlaceFilter

  • id (string, read-only) – Id of the resource (example: cb90b80c-8867-4e3b-8479-174aa2843f62)
  • name (string) – Name of the resource. Can be freely chosen by the user.
  • description (string) – Description of the resource. Can be freely chosen by the user.
  • createdAt (string, date-time, read-only) – Creation timestamp formatted in UTC: YYYY-MM-DDThh:mm:ssZ
  • modifiedAt (string, date-time, read-only) – Modified timestamp formatted in UTC: YYYY-MM-DDThh:mm:ssZ
  • customData (object, write-only) – User-specific metadata. This can hold anything.
  • parity (PictureFieldParity enum, default: AUTO) – Specifies which field of an interlaced frame is assumed to be the first one
  • mode (DeinterlaceMode enum, default: FRAME) – Specifies the method by which fields are converted to frames
  • frameSelectionMode (DeinterlaceFrameSelectionMode enum, default: ALL) – Specifies which frames to deinterlace
  • autoEnable (DeinterlaceAutoEnable enum, default: ALWAYS_ON) – Specifies if the deinterlace filter should be applied unconditionally or only on demand.

Conformance (FPS)

Source content is unlikely to always be captured at the same frame rate: US-sourced content at 29.97 FPS can sit alongside material captured at 25/50 FPS, or even faster. Although the normal role of the encoder is to conform all inputs to a given frames-per-second value, especially when ad insertion is used, there are also use cases where certain content coming through certain workflows needs to maintain different (and higher) values.
Bitmovin API Reference – ConformFilter

  • id (string, read-only) – Id of the resource (example: cb90b80c-8867-4e3b-8479-174aa2843f62)
  • name (string) – Name of the resource. Can be freely chosen by the user.
  • description (string) – Description of the resource. Can be freely chosen by the user.
  • createdAt (string, date-time, read-only) – Creation timestamp formatted in UTC: YYYY-MM-DDThh:mm:ssZ
  • modifiedAt (string, date-time, read-only) – Modified timestamp formatted in UTC: YYYY-MM-DDThh:mm:ssZ
  • customData (object, write-only) – User-specific metadata. This can hold anything.
  • targetFps (number, double, example: 25) – The FPS the input should be changed to.

Post/Precuts

Rarely will content start exactly where you want it to, whether due to color bars, lead-in titles, or simply a longer-form recording that runs too long. Being able to clip the beginning by X seconds and clip the end at Y seconds can avoid the costly exercise of offlining content for craft editing. The Ooyala Flex-powered workflow can control clip-in and clip-out parameters, automating ingest where required.

Ooyala Flex-powered clip-in and clip-out parameter automated workflow (visualized)

Audio-Levelling

Some content is loud, some content is quiet. The audio-filter allows all or selected content to have its volume adjusted, thereby creating a uniform viewer experience. 
Bitmovin API Reference – AudioVolumeFilter

  • id (string, read-only) – Id of the resource (example: cb90b80c-8867-4e3b-8479-174aa2843f62)
  • name (string) – Name of the resource. Can be freely chosen by the user.
  • description (string) – Description of the resource. Can be freely chosen by the user.
  • createdAt (string, date-time, read-only) – Creation timestamp formatted in UTC: YYYY-MM-DDThh:mm:ssZ
  • modifiedAt (string, date-time, read-only) – Modified timestamp formatted in UTC: YYYY-MM-DDThh:mm:ssZ
  • customData (object, write-only) – User-specific metadata. This can hold anything.
  • volume (number, double, required, example: 77.7) – Audio volume value
  • unit (AudioVolumeUnit enum, required) – The unit in which the audio volume should be changed: PERCENT, DB
  • format (AudioVolumeFormat enum) – Audio volume format: U8, S16, S32, U8P, S16P, S32P, S64, S64P, FLT, FLTP, NONE, DBL, DBLP

How will you overcome Quality of Experience issues?

To summarize, there are six key factors that often affect a user's QoE: black bars, poor coloring, incorrect interlacing, non-optimized FPS, content length, and inappropriate audio volume. So, how do you overcome the most glaring Quality of Experience issue: aspect-ratio-related problems?

How to detect Aspect Ratio-related Quality of Experience issues

When it comes to content preparation, everything really starts with the Ooyala Flex Media Platform, extracting Technical Metadata from incoming media. This critical step extracts information such as format, framerate, frame size, colour space, D.A.R., P.A.R., codecs, bitrates, and specific details for codecs in use (e.g. GOP structures, profiles, audio sample rates and bit-depths), audio track counts, timecode start time and duration. 

OoyalaMAM displaying technical metadata for a media asset

This wealth of technical information stored as metadata against the media asset can be easily utilised throughout any workflow orchestration process, enabling the construction of bespoke validation criteria, ensuring the media is compliant. If validation happens to fail, workflow orchestration can take steps to rectify any issue automatically.
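For readers who want to experiment outside of Ooyala Flex, a comparable (much simpler) extraction of technical metadata can be sketched with ffprobe; the file name is a placeholder and this is not the mechanism described above, just an illustration of the kind of information involved.

import json
import subprocess

# Ask ffprobe for container- and stream-level metadata as JSON
result = subprocess.run(
    ["ffprobe", "-v", "error", "-show_format", "-show_streams",
     "-of", "json", "mezzanine_asset.mxf"],  # placeholder file name
    capture_output=True, text=True, check=True,
)
info = json.loads(result.stdout)

# Pull a few of the fields mentioned above: codec, frame size, DAR, frame rate, duration, bitrate
video = next(s for s in info["streams"] if s["codec_type"] == "video")
print(video["codec_name"], video["width"], video["height"],
      video.get("display_aspect_ratio"), video.get("r_frame_rate"))
print(info["format"].get("duration"), "seconds,", info["format"].get("bit_rate"), "bps")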

The evaluation of the technical side of our media is a great start, but what about validation of the media essence?

The Ooyala Flex Media Platform incorporates a number of tools that allow clients to effortlessly integrate with external products; a perfect example of this integration is automated quality control, such as Dalet AmberFin. Automated QC reports from external products can be analysed by Ooyala Flex; workflow orchestration can then take remedial action to correct any QC issues and submit the corrected media to automated QC again to ensure compliance.

OoyalaMAM transcode action manual selection

OoyalaMAM auto-QC report stored as asset metadata

In addition to automated QC, we also have the ability to create tasks for users to perform manual QC. Manual QC tasks can even be augmented from a previous automated QC run by highlighting ‘soft errors’ as temporal metadata annotations in a timeline ‘Review’. This orchestrated usage of human intervention helps ensure that our workflow does not stall, deadlines are met, and most importantly, quality does not suffer. 
For more information on the combined Dalet + Bitmovin capabilities, you can watch the recording of our most recent joint webinar here, and if you’d like to further discuss how to best address your Quality of Experience issues, do reach out:
