hdr – Bitmovin (https://bitmovin.com): adaptive streaming infrastructure for video publishers and integrators.

WWDC 2024 HLS Updates for Video Developers (https://bitmovin.com/blog/hls-updates-wwdc-2024/, Mon, 24 Jun 2024)

The post WWDC 2024 HLS Updates for Video Developers appeared first on Bitmovin.


Apple’s Worldwide Developer Conference is an annual event used to showcase new software and technologies in the Apple ecosystem. It was created with developers in mind, but sometimes new hardware and devices are announced and its keynote presentations have become must-see events for a much wider audience. There is also usually news about changes and additions to the HTTP Live Streaming (HLS) spec and associated video playback APIs. These HLS updates are often necessary to support new features and capabilities of the announced OS and hardware updates. This post will expand on Apple’s “What’s new in HTTP Live Streaming” document, with additional context for the latest developments that content creators, developers, and streaming services should be aware of.

The latest HLS updates for 2024

The first draft of the HLS spec (draft-pantos-http-live-streaming) was posted in 2009, then superseded by RFC 8216 in 2017. Draft updates with significant enhancements are usually published once or twice per year. A draft proposal shared on June 7 details changes that are expected to be added to the spec later this year. Let's look at some of the highlights below.

Updated Interstitial attributes

In May 2021, Apple introduced HLS Interstitials to make it easier to create and deliver interstitial content like branding bumpers and mid-roll ads. Now, new attributes have been introduced for Interstitial EXT-X-DATERANGE tags, aimed at enhancing viewer experience and operational flexibility. 

  1. X-CONTENT-MAY-VARY: This attribute provides a hint regarding coordinated playback across multiple players. It can be set to “YES” or “NO”, indicating whether all players receive the same interstitial content or not. If X-CONTENT-MAY-VARY is missing, it will be considered to have a value of “YES”.
  2. X-TIMELINE-OCCUPIES: Determines if the interstitial should appear as a single point “POINT” or a range “RANGE” on the playback timeline. If X-TIMELINE-OCCUPIES is missing, it will be considered to have a value of “POINT”. “RANGE” is expected to be used for ads in live content.
  3. X-TIMELINE-STYLE: Specifies the presentation style of the interstitial—either as a “HIGHLIGHT” separate from the content or as “PRIMARY”, integrated with the main media. If X-TIMELINE-STYLE is missing, it is considered to have a value of “HIGHLIGHT”. The “PRIMARY” value is expected to be used for content like ratings bumpers and post-roll dub cards.
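Putting these attributes together, a 30-second mid-roll ad break in a live stream might be signaled like the following. The ID, dates, duration, and asset URI are illustrative; CLASS and X-ASSET-URI come from the existing HLS Interstitials scheme:

```
#EXT-X-DATERANGE:ID="ad-break-1",CLASS="com.apple.hls.interstitial",START-DATE="2024-06-24T10:00:00.000Z",DURATION=30.0,X-ASSET-URI="https://example.com/ads/break1.m3u8",X-CONTENT-MAY-VARY="YES",X-TIMELINE-OCCUPIES="RANGE",X-TIMELINE-STYLE="HIGHLIGHT"
```

Here X-TIMELINE-OCCUPIES="RANGE" makes the ad break occupy its real duration on the live timeline, while the other two attributes simply restate their default values.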

More detail is available in the WWDC Session “Enhance ad experiences with HLS interstitials”.

Example timeline for using HLS Interstitials with the new RANGE attribute (source: WWDC 2024)

Signal enhancements for High Dynamic Range (HDR) and timed metadata

HDR10+

Previously, the specification had not defined how to signal HDR10+ content in a multi-variant HLS playlist. Now you can use the SUPPLEMENTAL-CODECS attribute with the appropriate format, followed by a slash and then the brand (‘cdm4’ for HDR10+). The example Apple provided shows the expected syntax: SUPPLEMENTAL-CODECS=”hvc1.2.20000000.L123.B0/cdm4″. For a long time, HDR10+ was only supported on Samsung and some Panasonic TVs, but in recent years it has been added by other TV brands and dedicated streaming devices like Apple TV 4K and a few Roku models.
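For example, a multi-variant playlist entry for an HDR10+ variant might look like the following. The bandwidth, resolution, base CODECS value, and URI are illustrative; the SUPPLEMENTAL-CODECS string follows Apple's example:

```
#EXT-X-STREAM-INF:BANDWIDTH=12000000,RESOLUTION=3840x2160,CODECS="hvc1.2.20000000.L123.B0",SUPPLEMENTAL-CODECS="hvc1.2.20000000.L123.B0/cdm4",VIDEO-RANGE=PQ
hdr10plus/2160p.m3u8
```

Players that understand the 'cdm4' brand can apply the HDR10+ dynamic metadata, while other players fall back to plain HDR10 playback based on the base CODECS entry.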

Dolby Vision with AV1

Dolby Vision has been the more popular and widespread dynamic HDR format (compared to HDR10+), and now that Apple has added AV1 decoders to their latest generation of processors, they've defined how to signal that content within HLS playlists. They are using Dolby Vision Profile 10, Dolby's 10-bit AV1-aware profile. HLS will now support three different Dolby Vision profiles: 10, 10.1 and 10.4. Profile 10 is “true” Dolby Vision, while 10.1 is backward compatible with HDR10 and 10.4 is backward compatible with Hybrid Log-Gamma (HLG). For profiles 10.1 and 10.4, you need to use a SUPPLEMENTAL-CODECS brand attribute and the correct VIDEO-RANGE: 10.1 should use ‘db1p’ and PQ, and 10.4 should use ‘db4h’ and HLG. The full example codec string they provided is: CODECS=”av01.0.13M.10.0.112″,SUPPLEMENTAL-CODECS=”dav1.10.09/db4h”,VIDEO-RANGE=HLG.
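In context, a Profile 10.4 (HLG-compatible) Dolby Vision AV1 variant would be declared like the following; the bandwidth, resolution, and URI are illustrative, while the codec strings are the ones Apple provided:

```
#EXT-X-STREAM-INF:BANDWIDTH=10000000,RESOLUTION=3840x2160,CODECS="av01.0.13M.10.0.112",SUPPLEMENTAL-CODECS="dav1.10.09/db4h",VIDEO-RANGE=HLG
dovi-av1/2160p.m3u8
```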

If you’re interested in Apple’s overall AV1 Support, you can find more details in this blog post.

Enhanced timed metadata support

HLS now supports multiple concurrent metadata tracks within Fragmented MP4 files, enabling richer media experiences with timed metadata (‘mebx’) tracks. This opens new opportunities for integrating interactive elements and dynamic content within HLS streams.

Metrics and logging advancements

The introduction of the AVMetrics API to AVFoundation will allow developers to monitor performance and playback events. This opt-in interface lets you select which subsets of events to monitor and provides detailed insights into media playback, allowing you to optimize streaming experiences further.

More details are available in the AVFoundation documentation and the WWDC 2024 session “Discover media performance metrics in AVFoundation”.

Common Media Client Data (CMCD) standard integration

HLS now supports the CMCD standard, enhancing Quality of Service (QoS) monitoring and delivery optimization through player and CDN interactions. AVPlayer implements only the preferred mode of transmitting data, via HTTP request headers. It does not include support for all of the defined keys, and for now CMCD is only supported in iOS and tvOS 18 and above. There was no mention of support in Safari.
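To make the header-based transport mode concrete, here is a minimal sketch of assembling CMCD request headers as defined by the CMCD spec (CTA-5004). The header names and key names (br, bl, sid, ot, sf) come from that spec; the function, values, and session ID here are hypothetical, and a real player would populate many more keys from live playback state:

```python
# Sketch of building CMCD HTTP request headers per CTA-5004.
# Key names: br = encoded bitrate (kbps), bl = buffer length (ms),
# sid = session ID (quoted string), ot = object type (v = video),
# sf = streaming format (h = HLS).

def cmcd_headers(bitrate_kbps, buffer_ms, session_id):
    """Build the CMCD header fields from (hypothetical) player state."""
    return {
        # CMCD-Object: properties of the object being requested
        "CMCD-Object": f"br={bitrate_kbps},ot=v",
        # CMCD-Request: state tied to this specific request
        "CMCD-Request": f"bl={buffer_ms}",
        # CMCD-Session: stable across the whole playback session
        "CMCD-Session": f'sid="{session_id}",sf=h',
    }

headers = cmcd_headers(6000, 12000, "7a8d3b2c")
print(headers["CMCD-Session"])  # sid="7a8d3b2c",sf=h
```

A CDN that understands CMCD can then use these values for per-session log correlation and delivery tuning, which is exactly the player/CDN interaction the standard enables.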

Bitmovin and Akamai debuted our joint CMCD solution at NAB 2023. You can learn more in our blog post or check out our demo.

FairPlay content decryption key management

As part of ongoing improvements, HLS is deprecating AVAssetResourceLoader for key loading in favor of AVContentKeySession. AVContentKeySession was first introduced at WWDC 2018 and until now, Apple had been supporting both methods of key loading for content protection in parallel. Using AVContentKeySession promises more flexibility and reliability in content key management, aligning with evolving security and operational requirements. This move means any existing use of AVAssetResourceLoader must be transitioned to AVContentKeySession. 

Conclusion

The recent HLS updates show Apple’s commitment to enhancing media streaming capabilities across diverse platforms and scenarios. For developers and content providers, staying updated with these advancements not only ensures compliance with the latest standards but also unlocks new opportunities to deliver compelling streaming experiences to audiences worldwide. 

If you’re interested in being notified about all of the latest HLS updates or you want to request features or provide feedback, you can subscribe to the IETF hls-interest group.

Globo, Google Cloud and Bitmovin: Taking Quality to New Heights (https://bitmovin.com/blog/globo-google-cloud/, Wed, 10 Apr 2024)

The post Globo, Google Cloud and Bitmovin: Taking Quality to New Heights appeared first on Bitmovin.

Globo’s content and reach

When it comes to content scale and audience reach, Globo is on par with Hollywood and the big US broadcasters with over 3,000 hours of entertainment content being produced each year. The viewership numbers are equally impressive with forty-nine million Brazilians watching the daily, one-hour newscast and Globo’s Digital Hub attracting eight out of ten Brazilians with internet access. The Digital Hub hosts a variety of content categories, from news, sports, and entertainment to live events such as the Olympics, Carnival, and the FIFA World Cup. Globo also runs a subscription video on demand (SVOD) service called Globoplay that streams live sports, licensed content, as well as movies and television series produced by Estúdios Globo, the largest content production studio in Latin America.

Globo standard of quality

Globo has worked hard to build and become known for the “Globo Standard of Quality”. This included creating the optimal viewing experience together with award-winning content, delivered in stunning visual quality. To develop that reputation, Globo became one of the first mainstream broadcasters outside of the US to offer content in 4K, adopting it as a new standard across its platforms and devices. It has already produced hundreds of hours of 4K content (including HDR), with over a thousand hours of encoding output for its telenovelas and original series. The early adoption of 4K is even more impressive given that Brazil ranks 79th on the list of countries by Internet connection speed. To deliver high-quality video, operators cannot simply use higher bitrates; they have to find the optimal encoder that achieves quality, speed, and cost-efficiency at the same time. In the past, 4K encoding was accomplished with on-premises hardware encoders. As the next update cycle of the appliances was fast approaching, Igor Macaubas, Head of Online Video Platform, and Lucas Stephanou, Video Platform Product Owner at Globo, decided to conduct a thorough evaluation of vendors, and ultimately chose Bitmovin.

“We are not willing to compromise the visual integrity of our content and we hold ourselves to strict perception-quality standards. Bitmovin’s renowned 3-Pass Encoding exceeded our expectations and ensures that high perceptual quality can still be delivered while streaming at optimal bandwidth levels.”

– Lucas Stephanou (Video Platform Product Owner, Globo)

Globoplay, powered by Bitmovin VOD Encoding on Google Cloud

Globo handles a massive VOD library of over a million titles, and with 12 variants in their HEVC bitrate stack, encoding demands are high. Bitmovin's VOD encoding service running on Google Cloud gave Globo the capability to encode a 90-minute video asset in 14 minutes across the entire HEVC ladder, a real-time factor of roughly 6.4x that had a quantifiable impact on time-to-market. Globo saw the business need for fast encoding turnaround and chose Bitmovin as the clear front-runner in this regard.

Bitmovin VOD Encoding on Google Cloud is an easy-to-use, fully-managed video transcoding software-as-a-service (SaaS). Bitmovin VOD Encoding allows customers to efficiently stream any type of on-demand content to any viewing device. Customers use Bitmovin VOD Encoding for a wide range of on-demand streaming use cases, including Subscription Video on Demand (SVOD), Transactional VOD (TVOD), and Ad-supported VOD (AVOD) services, online training, and other use cases. Bitmovin’s Emmy Award® winning multi-codec outputs and per-scene and per-title content-aware transcoding produce higher visual quality video outputs at lower bit rates than other file-based transcoding SaaS to optimize content delivery and reduce streaming cost. Bitmovin VOD Encoding is available for purchase on Google Cloud Marketplace.

Bitmovin’s 3-Pass Encoding algorithm uses machine learning and AI to examine the video on a scene-by-scene basis. It analyzes the content’s complexity multiple times to optimize intra-frame and inter-frame compression. This helps determine the ideal resolution and bitrate combinations that maximize quality and efficiency. Altogether, this ensures the visual elements of the video are not degraded in the encoding process and prevents unnecessary overhead data that might impact the viewing experience.

Processing HD and 4K video with Globo’s volume requires computing resources that would exceed the CapEx budgets of most companies. This is where Google Cloud’s flexibility and on-demand compute power really shine. Together with Bitmovin’s split-and-stitch technology, single encoding jobs run significantly faster with parallel processing and spikes in demand are handled with ease and throughput that is just not possible with on-premises encoding. Customers also have the option to deploy Bitmovin VOD Encoding as a managed service running in the Bitmovin account or as a single tenancy running in the customer’s Google Cloud account. This allows encoding costs to be applied toward any annual spending commitments.

“Globo is known to set quality standards. We want our viewers to experience our great content in stunning video quality. Our 4K workflows have been relying on hardware encoders, but we wanted to test the power of the cloud and conducted a thorough vendor evaluation based on video quality. Bitmovin’s encoding quality and speed convinced us across the board. And, since using Bitmovin’s encoding service running on Google Cloud, we are spending a fraction of the cost by bringing our capital cost down without spending more on operational cost.”

– Igor Macaubas (Head of Online Video Platform, Globo)

Olympics in 8K

One prime example of this collaboration innovating and pushing the boundaries of video quality is from the Tokyo Olympics in 2021, where 8K VOD content from the Olympics was delivered to viewers at home via Globoplay. This marked the first time that the Olympics were viewable in 8K resolution outside of Japan. 8K video has 16x the resolution of HD and 4x that of 4K, so it requires an enormous amount of processing power and advanced compression to lower the data rates for delivery to end users. 4K and 8K content is also referred to as Ultra High Definition (UHD) and is usually mastered in a High Dynamic Range (HDR) format that allows for brighter highlights, more contrast and a wider color palette. Hybrid-Log Gamma (HLG) is an HDR format that was developed for broadcast applications and backward compatibility with Standard Dynamic Range (SDR) television sets.    

After receiving the HLG mastered content from Intel in Japan, Globo utilized Bitmovin VOD Encoding on Google Cloud’s compute instances for efficient parallel processing with Bitmovin’s VOD Encoding API. 8K/60p transcoding was performed using the High Efficiency Video Coding (HEVC) codec, creating an optimized adaptive bitrate ladder. At this stage, Bitmovin’s 3-pass encoding was key for transforming the content into a compatible size for transport over broadband internet connections, without sacrificing the stunning 8K visual quality. The 8K content was then delivered via Globo’s own Content Delivery Network (CDN) infrastructure to subscribers of Globoplay with 8K Samsung TVs.


“Our 3-Pass Encoding proved to be the right encoding mode. It ensured high perceptual quality could still be delivered while streaming at optimal bandwidth levels. With our split-and-stitch technology running on Google Cloud’s scalable infrastructure, we were able to deliver both speed and quality for this time-sensitive content.”

– Stefan Lederer (CEO, Bitmovin)

Learn more about Bitmovin’s VOD Encoding SaaS here.

Related Links

Google Cloud Media & Entertainment Blog

Bitmovin on Google Cloud Marketplace

Globo – Bitmovin Customer Showcase and Case Study

Streamlining HDR Workflows with Bitmovin (https://bitmovin.com/blog/streamlining-hdr-workflows/, Thu, 11 Nov 2021)

The post Streamlining HDR Workflows with Bitmovin appeared first on Bitmovin.

Since the launch of Dolby Vision, HDR video has quickly grown from a novel innovation, mostly known in videophile and engineering circles, to a ubiquitous and expected feature for the general television-owning public. Content producers and streaming platforms should be producing and delivering HDR video whenever possible in order to best recreate the artist’s vision or the in-person experience for live events, but due to fragmentation in HDR formats and device support, this can be easier said than done. For platforms with a mix of SDR and HDR content (or advertisements), there are technical reasons for upconverting SDR to HDR even beyond improving visual quality. We’ve made it easier to streamline all of your HDR workflows with Bitmovin, but first, let’s take a look at the different HDR formats.

HDR formats

Dolby Vision

The original High Dynamic Range format, Dolby Vision, is a set of HDR technologies developed by Dolby Laboratories that cover content creation, distribution, and playback. Dolby Vision is a dynamic HDR format with accompanying metadata that is used to adjust and optimize every scene or even frame of a video to match the display’s capabilities, accurately reproducing the content creator’s intent. Dolby Vision requires a max luminance of at least 1,000 nits, but is often mastered to a max of 4,000 nits, and can technically support up to 10,000 nits, though that’s well beyond the capabilities of readily available consumer displays.

HDR10

HDR10 is an open HDR standard released by the Consumer Technology Association and is the most widely used HDR format. While Dolby Vision has dynamic metadata that adjusts to changes between scenes, HDR10 has static metadata that is applied uniformly to the entire file. The metadata has three components:

  • SMPTE ST 2086: describes the capabilities of the display used to master this piece of content, including color, white point, and min/max luminance
  • MaxFALL (Max Frame Average Light Level): denotes the average light level of the brightest frame in the video
  • MaxCLL (Max Content Light Level): denotes the light level of the brightest single pixel in the video

As with Dolby Vision, HDR10 content is usually mastered with a max luminance of 1000 or 4000 nits, though the standard does go up to 10,000. 
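The MaxCLL and MaxFALL definitions above can be illustrated with a toy calculation. The frame data here is hypothetical (real encoders derive these values from every pixel of the mastered video), but the math is exactly what the two metadata fields describe:

```python
# Toy illustration of the HDR10 static metadata values defined above.
# Each frame is a list of per-pixel light levels in nits (made-up data).

def max_cll(frames):
    """Max Content Light Level: the brightest single pixel in the video."""
    return max(max(frame) for frame in frames)

def max_fall(frames):
    """Max Frame Average Light Level: the highest per-frame average."""
    return max(sum(frame) / len(frame) for frame in frames)

frames = [
    [100, 200, 300, 400],   # frame average: 250 nits
    [50, 50, 50, 4000],     # frame average: 1037.5 nits, peak pixel: 4000
]
print(max_cll(frames))   # 4000
print(max_fall(frames))  # 1037.5
```

Because the metadata is static, these two numbers are applied uniformly to the whole file, which is the key difference from Dolby Vision's per-scene dynamic metadata.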

HLG

Hybrid-Log Gamma (HLG) was jointly developed by the BBC and NHK as a broadcast television HDR format that has a wider dynamic range while remaining backward-compatible with existing SDR transmission standards. As such, it can be displayed correctly on both SDR and HDR monitors. HLG carries no metadata and is mastered to a max luminance of 1000 nits.   

Image credit: BBC

How to streamline your HDR Workflows

Dolby Vision conversions

Perhaps the flashiest, most branded, and most innovative HDR workflows are those that support Dolby Vision. Bitmovin was the first Dolby Pro Partner to integrate the Dolby Encoding Engine (DEE) to support both Dolby Vision and Dolby Atmos encoding in the cloud. While the quality of Dolby Vision is held in high regard, it is a proprietary format with licensing fees and, as a result, is not universally supported. Because of that, Apple’s HLS specification recommends:

“If you supply HDR content, you SHOULD provide both Dolby Vision and HDR10.”

We’ve made that easier for customers utilizing Dolby Vision mezzanine files by extending our DEE capabilities to allow the creation of Dolby Vision, HDR10, and SDR outputs in a single workflow.  To support this process, we created a new kind of InputStream resource in our API, specifically for Dolby Vision files, that accepts both the video and its accompanying dynamic metadata as a combined input:

// Pair the Dolby Vision video file with its accompanying dynamic metadata
DolbyVisionInputStream dolbyVisionInputStream = new DolbyVisionInputStream();
dolbyVisionInputStream.setInputId(input.getId());
// Path to the dynamic metadata file
dolbyVisionInputStream.setMetadataInputPath(METADATA_PATH);
// Path to the Dolby Vision video mezzanine
dolbyVisionInputStream.setVideoInputPath(VIDEO_PATH);

From there, you can transcode and prepare your high bitrate Mezz file for OTT delivery in progressive, HLS, and DASH formats, with all of the stunning color, contrast, and detail of Dolby Vision. But, if you want to follow Apple’s HDR workflows recommendations (you do), you’ll still need to create HDR10 versions of your content. Further, if you want to provide the best possible quality for all devices, you’ll also need to create SDR variants for displays that aren’t HDR-capable, as viewing HDR10 video on an SDR monitor will result in washed-out colors at best and more than likely an unacceptable experience. 
To make it easier to correctly configure these HDR10 and SDR outputs, we added preset configurations for the different dynamic range formats. The encoder detects the applicable conversions based on the input and the chosen output preset. These can be configured by setting the H265DynamicRangeFormat on the VideoConfiguration resource:

// HDR10 rendition, converted from the Dolby Vision source
var HDR10output = new H265VideoConfiguration();
HDR10output.setDynamicRangeFormat(H265DynamicRangeFormat.HDR10);

// SDR down-conversion for displays that aren't HDR-capable
var SDRoutput = new H265VideoConfiguration();
SDRoutput.setDynamicRangeFormat(H265DynamicRangeFormat.SDR);

From here, you can further tailor the HDR10 and SDR outputs and streaming manifests to the devices they will be targeting. 

Devices capable of supporting HDR

Static HDR Workflow conversions

Our new presets can also be used to cross-convert between static HDR formats, HDR10 and HLG, or to down-convert them to SDR. Again, the encoder will detect and perform the applicable conversions based on the input format and chosen preset. 

SDR upconversion

Last but not least, our new presets can also be used to up-convert SDR content to HDR10 and HLG, via a direct-mapping process that ensures they conform to the specification, matching your HDR content. This is especially important for platforms with a mix of HDR and SDR programming (or advertisements), as switching between them can create jarring flickering effects.

Other codecs

In addition to all of the H265 DynamicRangeFormat presets mentioned, we are also adding presets for converting to other codec-format combinations including H264-SDR, VP9-SDR and VP9-HLG, allowing you to target more potential devices and deliver the highest quality experience to the widest possible audience, all in one streamlined workflow. 
Sign up for a trial or request a demo and let Bitmovin help you handle all the ins and outs of HDR!

Learn more

To learn more about how to build or support all HDR workflows, check out some of our other great HDR-oriented content below:
[On Demand Webinar] How to Enable and Optimize HDR Encoding
[On Demand Webinar] Deploying HDR for Streaming Services: Challenges and Opportunities
[Blog Post] Why HDR Should Be Part of Your Video Workflow
[Blog Post] Raising the Bar on Quality: How to Upconvert SDR to HDR
[Blog Post] HDR: Delivering the Best Quality from Encoding to Playback
[Blog Post] How to Effectively & Affordably Deliver 4K, HDR, & Dolby Vision Content

Raising the Bar on Quality: How to Upconvert SDR to HDR (https://bitmovin.com/blog/sdr-vs-hdr-upconverting-video/, Wed, 06 Oct 2021)

The post Raising the Bar on Quality: How to Upconvert SDR to HDR appeared first on Bitmovin.

HDR support is critical to video workflows at nearly every stage, from encoding to playback. When an organization makes the inevitable switch from SDR to HDR, the video quality of experience (QoE) improves significantly. As HDR support gradually becomes ubiquitous in the streaming space, OTT services need to provide HDR content to remain competitive, as consumer technology reaches resolution saturation and begins to focus on pixel depth instead.
The reality, though, is that HDR has only been commercially available since 2014, when Dolby launched their Dolby Vision format. Even then, most content wasn't produced with end-to-end HDR distribution in mind until fairly recently, meaning the majority of videos out there are still stuck in the past with the limitations of SDR.

SDR vs HDR – to Convert or not to Convert, that is the question

So what do you do with all of your existing SDR content? Can it be converted to HDR? In a few ways, yes, though results will vary depending on the chosen approach. There's also the old adage “quality in, quality out” (or the less optimistic “garbage in, garbage out”): even with the best solutions, the quality of the source material will impact the quality of the final product. Existing compression artifacts or camera noise can be magnified and become more noticeable when upconverted, requiring extra “clean-up” pre- or post-processing steps. So should SDR content be converted to HDR in the scenarios where it can be? Even if an SDR source file has been successfully converted to the HDR specifications, it might still look exactly the same, depending on the technique applied, so let's take a closer look at what needs to be done and the available options to create noticeable and worthwhile upconversions.

How does SDR become HDR?  

There are two things that need to happen to transform SDR video into the HDR domain. First, the color gamut has to be widened to the desired color space, usually from the standard Rec. 709 to the wider Rec. 2020, which represents a spectrum of color possibilities much closer to the capabilities of what the human eye can detect. 

SDR vs HDR Range Color Gamut Comparison: Rec.709 vs Rec.2020

In addition to better matching the color detection potential of the human visual system, HDR video also aims to more thoroughly mimic and take advantage of our brightness and contrast-detection capabilities. The human eye has evolved to have greater sensitivity for noticing differences in darker tones than it does for lighter ones, creating a non-linear relationship between actual increases in brightness and the increases we perceive. 
Human Visual Brightness Perception Chart (Source: Pixelsham)

If that disparity is not taken into account, bandwidth will be wasted on areas with bright highlights where we are unable to notice any difference, while the darkest regions will be underprovisioned, resulting in loss of detail and visual quality. The solution is to apply what’s known as a gamma function or gamma correction process. This begins when an image or video is captured, with gamma encoding being applied by the camera in order to prioritize and preserve the details and light levels humans can appreciate. Gamma encoding was originally developed to compensate for the non-linear light intensity produced by CRT monitors but is still in use today as monitors with different underlying technologies employ different gamma decoding or gamma expansion functions to reproduce the captured source or artist’s intent as accurately as possible. 
The chart below shows a typical camera gamma correction curve and corresponding transfer function used by a CRT monitor to compensate and display a scene that matches what the human visual system would have produced. 
Standard Camera Gamma Encoding Capabilities (Source: Pixelsham)

In 2011 the ITU adopted BT.1886 as the recommended gamma function for flat panel displays used in HDTV studio production. This standard models the response of CRT monitors better than previous functions, enabling a more consistent viewing experience and also takes into account the black level capabilities of the device, allowing clearer distinctions in the darkest areas for SDR video. 
In order to complete the conversion from SDR to HDR, the gamma function needs to be translated, usually from the discussed BT.1886 to either Perceptual Quantizer (PQ) or Hybrid Log-Gamma (HLG), depending on the desired final format. PQ was developed by Dolby and is the gamma function used by the Dolby Vision, HDR10 and HDR10+ formats. The HLG HDR format shares its name with the gamma function it employs and was jointly developed by the BBC and NHK for broadcast applications. As the name suggests, HLG uses a hybrid approach for its gamma function, combining the SDR gamma curve with a logarithmic function for the HDR range, allowing HLG content to be viewed as intended on both SDR and HDR capable monitors.
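The hybrid nature of HLG is easy to see in its opto-electrical transfer function (OETF). The following sketch implements the curve as published in ITU-R BT.2100, with its square-root (SDR-like gamma) segment for the lower signal range and a logarithmic segment for highlights; the function name and usage are our own:

```python
import math

# BT.2100 HLG OETF constants
A = 0.17883277
B = 1 - 4 * A                 # 0.28466892
C = 0.5 - A * math.log(4 * A)  # ~0.55991073

def hlg_oetf(e):
    """Map normalized scene-linear light e in [0, 1] to an HLG signal value."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)              # gamma-like segment (SDR-compatible)
    return A * math.log(12 * e - B) + C      # logarithmic segment for highlights

print(round(hlg_oetf(1 / 12), 3))  # 0.5, where the two segments meet
print(round(hlg_oetf(1.0), 3))     # 1.0, peak light maps to full signal
```

The lower segment is what lets an HLG signal display acceptably on an SDR monitor, while the log segment above it carries the extra highlight range for HDR displays.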
SDR and Hybrid log-gamma curves (Source: Pixelsham)

Direct Mapping vs Up-Mapping 

The simplest approach to converting SDR to HDR involves directly mapping the equivalent color and brightness values, essentially encapsulating the SDR signal in an HDR container, along with associated metadata. This is the most computationally and cost-effective method to complete the technical conversion to HDR, but the video being identified as HDR will look the same as the SDR source which can lead to an inconsistent and disappointing end-user experience. 
The preferred method from a quality perspective is known as up-mapping or inverse-tone mapping and creates an “HDR-look”, allowing SDR clips or shows to blend into an HDR production or platform. This is achieved through the use of tone-mapping filters or Lookup tables (LUTs) that have been extensively tested and calibrated to accurately recreate and enhance SDR content in the HDR domain. Up-mapping is more computationally complex, and thus more expensive than direct mapping, but the visual difference is significant and necessary for a true SDR to HDR workflow. 
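The color-space half of a direct mapping can be sketched in a few lines. This example uses the linear Rec.709-to-Rec.2020 RGB conversion matrix from ITU-R BT.2087; the function name and sample values are illustrative, and a full pipeline would also handle the transfer-function and metadata steps discussed above:

```python
# BT.2087 matrix for converting linear Rec.709 RGB into the Rec.2020 container.
# The values are only re-expressed in the wider gamut, not expanded, which is
# why direct-mapped "HDR" looks identical to the SDR source.
M = [
    [0.6274, 0.3293, 0.0433],
    [0.0691, 0.9195, 0.0114],
    [0.0164, 0.0880, 0.8956],
]

def rec709_to_rec2020(rgb):
    """Convert one linear Rec.709 RGB triple into Rec.2020 coordinates."""
    return [sum(M[i][j] * rgb[j] for j in range(3)) for i in range(3)]

# Sanity check: reference white stays white (each matrix row sums to 1.0)
print([round(v, 4) for v in rec709_to_rec2020([1.0, 1.0, 1.0])])  # [1.0, 1.0, 1.0]
```

Up-mapping, by contrast, would follow this re-expression with calibrated tone-mapping filters or LUTs that actually expand color and brightness into the HDR range.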

Improving Quality AND Experience

Streaming services with a mixed catalog of old and new, SDR and HDR, should strongly consider up-mapping their SDR content to match the look and format(s) of their HDR content. It’s the best way to ensure the highest quality, most consistent and enjoyable viewing experiences across their entire library. In a crowded market, it will elevate and differentiate their service. 

Planet Earth (Image credit: BBC)

When switching between SDR and HDR content, there may be flickering or flashing as monitors auto-detect and recalibrate. At the very least, providers with blended SDR/HDR content should direct map SDR to HDR to maintain consistent technical specifications for the display. This is an important consideration for services combining HDR content with ad-supported subscription tiers: they should confirm their ad inventory is conditioned to match the HDR specs of their programming, so they’re not amplifying the disruption the ads already create.

The Future is in HDR 

We are living through a transitional period of mixed content and no doubt will eventually reach a point where everything is being produced in HDR, there’s universal playback support, and all of the legacy SDR videos deemed worthy will have been upconverted. Streaming industry analyst Dan Rayburn sees HDR as a key advantage and predicts we’ll see the fastest rates of adoption in streaming services over the next 2 years. It’s really not a matter of if, but when, so any content that has long-term appeal should be upconverted to HDR as soon as possible to maximize the lifetime value of that process. Keep an eye out for our upcoming post about how to streamline your HDR encoding workflows with Bitmovin, including examples for upconverting SDR to HDR with the Bitmovin encoding API. 
In the meantime, if you want to find out more about SDR vs HDR and the best upconversion approach for your service, get in touch today or check out one of these other great resources:
[Blog Post] HDR vs SDR: Why HDR Should Be Part of Your Video Workflow
[On-Demand Webinar] Deploying HDR for Streaming Services: Challenges and Opportunities
[On-Demand Webinar] How to Enable and Optimize HDR Encoding

HDR vs SDR: Why HDR Should Be Part of Your Video Workflow https://bitmovin.com/blog/hdr-vs-sdr-video-workflows/ Wed, 22 Sep 2021 17:35:58 +0000 https://bitmovin.com/?p=192403 Is Your Video Stuck in the Past with SDR? Would you do your taxes on a computer from the 1950s?  I’d guess for most of us the answer is no because usually, limiting yourself to using 70-year-old technology is not very likely to provide the best user experience or most accurate results. But when it...

The post HDR vs SDR: Why HDR Should Be Part of Your Video Workflow appeared first on Bitmovin.

Is Your Video Stuck in the Past with SDR?

Would you do your taxes on a computer from the 1950s? 

Content shouldn’t be delivered in quality that’s as old as a computer from the ’50s

I’d guess for most of us the answer is no, because limiting yourself to 70-year-old technology is unlikely to provide the best user experience or the most accurate results. But when it comes to video, if you’re not creating and delivering High Dynamic Range (HDR) content, that’s exactly what you’re doing!
Standard Dynamic Range (SDR) video is bound by the brightness, color, and contrast limitations of the cathode-ray tubes that powered the first home television sets. Even when HDTVs were becoming the pride of local dads in the early 2000s, more pixels were added and the screens got bigger, but they were still playing by the ‘50s rules for the quality of the individual pixels. 
This would remain true until the 2014 Consumer Electronics Show in Las Vegas, when Dolby unveiled their Dolby Vision format and with it, the dawn of the HDR era. Since then, the ecosystem has grown to include multiple competing formats like HLG, HDR10, and HDR10+, and with them a critical mass of supporting playback devices. As more and more manufacturers of playback devices and streaming services advertise their HDR support, the perception of HDR among consumers is shifting from that of a premium feature to a standard expectation. In September 2021, Apple announced that the iPhone 13 Pro would support capturing, editing, and sharing Dolby Vision video, bringing HDR production out of the studio and into the mainstream. 

HDR vs SDR: Why is High Dynamic Range Better?

There are multiple factors that make the argument for HDR vs SDR, beginning with the obvious, higher dynamic range. This means there is more contrast available between the light and dark areas of what can be captured and displayed. 

HDR vs SDR Light Scale

The darks are darker, the lights lighter, and there is more detail in both the shadows and the highlights. This results in more realistic representations of the luminance differences between day, night, exterior, and interior scenes. Having a higher dynamic range enhances depth perception and allows surface materials to be better identified, creating an almost 3D effect when well executed.
Another advantage of HDR formats is that they support wider color gamuts than SDR. While higher dynamic range expands the luminance levels and contrast that can be represented, a wider color gamut expands the range of colors that can be captured and presented, more closely matching the capabilities of the human eye. The spectrum shown in the charts below represents the range of colors typically detectable by humans. SDR devices can capture and display only the colors within the Rec. 709 colorspace triangle in the chart on the left, while HDR utilizes the expanded Rec. 2020 colorspace and can replicate almost all of the colors we can see.

HDR vs SDR Range Color Gamut Comparison: Rec.709 vs Rec.2020

Finally, HDR video is created with a higher bit-depth of at least 10-bits, compared to 8-bit SDR. Bit-depth refers to how many bits are used to represent the color and brightness of each individual pixel. 10-bit allows for over 1 billion different colors to be represented compared to 16.7 million possibilities for 8-bit. Having so many more options allows for finer distinctions, smoother gradients, and more accurate representations of the scene being captured. 8-bit video can suffer from “banding” where there are noticeable borders and jumps in color in areas where there should be a smooth transition. 

8-Bit vs 10-Bit Color Scale
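The 16.7 million and 1 billion figures above fall straight out of the per-channel bit depth:

```python
# Colors representable per bit depth: 2**bits levels per channel, cubed across R, G, B.
def total_colors(bits_per_channel: int) -> int:
    return (2 ** bits_per_channel) ** 3

print(f"{total_colors(8):,}")    # 16,777,216 (~16.7 million, 8-bit SDR)
print(f"{total_colors(10):,}")   # 1,073,741,824 (~1.07 billion, 10-bit HDR)
```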

Creating the Best Visual Experience

Visual artists have been attempting to recreate scenes from real life since time immemorial, from drawing, carving, and painting to photography and cinematography. As the techniques and technologies used by artists have evolved, their representations have become more realistic and captivating. With the end-to-end HDR ecosystem in place, the artists’ toolset is finally approaching the full capabilities and potential of the human visual system. 
Depth of color, grayscale, light, and shadow all play a role in establishing the mood and tone for a visual storyteller and HDR video helps preserve the artist’s creative intent in ways that SDR cannot, delivering a more compelling experience. Similarly, producers of live sports and concert events should be utilizing HDR to create an at-home experience that is as immersive and as close to the in-person experience as possible.

Consumers want UHD & HDR Content

As quality-oriented technologies have become more ubiquitous in the video streaming market, the common consumer has also wised up and now seeks out content that showcases the full potential of their home viewing experience.

“A popular retailer in the US has 263 TVs listed for sale. The first 30 featured results list 21 HDR models – near enough 70% penetration.” – Alex Davies, ReThinkTV

At this point, HDR has become synonymous with top-tier quality. As a result, consumers are generally aware of what their devices are capable of and want to use the best of their features. With most of the top content providers supporting some variation of HDR, it’s critical for streaming services to provide the best possible quality to remain competitive and grab a piece of the market share.
In their day, the CRT televisions of the 1950s were magical innovations that did the best they could with the available SDR video signals, but both should be considered things of the past. HDR is here and the time is now to modernize your video pipeline. Bitmovin can help with solutions for HDR encoding, format conversions and workflow optimizations to help you create the highest quality viewing experiences on every platform and device. Get in touch today!


HDR: Delivering the Best Quality from Encoding to Playback https://bitmovin.com/blog/hdr-encoding-playback/ Wed, 30 Jun 2021 13:37:15 +0000 https://bitmovin.com/?p=176661 With the increase in demand for premium content, you’re being pushed to deliver content faster, with increasingly crisp picture quality. To keep up, video industry leaders have to explore ways to take advantage of the luminance, contrast, and color gamut that HDR offers. You’re probably familiar with HDR (high dynamic range) technology. Its brightness, color...

The post HDR: Delivering the Best Quality from Encoding to Playback appeared first on Bitmovin.

With the increase in demand for premium content, you’re being pushed to deliver content faster, with increasingly crisp picture quality. To keep up, video industry leaders have to explore ways to take advantage of the luminance, contrast, and color gamut that HDR offers.
You’re probably familiar with HDR (high dynamic range) technology. Its superior brightness, color accuracy, and contrast compared to standard dynamic range (SDR) mean that HDR is now common in displays from computers to TVs. Put simply, these displays can read and show images built from a wider gamut of color and brightness.
4K might be the current standard for display technology, but not all 4K is created equal, and different standards of HDR encoding produce disparate results. In this guide, we’ll briefly define HDR, talk about the differences between video and photo HDR, and finally discuss the three most common standards used by the industry to encode HDR content:

  • HDR10
  • HDR10+
  • Dolby Vision

Ready? Buckle up!

What is HDR?

To begin, let’s understand what HDR actually is. Each pixel in a display is assigned a certain level value of color and brightness. SDR limits that to a specific range—for example, very dark shades of gray are just displayed as black and brighter grays are shown as whites.
With HDR, this limitation is broken as the display panels are capable of producing darker blacks and brighter whites. HDR also increases the number of steps between each value of gray, resulting in more shades of gray.

Differences Between Video HDR and Photo HDR

It’s important to know that there’s a difference between how the term is used and applied for videos and photos.
In photography, HDR refers to your camera trying to behave as your eyes do. Your irises can expand and contract as light levels change, something that cameras could not imitate until HDR came to the rescue. High-end cameras achieve HDR by combining several photos taken during a burst. Each photo is taken at a different exposure (i.e., stop), with the amount of light doubled from one to the next. These images are then combined into one stunningly vivid image.

Photo before and after HDR is applied

If that’s not genius, I don’t know what is.
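A toy version of that merge in Python, assuming pre-aligned frames with pixel values normalized to 0..1 (the Gaussian “well-exposedness” weighting is one common choice for exposure fusion, not the only one):

```python
import math

def merge_exposures(exposures):
    """Toy exposure fusion: blend bracketed shots per pixel, favoring values
    that are well exposed (near mid-gray) in each frame.

    `exposures` is a list of frames; each frame is a list of pixel values
    normalized to 0..1. Real HDR merging also aligns frames and works in
    linear light; this sketch only shows the weighted blend.
    """
    n_pixels = len(exposures[0])
    merged = []
    for i in range(n_pixels):
        num = den = 0.0
        for frame in exposures:
            v = frame[i]
            # Gaussian weight peaking at mid-gray: clipped shadows and blown
            # highlights contribute little to the result.
            w = math.exp(-((v - 0.5) ** 2) / (2 * 0.2 ** 2))
            num += w * v
            den += w
        merged.append(num / den)
    return merged

# Three brackets of the same 2-pixel scene: underexposed, normal, overexposed.
dark, mid, bright = [0.05, 0.30], [0.10, 0.60], [0.20, 0.95]
print([round(v, 3) for v in merge_exposures([dark, mid, bright])])
```

Each merged pixel leans toward whichever bracket captured it best, which is exactly why the fused photo keeps detail in both shadows and highlights.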
Unlike photo HDR, video HDR is just one single exposure. There are no fancy tricks with the camera merging multiple images together. Instead, the technology takes advantage of new HDR-compatible displays that make the bright parts of the image really bright, while keeping the dark parts dark. This so-called contrast ratio is greater on HDR-capable televisions than on SDR sets. It makes your TV brighter only in the areas on screen that need it.

Video before and after HDR is applied

Defining Contrast and Color

Contrast is the difference between dark and bright. Color is a separate value based on absolute RED, GREEN, and BLUE levels regardless of the video format. Color is produced by how we perceive light, so a greater range of light means we’ll perceive a greater range of color.
HDR provides a huge improvement to the range of both contrast and color, meaning that the brightest parts of the image can be made brighter than normal resulting in the image appearing to have more depth. The expanded color range means that RED, GREEN, and BLUE primary colors, as well as everything in between, are brighter and much more dynamic (especially between blue and green).
In other words, HDR handles difficult lighting well. Shots with a light background and a dark foreground (where a display’s contrast ratio comes into play) are feasible without much extra effort or equipment.
Contrast ratios range from 1:1 to 21:1 and can be calculated using the formula:

(L1 + 0.05) / (L2 + 0.05)

  • L1 is the relative luminance of the lighter colors.
  • L2 is the relative luminance of the darker colors.

For a better understanding of this topic, check out a definition of relative luminance.
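Here’s that formula as a small helper, taking the two relative luminances directly:

```python
def contrast_ratio(l1: float, l2: float) -> float:
    """Contrast ratio between two relative luminances (0 = black, 1 = white),
    using the (L1 + 0.05) / (L2 + 0.05) formula from the text."""
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio(1.0, 0.0), 2))  # 21.0 — the maximum: pure white vs. pure black
print(round(contrast_ratio(0.5, 0.5), 2))  # 1.0 — identical luminances
```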
The range of colors that a specific display is capable of showing in an RGB system is known as its color gamut. Put another way, it’s the colors that can be created within the boundaries of three specific RGB color primaries.
Rec.709 is a color gamut that’s been in use for TV since 1990. By contrast, the new ITU-R Rec.2020 color gamut has much wider primaries than any previous color space, allowing for more saturated colors.

HDR Range Color Gamut Comparison: Rec.709 vs Rec.2020

HDR Standards

HDR is currently very confusing, with different encoding standards requiring different kinds of hardware and video content. This means that a single standard can’t fit every use case. As mentioned earlier, the three most widely embraced standards are HDR10, HDR10+, and Dolby Vision.

HDR10

HDR10 is a standard that was advanced by the UHD Alliance and adheres to the following conventions:

  • Uses the SMPTE ST 2084 Perceptual Quantizer (PQ) curve
  • Allows maximum luminance of 1000 nits
  • Uses the REC.2020 color space
  • Has 10bpc (bits per channel) encoding
  • Is an open standard
  • Uses static metadata

How Does HDR10 Handle Contrast Metadata?

HDR10 works by providing static metadata to the display. Metadata can be pictured as the information alongside the signal that tells your display how much of the PQ container will be used.
Static metadata is consistent across all displays. The maximum peak and average brightness values are calculated across the entire video instead of a specific scene or frame. This means that it informs your display as to what the brightest and darkest points are in the video, thereby setting a single brightness level for the entire video.
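Concretely, HDR10’s static values are MaxCLL (the brightest single pixel anywhere in the video) and MaxFALL (the highest per-frame average). A sketch of how those whole-video values are derived, with frames given as per-pixel nit levels:

```python
def hdr10_static_metadata(frames):
    """Compute the two HDR10 static brightness values from per-pixel nit levels.

    MaxCLL  (Maximum Content Light Level): brightest single pixel in the video.
    MaxFALL (Maximum Frame-Average Light Level): highest per-frame average.
    `frames` is a list of frames, each a list of pixel luminances in nits.
    """
    max_cll = max(max(frame) for frame in frames)
    max_fall = max(sum(frame) / len(frame) for frame in frames)
    return {"MaxCLL": max_cll, "MaxFALL": max_fall}

# A dim scene followed by one frame with a bright specular highlight.
video = [
    [5, 10, 8, 12],        # night scene
    [40, 1000, 35, 45],    # one very bright highlight pixel
]
print(hdr10_static_metadata(video))  # {'MaxCLL': 1000, 'MaxFALL': 280.0}
```

Because these two numbers cover the whole video, a single bright frame drives the display’s brightness mapping for everything, including the dim scenes — exactly the limitation dynamic metadata addresses.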

Advantages of HDR10 for Content Distribution

  • HDR10 is the most popular HDR standard, thanks to being an open standard.
  • TVs supporting it are among the cheapest HDR sets, expanding affordability and availability.
  • Because it’s royalty-free, it doesn’t cost TV manufacturers or streaming service providers anything to include in their products.
  • It is the most common and widely used format for content.

HDR10+

HDR10+ offers improved picture quality over HDR10. The HDR10 standard provides a single luminance guide value that applies across the entire video. HDR10+, on the other hand, lets content creators add an extra layer of data that updates luminance on a scene-by-scene or even frame-by-frame basis, helping displays deliver a more impressive HDR picture experience.

Advantages of HDR10+

  • Incorporates dynamic metadata, which provides separate information for each scene.
  • The tone-mapping process is more efficient thanks to dynamic metadata.
  • Allows luminance values of up to 4000 nits.
  • Allows resolutions of up to 8K.
  • Delivers HDR quality at a lower cost than Dolby Vision.
  • Is an open standard: though largely developed by Samsung, any brand can use or modify it.
  • Offers more freedom to manufacturers, avoiding Dolby Vision’s locked-in approach and letting brands bring their own strengths and processes into play.

Dolby Vision

Dolby Vision offers the highest quality but is the most demanding standard currently in the market.

  • Allows for luminance values of up to 10,000 nits
  • Incorporates dynamic metadata
  • Uses 12bpc encoding
  • Is a proprietary standard, so you have to pay a license fee to create content in it or incorporate the technology into a device to use it.

How Does Dolby Vision Manage Contrast Metadata?

Dolby Vision incorporates dynamic metadata, which adjusts to the capabilities of your display instead of dealing with absolute values based on how the video was mastered. Dynamic metadata adjusts brightness levels based on each scene or frame, preserving more detail between scenes that are either very bright or very dark.
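To illustrate why per-scene metadata matters, here is a toy per-scene tone mapper. Dolby Vision’s actual mapping curves are proprietary; the knee point and the linear roll-off below are illustrative assumptions, not the real algorithm:

```python
def tone_map_scene(pixel_nits, scene_max, display_peak, knee=0.75):
    """Toy per-scene tone mapping: fit a scene mastered up to `scene_max` nits
    onto a display that can only reach `display_peak` nits.

    When the scene already fits, pass values through unchanged; otherwise keep
    everything below `knee * display_peak` untouched and compress the rest of
    the scene's range linearly toward the display's peak.
    """
    if scene_max <= display_peak:
        return list(pixel_nits)                      # scene fits: no compression needed
    k = knee * display_peak
    out = []
    for v in pixel_nits:
        if v <= k:
            out.append(v)                            # shadows/midtones untouched
        else:
            t = (v - k) / (scene_max - k)            # map [k, scene_max] onto [k, peak]
            out.append(k + (display_peak - k) * t)
    return out

dark_scene = [1, 50, 300, 400]
bright_scene = [100, 900, 2500, 4000]
# With per-scene metadata, the dark scene fits a 1000-nit panel and is left alone.
print(tone_map_scene(dark_scene, scene_max=400, display_peak=1000))
# The bright scene is compressed, but only above the 750-nit knee.
print(tone_map_scene(bright_scene, scene_max=4000, display_peak=1000))
```

With only video-wide static metadata, the dark scene would be compressed using the bright scene’s 4000-nit maximum, needlessly crushing its detail — which is the advantage dynamic metadata delivers.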

Advantages of Dolby Vision

  • Capable of replicating real-life scenes and images; sunrises, for example, look more realistic than in other HDR standards.
  • Most closely expresses the creator’s intention.
  • Adjusts the tone color on a scene-by-scene basis.
  • Produces brighter and more detailed dark scenes.

The Requirements of Common HDR Streaming Services

There’s no shortage of places to find streaming HDR content, though not every company offers the same standards.

  • Amazon Prime: Supports 4K UHD. You’ll need a minimum bandwidth of 25 Mbps. 
  • Netflix: Supports Dolby Vision and HDR10. You’ll need a minimum bandwidth of 25 Mbps.
  • iTunes: Supports HDR and Dolby Vision. You’ll need a minimum bandwidth of 25 Mbps.
  • YouTube: Supports HDR. You’ll need a minimum bandwidth of 15 Mbps.
  • Google Play: Supports 4K UHD. You’ll need a minimum bandwidth of 15 Mbps.
  • Vudu: Supports Dolby Vision and HDR10. You will need a minimum bandwidth of 11 Mbps.

Conclusion

If you’ve made it this far, you’ve covered a lot of ground; you’ve refined your knowledge of HDR, you understand the differences between photography HDR and video HDR, and you’re familiar with the different HDR standards and their advantages. I’ve also shown you where to head to in case you want to stream this content.
Quality video is key to engagement with customers who want to stream video, so it’s important to stay up to date about how to produce content that’s HDR-enabled. Consumer HDR TVs are becoming more and more common, and most smartphones being produced today are capable of displaying HDR content.
Keeping up with advancements in video technology can be daunting, and HDR is nothing short of complex. Consider partnering with Bitmovin—they provide video infrastructure for digital media companies around the globe and have offered support for Dolby Vision (and all other HDR standards) since 2020.

How to Effectively & Affordably Deliver 4K, HDR, & Dolby Vision Content https://bitmovin.com/blog/4k-hdr-dolby-vision-content/ Tue, 29 Sep 2020 08:04:18 +0000 https://bitmovin.com/?p=128821 Quality of experience for video viewing is critical for today’s brands trying to make the most of their content. That’s why, as the 4K TV market is expected to reach $380.9 billion by 2025, many broadcasters are looking to stream their video in 4K, HDR, and Dolby Vision. The problem is that many brands find...

The post How to Effectively & Affordably Deliver 4K, HDR, & Dolby Vision Content appeared first on Bitmovin.

Quality of experience for video viewing is critical for today’s brands trying to make the most of their content. That’s why, as the 4K TV market is expected to reach $380.9 billion by 2025, many broadcasters are looking to stream their video in 4K, HDR, and Dolby Vision.
The problem is that many brands find that creating and delivering high-quality video content can get expensive. If broadcasters follow some video streaming best practices, however, it’s possible to deliver high-quality content at an affordable rate.
With this in mind, let’s take a closer look at what 4K, HDR, and Dolby Vision are exactly, and how broadcasters can optimize their Dolby Vision content delivery.

The Difference Between 4K, HDR, & Dolby Vision

4K is an ultra high definition video resolution—and the next progression beyond high definition—that’s quickly becoming the standard for most new TVs. In fact, 4K video has about four times as many pixels as traditional high definition 1080p video. While this may seem dramatically better, the quality difference may only be noticeable if the video content is native 4K or you’re viewing from a close distance. Besides the video resolution, there are other factors that have an impact on the viewing experience, such as screen size, color range, sharpness, and more.
High Dynamic Range (HDR) is a technology that enables improved brightness, color accuracy, and contrast when compared to standard dynamic range (SDR). That means HDR can do more with the pixels it’s given, whether the content is 4K resolution or not. While HDR has been used in theaters for a while, it’s now making its way into home viewing with HDR-compatible TVs. But not every HDR format is created equal and there are a variety of options to choose from, such as HDR10+, HLG, and Dolby Vision.

What is Dolby Vision?

Dolby Vision is an HDR 4K video format that’s at the forefront of high-quality TV viewing. The format supports 12-bit color, which is 2 bits more than HDR10 and about 68.7 billion total colors. In terms of brightness, Dolby Vision can handle up to 10,000 nits—a measure of screen luminance or brightness—which is near the limit of existing display capabilities. Moreover, it’s a dynamic format that can adjust HDR elements on a granular level as well.
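The roughly 68-billion-color figure comes straight from cubing the 12-bit per-channel levels:

```python
# 12-bit color: 2**12 = 4096 levels per channel, cubed across R, G, B.
levels = 2 ** 12
print(f"{levels ** 3:,}")   # 68,719,476,736 (~68.7 billion), 64x the 10-bit count
```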
Dolby Vision is a relatively new format, yet it far exceeds the capabilities of today’s TV technologies. That means Dolby Vision content is a future-proof option for broadcasters that can implement the necessary workflow now. Dolby Vision is now available as a delivery option on Bitmovin Player and Encoding solutions.

Delivering 4K, HDR, & Dolby Vision Content

If brands want to provide the best viewing experience possible at an affordable rate, they’ll want to follow some best practices for delivering high-quality video content.

Multi-Codec Encoding

Multi-codec streaming is a file compression technique that entails encoding videos into several codecs, and letting the video player choose which files to stream during playback. That means broadcasters can maximize the video quality and viewership of their Dolby Vision content while reducing bandwidth consumption and data usage.
While H.264 (AVC) is nearly universally compatible, it’s not as optimal as H.265 (HEVC), VP9, or AV1. By encoding videos into each of these formats, however, the viewer’s device can choose the best format that it’s compatible with. Multi-codec encoding may require more processing power up front, but the potential cost savings in bandwidth over the long run makes the method a key part of an optimal encoding workflow.

What is Encoding? Visualized
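The player-side selection described in the multi-codec paragraph above can be sketched as a simple preference lookup. The codec labels and preference order here are illustrative, not a specific player’s API:

```python
# Preference order from most to least bandwidth-efficient (per the text);
# the lowercase labels are illustrative placeholders.
CODEC_PREFERENCE = ["av1", "hevc", "vp9", "h264"]

def pick_codec(device_supported):
    """Return the most efficient codec the device can decode, falling back
    toward the near-universal H.264."""
    for codec in CODEC_PREFERENCE:
        if codec in device_supported:
            return codec
    raise ValueError("no playable codec available")

print(pick_codec({"h264", "hevc"}))        # hevc
print(pick_codec({"h264"}))                # h264
print(pick_codec({"h264", "vp9", "av1"}))  # av1
```

Encoding every title in all four codecs is the up-front cost; the bandwidth saved each time a capable device picks AV1 or HEVC instead of H.264 is the long-run payoff.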

Per-title & Multi-pass Encoding

Along with offering multiple compression formats, broadcasters can streamline their Dolby Vision content delivery by optimizing the encoding process itself.
Per-title encoding is a method for reducing the cost of streaming high-quality video by customizing bitrate ladders to a video’s content and complexity. Software that can perform per-title encoding analyzes each video file and builds a unique encoding profile that uses the lowest bitrates possible without affecting the video’s perceivable quality.
Broadcasters can take compression a step further with multi-pass encoding. For example, 3-pass encoding involves a high-level analysis of the video file, a more in-depth analysis of its individual segments, and finally using these insights to encode the file. In many cases, multi-pass encoding leverages machine learning (ML) algorithms to compress future video content even more efficiently based on previous inputs.
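A toy sketch of the per-title idea, scaling an illustrative reference ladder by the complexity score an analysis pass would produce (all kbps figures are made-up examples, not recommendations):

```python
def per_title_ladder(complexity, rungs=(360, 540, 720, 1080, 2160)):
    """Toy per-title ladder: scale reference bitrates by a 0..1 complexity
    score produced by analyzing the source.

    Real per-title encoding measures perceptual quality per rung; the kbps
    figures here are illustrative placeholders.
    """
    reference_kbps = {360: 800, 540: 1500, 720: 2800, 1080: 5000, 2160: 14000}
    # Simple content (low complexity) gets steep savings; complex content
    # keeps bitrate headroom to avoid visible artifacts.
    scale = 0.5 + 0.7 * complexity
    return {height: round(reference_kbps[height] * scale) for height in rungs}

print(per_title_ladder(0.2))   # low-complexity title: every rung well below reference
print(per_title_ladder(0.9))   # high-complexity title: rungs above reference
```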

Adaptive Bitrate Streaming

The key to getting the most out of encoding video files into multiple codecs and bitrates is adaptive bitrate streaming (ABR), which requires a streaming profile that covers the potential viewing capabilities of an audience. 

Bitmovin’s Adaptive Bitrate Ladder

The two most popular protocols that support ABR streaming are MPEG-DASH and HTTP Live Streaming (HLS). These protocols split video files into 2-10 second chunks, and an adaptive video player adjusts the quality of the segments it downloads in real-time to optimize the viewing experience. That way, broadcasters can strike the right balance between efficiency and quality.
For example, if broadcasters only provide a high-bitrate 4K stream, viewership could be greatly limited. On the other hand, only providing low-bitrate streams could leave viewers that have higher-end devices disappointed. ABR streaming is crucial for efficiently delivering multiple streams at varying codecs, bitrates, and more so that audiences can enjoy content with any device or network conditions.

Per-Scene Adaptation

Dolby Vision has the option for both static and dynamic metadata. Static metadata records HDR-related information about an entire video file, while dynamic metadata efficiently defines this information on a scene-by-scene or frame-by-frame basis.
For broadcasters, the dynamic metadata enables per-scene adaptation, which is a technique that identifies specific scenes that can be played at a lower bitrate because they’re indistinguishable to the human eye. This takes ABR streaming a step further to reduce bandwidth consumption and drive cost savings.

Choosing the right technology to fit your 4K, HDR, & Dolby Vision content strategy

Today’s users demand high-quality viewing experiences from their favorite broadcasters. For brands to deliver 4K, HDR, and Dolby Vision content, however, they’ll need a video streaming platform that can keep costs low. This requires powerful encoding and delivery capabilities that reduce bandwidth consumption without compromising end-user compatibility or the viewing experience.
Bitmovin is an online video solution that supports Dolby Vision streaming. The platform allows broadcasters to encode each video on a per-title basis into multiple codecs, and deliver this content using ABR streaming. Moreover, the encoder supports the latest codecs as they’re released, so that broadcasters can reduce their time to market and reach a growing audience, no matter which devices they choose to use. Choose Bitmovin to deliver Dolby Vision content to your viewers without the headache. 
