Bitmovin Improves Support for AV1 Video Encoding for VoD
Martin Smole – Bitmovin, 19 Feb 2024

**Updated in Feb 2024**
Since 2017, Bitmovin has actively worked in video and streaming standardization and has consistently driven standards from inception to implementation. Our founders co-created the MPEG-DASH streaming standard used by Netflix, YouTube, and many others, which is responsible for over 50% of peak U.S. internet traffic. Given our encoding, virtualization, and codec expertise, we are excited to work with and contribute to the AV1 codec. As of today, we have doubled down on bringing AV1 to the market and enabling our customers. We have continued to improve our AV1 video encoding technology, and the performance has drastically improved in the last 5 years. In the following, we provide a high-level summary of the features.

The AV1 Video Codec

First things first: what is AV1 and where does it come from? In September 2015, the Alliance for Open Media (AOMedia) was founded by leading companies from across the media technology landscape. Among them are browser vendors like Google, Mozilla, and Microsoft, hardware vendors like AMD, ARM, Intel, and NVIDIA, and content providers like Amazon and Netflix. The goal of AOMedia is to develop an open, royalty-free, next-generation video coding format that is:

  • Interoperable and open
  • Optimized for the Internet
  • Scalable to any modern device at any bandwidth
  • Designed with a low computational footprint and optimized for hardware
  • Capable of consistent, highest-quality, real-time video delivery, and
  • Flexible for both commercial and non-commercial content, including user-generated content.

The new video coding format AOMedia Video 1 (AV1) is meant to replace Google’s VP9 and compete with HEVC/H265 from MPEG. The Alliance is targeting an improvement of about 50% over VP9/HEVC with only reasonable increases in encoding and playback complexity.
When comparing AV1 with HEVC, probably the biggest competitive advantage of AV1 is that it is royalty-free, especially given the still very uncertain royalty situation around HEVC. Currently, there are two patent pools, MPEG LA and HEVC Advance, plus some HEVC IP owners who have not joined a pool yet. In the end, nobody knows how much they will need to pay in royalties for HEVC. This situation is obviously not satisfactory for the industry, especially for encoding, distribution, content, and hardware companies. (Download the AV1 Datasheet)

Bitmovin and AV1 Video Encoding as of 2024

We have made improvements to the core AV1 encoder itself and have extensively benchmarked it against multiple practical use cases. The turnaround time and speed of encoding have improved by several orders of magnitude. Regarding quality, for encoder release v2.110.0 we found that AV1 can offer the same visual quality at 50% less bitrate than H.264/AVC and 30% less bitrate than H.265/HEVC. That is a pretty significant gain!
In addition to the improvements to the core encoder itself, we have integrated AV1 with all the popular features that our customers have come to love. Here is a quick rundown:

  • Since encoder version 2.104.0, 3-pass encoding with AV1 is generally available. We have found that 3-pass AV1 video encoding provides significantly better bitrate distribution compared to regular 2-pass encoding.
  • Since encoder version 2.109.0, Per-Title encoding with AV1 is available. Per-Title is one of our biggest competitive advantages, and we are proud to offer it for AV1 as well.
  • Since encoder version 2.110.0, AV1 video encoding offers three smart presets. This allows customers to choose an optimal tradeoff between quality and encoding speed.
  • Since encoder version 2.187.0, AV1 video encoding can be used in HLS playlists, together with FairPlay content protection. This enables AV1 playback on compatible Apple devices like the iPhone 15 Pro and new laptops with Apple’s M3 processor.

At Bitmovin, we like to keep our promises 😉. Seven years ago we promised that we would not stop innovating around AV1 and that we would enable our customers in the best possible way with our AV1 solutions. We are excited to announce that we have kept our end of the bargain: we have developed two patent-pending technologies around AV1. We cannot delve into the details yet, but as a teaser, they significantly improve the turnaround times for Per-Title and 3-pass encodings. Keep watching this space for more details soon!
And here is the cherry on top. It’s easy to get all this Per-Title ABR encoding together with the AV1 codec and DASH packaging in a SINGLE API call! Yes, you read that right. We said SINGLE. Can you believe that 🤯!? What are you waiting for? It’s easier than ever to get started with AV1. Try it and reach out to us if you have any questions! We are happy and excited to get you on board with AV1.
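To illustrate what such a single call could look like, here is a minimal Python sketch that assembles a combined Per-Title + AV1 + DASH request payload. The field names are hypothetical illustrations and do not mirror the actual Bitmovin API schema; consult the Bitmovin API documentation for the real request shape.

```python
import json

# Illustrative payload for a single "start encoding" request combining
# Per-Title, AV1, and DASH packaging. Field names are hypothetical and
# do NOT mirror the actual Bitmovin API schema.
def build_start_payload(input_path, output_path):
    return {
        "input": input_path,
        "output": output_path,
        "perTitle": {"autoRepresentations": {}},  # let Per-Title build the ABR ladder
        "codec": "AV1",
        "encodingMode": "THREE_PASS",
        "manifests": [{"type": "DASH"}],
    }

payload = build_start_payload("s3://bucket/source.mov", "s3://bucket/out/")
print(json.dumps(payload, indent=2))
```

The point of the sketch is simply that ladder generation, codec choice, and packaging can all live in one request body rather than separate API workflows.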

How AV1 Video Encoding Development Works

The AV1 codec has its roots in the codebase of Google’s VP9/VP10 codec, with an additional 77 experimental coding tools that have been added and are under consideration. Of those 77 experimental coding tools, only 8 are currently enabled by default (adapt_scan, ref_mv, filter_7bit, reference_buffer, delta_q, tile_groups, rect_tx, cdef), but the performance of the codec is already appealing. The final goal is to get as many promising coding tools as possible into the final version of the codec and afterward freeze the bitstream specification.
The following procedure explains, at a high level, how experiments are added to the AV1 codec:

  1. Coding tools are added as experiments into the AV1 codebase. They are controlled at build time by flags (e.g., --enable-experimental --enable-<experiment-name>).
  2. The hardware team (a group of hardware-focused members within AOMedia) reviews each experiment to ensure it can be implemented in hardware.
  3. Each experiment needs to pass an IP review to ensure no intellectual property rights are violated.
  4. Once the reviews are passed, the experiment can be enabled by default.
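The build-time gating in step 1 can be sketched as a small helper that turns an experiment list into the corresponding configure flags. The default-enabled list mirrors the one given earlier, and flag names simply follow the --enable-<experiment> pattern; this is an illustration, not the actual libaom build system.

```python
# Experiments enabled by default at the time of writing.
DEFAULT_EXPERIMENTS = [
    "adapt_scan", "ref_mv", "filter_7bit", "reference_buffer",
    "delta_q", "tile_groups", "rect_tx", "cdef",
]

def configure_flags(extra_experiments=()):
    """Build the flag list for a hypothetical experimental AV1 build."""
    flags = ["--enable-experimental"]
    flags += [f"--enable-{e}" for e in DEFAULT_EXPERIMENTS]
    flags += [f"--enable-{e}" for e in extra_experiments]
    return flags

# Example: a build that additionally opts into the PVQ experiment.
print(configure_flags(["pvq"]))
```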

As of today, it is not yet certain which experiments will make it into the final codec. However, we want to highlight a few that look promising:

Directional Deringing

Directional deringing is an effective algorithm for removing ringing artifacts from a coded frame. It plugs in right at the end of the decoding process, so it is easy to integrate. Blocks are searched for an overall direction, which is taken into account when applying a conditional replacement filter (CRF), reducing the risk of blurring by only taking obvious ringing patterns into account. It is currently enabled by default.

PVQ (Perceptual Vector Quantization)

This experiment was originally developed for the Daala codec and has the potential to bring significant gains. However, it is also quite difficult to integrate into AV1 because PVQ interacts with many other parts of a codec. Compared to the usual scalar quantization, PVQ offers much more flexibility to control quantization, and it makes techniques like Chroma from Luma or Activity Masking easier. Activity Masking tries to provide better resolution in low-contrast areas, which can be achieved by varying the codebook, something PVQ makes possible.

Chroma from Luma (CfL)

CfL is based on a rather simple idea: take advantage of the fact that edges in the chroma plane are usually well correlated with those in the luma plane. As CfL works entirely in the frequency domain, it can be easily implemented using PVQ: the chroma coefficients can be predicted from the decoded luma coefficients. It is a very promising tool, as it is quite simple to compute and provides nice benefits with much cleaner colors.
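The core idea can be sketched as fitting a single scaling factor alpha between the AC (mean-removed) luma and chroma samples of a block. This is a simplified illustration of the principle only; the actual CfL tool operates on transform coefficients and subsampled blocks.

```python
# Toy chroma-from-luma: fit alpha so that mean-removed luma predicts
# the chroma plane, then signal only alpha plus the residual.
def cfl_alpha(luma, chroma):
    luma_mean = sum(luma) / len(luma)
    chroma_mean = sum(chroma) / len(chroma)
    luma_ac = [l - luma_mean for l in luma]
    chroma_ac = [c - chroma_mean for c in chroma]
    # Least-squares fit: alpha = <luma_ac, chroma_ac> / <luma_ac, luma_ac>
    num = sum(l * c for l, c in zip(luma_ac, chroma_ac))
    den = sum(l * l for l in luma_ac)
    return num / den if den else 0.0

def cfl_predict(luma, chroma_dc, alpha):
    luma_mean = sum(luma) / len(luma)
    return [chroma_dc + alpha * (l - luma_mean) for l in luma]

luma = [10, 20, 30, 40]
chroma = [5, 10, 15, 20]  # perfectly correlated with luma: alpha = 0.5
alpha = cfl_alpha(luma, chroma)
print(alpha, cfl_predict(luma, chroma_dc=12.5, alpha=alpha))
```

When the planes are well correlated, as above, the prediction is exact and the residual to encode is zero, which is where the bitrate savings come from.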

Bitmovin AV1 VoD and Live Encoding

The Bitmovin encoding service now supports AV1 video encoding for both VoD and Live through our cloud encoding service. Currently, AV1 video encoding with common encoding tools is a very time-consuming process, as can be seen in the screenshot below, taken from a Lenovo T540p notebook with an i7-4800MQ and 8GB RAM running Ubuntu 14.04. It would take 8 hours and 42 minutes to encode a 40-second 1080p@24fps sequence (Tears of Steel teaser) with a target bitrate of 1.5Mbps.

Bitmovin encoding AV1

The encoding runs with about 1.93 fpm (frames per minute) which would translate to 0.032 fps (frames per second). If you want to achieve real-time with 24 fps you would need at least 746 times the computing power on a single machine, which is not very practical in a real-world scenario. Clearly, we need another approach to encode with reasonable speeds, especially when it comes to live streaming.
Thanks to our chunk-based encoding approach, which allows us to scale a single encoding across multiple instances, we can encode AV1 with reasonable turnaround times, and it is also possible to use AV1 for live streams. Our chunked encoding speeds up an encoding almost linearly with the number of instances added to the encoding cluster, and this approach works with our cloud encoding the same way it works with our on-premise setups based on Kubernetes and Docker. Consequently, we can reach the same encoding speeds for AV1 that our customers have come to expect for H264, VP9, and HEVC, which makes the codec effectively usable for media companies and content providers throughout the industry.
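Conceptually, the chunked approach splits the frame range into independent chunks, encodes them in parallel, and stitches the results back in order. This is a toy sketch of that idea, with encode_chunk standing in for a real per-chunk encoder invocation:

```python
from concurrent.futures import ThreadPoolExecutor

def encode_chunk(chunk):
    # Stand-in for launching a real encoder on one chunk of frames.
    start, end = chunk
    return f"encoded[{start}:{end}]"

def chunked_encode(total_frames, chunk_size, workers=4):
    # Split the frame range into fixed-size chunks...
    chunks = [(i, min(i + chunk_size, total_frames))
              for i in range(0, total_frames, chunk_size)]
    # ...and encode them in parallel; map() preserves chunk order,
    # so the segments can be concatenated directly.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_chunk, chunks))

segments = chunked_encode(total_frames=960, chunk_size=96)
print(len(segments), segments[0])
```

Because the chunks are independent, throughput grows almost linearly with the number of workers, which is the property the text relies on.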

How AV1 Video Encoding Works_Workflow_Image
How AV1 Video Encoding Works

We also encoded the ToS teaser with our AV1 encoder in the cloud with the default configuration, where we achieved 7 fps, about 219 times faster than the test with the Lenovo notebook. This is already pretty impressive; however, we were not satisfied with the speed, as it was still below real time. So we tried an enterprise setup by simply adding more instances to the encoding process. The resulting encoding speed was 36 fps, about 1125 times faster than the single Lenovo notebook.
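As a sanity check, the speedup figures above follow directly from the measured frame rates (using the rounded 0.032 fps single-machine figure, as in the text):

```python
# Reproducing the speedup arithmetic from the measurements above.
notebook_fps = 1.93 / 60           # 1.93 fpm ≈ 0.032 fps on the notebook
print(round(24 / notebook_fps))    # compute needed for 24 fps real time: ~746x
print(round(7 / 0.032))            # default cloud configuration: ~219x faster
print(round(36 / 0.032))           # enterprise setup: ~1125x faster
```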

AV1 Video Encoding of Tears of Steel_Workflow_Image
Encoding Tears of Steel with AV1 video encoding

In addition, we don’t have to compromise quality for speed: unlike other encoding vendors, our encoder does not need to sacrifice quality to reach a certain speed on a single instance. Because our approach is not bound to the hardware restrictions of a single instance, we can add more instances to an encoding cluster to generate the quality our customers have configured within a reasonable time, or in real time for live streams. With our chunk-based implementation of the AV1 video codec, we can even encode videos with AV1 faster than real time without compromising quality.

How to implement an AV1 Livestream

With traditional codecs like H264, a live stream of comparable quality typically requires around 4 to 15 Mbps, so AV1 could reduce your CDN and storage costs by up to 10x.
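The claimed savings are simple to verify by comparing the 1.5 Mbps AV1 rendition used in the workflow below against a 4 to 15 Mbps H264 stream of comparable quality:

```python
# Back-of-the-envelope bandwidth savings for the figures quoted above.
av1_mbps = 1.5
h264_low_mbps, h264_high_mbps = 4, 15

savings_low = h264_low_mbps / av1_mbps    # ≈ 2.7x at the low end
savings_high = h264_high_mbps / av1_mbps  # 10x at the high end
print(f"{savings_low:.1f}x to {savings_high:.0f}x less bandwidth and storage")
```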
The setup of our AV1 live workflow that we will showcase consists of the following components:

  • OBS RTMP mezzanine stream, 12Mbps 1080p@30fps
  • Bitmovin Distributed AV1 Cloud Encoder running in Google Cloud receives an RTMP ingest and transcodes to 1.5Mbps 1080p@30fps segmented WebM. Segments will be directly transferred to a Google Cloud Storage bucket.
  • The Bitmovin Distributed AV1 Cloud Encoder also generates HLS and MPEG-DASH manifests that will be transferred to the Google Cloud Storage bucket. Enabled experiments of the AV1 codec are: adapt_scan, ref_mv, filter_7bit, reference_buffer, delta_q, tile_groups, rect_tx, cdef
  • Native playback on a desktop with a Bitmovin Player based on aomdec and ffplay

AV1 live stream screen shots
Our AV1 encoder generates segmented WebM output that can be used with HLS or MPEG-DASH for VoD and Live. However, as AV1 was not yet supported by any browser at the time, we had to write our own player capable of playing back our AV1 live stream. We updated the aomdec application to download and decode the AV1 chunks, which can be seen in the left console window. Fortunately, decoding is not as resource-intensive as encoding, so you can decode the AV1 stream on normal hardware without special requirements; the same Lenovo notebook (i7-4800MQ, 8GB RAM running Ubuntu 14.04) that was nowhere near capable of encoding this video in real time can easily play back AV1 in software. After the decoding step, we pipe the decoded YUV frames to ffplay to display the stream in a window, as you can see in the screenshot above. We plan to contribute this functionality back to aomdec after a technical cleanup of the current implementation.
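The decode-and-display pipeline described above can be sketched with Python's subprocess module: aomdec decodes a chunk to its stdout and ffplay reads the raw stream from stdin. The exact tool flags shown are assumptions and may differ between aomdec/ffplay versions, so treat this as an outline rather than a tested command line:

```python
import subprocess

def build_pipeline(av1_file):
    # aomdec writing decoded frames to stdout (flag syntax assumed);
    # ffplay reading the piped stream from stdin.
    decode_cmd = ["aomdec", "-o", "-", av1_file]
    play_cmd = ["ffplay", "-"]
    return decode_cmd, play_cmd

def play(av1_file):
    # Wire the two processes together: aomdec stdout -> ffplay stdin.
    decode_cmd, play_cmd = build_pipeline(av1_file)
    dec = subprocess.Popen(decode_cmd, stdout=subprocess.PIPE)
    subprocess.run(play_cmd, stdin=dec.stdout)
    dec.wait()

# The chunk filename here is purely illustrative.
decode_cmd, play_cmd = build_pipeline("live_chunk_000.webm")
print(decode_cmd, play_cmd)
```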

A Practical Quality Comparison

Although the bitstream of AV1 is not finalized yet and much work remains to further improve the quality of the codec, we wanted to get a snapshot of its current state and compare its quality with AVC/H264, HEVC/H265, and VP9. For that purpose, we made two different quality comparisons, the first with two objective metrics, PSNR and SSIM. PSNR does not always correlate well with perceived quality but is the de facto standard for video quality comparisons. SSIM is a perception-based quality metric that should give better results with regard to perceived quality.
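For reference, PSNR is straightforward to compute from the mean squared error between the reference and distorted frames. A minimal sketch for 8-bit samples (flat pixel lists here; real implementations operate on full frames per plane):

```python
import math

def psnr(reference, distorted, max_value=255):
    """PSNR = 10 * log10(MAX^2 / MSE), in dB."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, distorted)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_value ** 2 / mse)

# Tiny example: four pixels with small errors.
print(round(psnr([100, 120, 140, 160], [101, 119, 142, 158]), 2))
```

SSIM is considerably more involved (local means, variances, and covariances over sliding windows), so it is not sketched here.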
For the second comparison, we chose to make a side-by-side visual comparison between AV1 and the other codecs. This comparison targets a practical use case where the resulting content can be used for Adaptive Bitrate Streaming (ABR). Therefore, we used a fixed Group of Pictures (GOP) size for our experiments and Variable Bitrate (VBR) encodings with a target bitrate. This approach is established in the industry, but results can differ from scientific evaluations that purely target abstract use cases and theoretical encoder performance through the HM (HEVC) and JM (AVC) reference software, which have little practical relevance in the industry.
Let’s first start with the objective quality comparison with PSNR. We encoded the open-source movie Sintel from the Blender Foundation with VBR to the following target bitrates: 100Kbps, 250Kbps, 500Kbps, 1Mbps, 2Mbps, 4Mbps and calculated PSNR and SSIM for the bitrate that has actually been achieved by the individual codec (typical codecs in VBR mode do not hit the target bitrate exactly).
The following encoding settings for the different codecs were used in the Bitmovin Encoding Service:

  • AVC/H264:
    GOP Size: 96 frames (4 seconds), Me_range: 16, Cabac: true, B-Adapt: 2, Me: UMH, Rc-Lookahead: 50, Subme: 8, Trellis: 1, Partitions: All, BFrames: 3, ReferenceFrames: 5, Profile: High, Direct-Pred: Auto
  • HEVC/H265:
    GOP Size: 96 frames (4 seconds), Sao: 1, B-Adapt: 2, CTU: 64, Profile: Main, BFrames: 4, Rc-Lookahead: 25, WeightP: 1, MeRange: 57, Ref: 4, Subme: 3, Tu-Inter-Depth: 1, Me: 3, No-WeightB: 1, Tu-Intra-Depth: 1
  • VP9:
    GOP Size: 96 frames (4 seconds), Cpu-used: 1, Tile-columns: 4, Arnr-Type: Centered, Threads: 4, Arnr-maxframes: 0, Quality: Good, Frame-Parallel: 0, AQ-Mode: none, Arnr-Strength: 3, Tile-Rows: 0
  • AV1:
    Build f3477635d3d44a2448b5298255ee054fa71d7ad9, Enabled experiments by default: adapt_scan, ref_mv, filter_7bit, reference_buffer, delta_q, tile_groups, rect_tx, cdef
    Passes: 1, Quality: Good, Threads: 1, Cpu-used: 1, KeyFrame-Mode: Auto, Lag-In-Frames: 25, End-Usage: VBR

PSNR comparison graph - AV1, VP9, HEVC, H264
The above diagram clearly shows that AV1 already outperforms all the other codecs at every bitrate setting. For bitrates of 1Mbps and higher, the quality difference is already quite large (> 0.5 dB, which is usually clearly visible). VP9 and HEVC/H265 are very similar from a PSNR perspective; however, VP9 was the codec that overshot the target bitrate by far the most.
SSIM comparison graph - AV1, VP9, HEVC, H264
We also compared the four codecs with SSIM. The results can be seen in the above diagram and are quite similar to PSNR, with some slight differences. AV1 is still the best-performing codec across all bitrates, and AVC/H264 lags behind. Interestingly, however, AVC/H264 catches up as the bitrate increases. An explanation could be that at higher bitrates all codecs approach the quality of the source material, which results in only minor differences between them.
Additionally, we created several side-by-side quality comparisons where we experimentally changed the target bitrate for each codec to reach an average of 500 Kbps. Below you can see the quality comparisons between the encodings comparing the quality of Bitmovin AV1 video encoding with AVC/H264, HEVC/H265, and VP9. We used the well-known Tears of Steel teaser that is 40 seconds long with a 1080p resolution for the comparison, selecting a complex scene that is hard to encode.
AV1 vs H264 side-by-side comparison
When comparing AV1 video encoding with AVC/H264 the quality difference is very obvious as expected. We can clearly see multiple encoding artifacts and blocking in the right part of the image that has been encoded with AVC/H264. In contrast, the left part with AV1 Video Encoding looks much cleaner without obvious encoding artifacts.
AV1 vs VP9 side-by-side comparison
The quality difference between AV1 and VP9 is not as obvious as with AVC/H264, but still quite visible. Especially the borders of the tiles of the sphere show encoding artifacts, and the overall picture in VP9 appears to have quite a bit of noise. We can also identify some blocking artifacts that are not visible in AV1.
AV1 vs HEVC side-by-side comparison
HEVC/H265 visually looks a bit better than VP9; however, it still has visible encoding artifacts, especially in the lower part of the image and around the arm of the man in the red coat. When we look closely at the arm, we can see that the color is not encoded as cleanly as with AV1 and shows some noise.

Conclusion

Bitmovin’s culture and vision have always been to be a technology leader and our passion for video means we consistently tackle the most complex video problems. Why? Because it’s fun and challenging and our team loves a challenge!
Besides that, there are already use cases for an AV1 video encoding where you could use it as your mezzanine format to preserve a high-quality version of your video at a low bit rate that can be used to create your adaptive bitrate renditions or other formats. Using AV1 for that use case would decrease your storage footprint and speed up transfer times inside of your data center or for upload to the cloud.
Furthermore, with the companies behind AOMedia, like AMD, ARM, Intel, NVIDIA, Google, Microsoft, Mozilla, Netflix, and Amazon, it should not take too long to get broad support for AV1. AMD, Intel, and NVIDIA cover the desktop market quite nicely, and ARM and Intel the mobile market. Additionally, the major browser vendors, Google, Microsoft, and Mozilla will make sure that the codec finds its way into the browsers soon after the bitstream freeze. Google, Netflix, and Amazon will make sure that AV1 content will be available quickly and that will further drive adoption and hardware support.
AV1 is the next generation video codec and it’s on track to deliver a 30% improvement over VP9 & HEVC – Learn More


Improving Cloud Scalability: Lessons Learned from Bitmovin’s Experience
Martin Smole – Bitmovin, 3 Apr 2023

Introduction

One of the main benefits of using a cloud-based SaaS product is the freedom to experiment, pivot and react to changes in your business without the overhead cost and time of buying and maintaining your own systems. The cloud lets you adjust on the fly and can scale infinitely and on-demand to meet your requirements…right? That expectation means creators of SaaS products like Bitmovin need to be able to react without warning to changing customer needs. Scaling up is an ongoing challenge of optimizing internal systems while bound by the real-world limitations of cloud infrastructure providers. Overcoming one obstacle often results in a new one presenting itself, so the work on improving scalability is never done.

Over the past year, we were fortunate to see a large increase in the volume of encoding jobs processed by our system, which included one case where a customer submitted a batch of 200,000 files! Unexpected spikes like that can cause some stress, but they also help uncover the areas where a little improvement can go a long way toward taking your scale and stability to another level. This post will share the lessons we learned and improvements we’ve made in recent months to enable higher throughput and safe scaling through future spikes in demand.

Highlights of Recent Performance Improvements

Here are some of the highlights of our scalability work done since late last year: 

  • Average time for scheduling decisions dropped from 40 seconds to less than 2 seconds (under high load)
  • 3x more efficient message processing in our API services
  • 4x more efficient Encoding Start Calls, drastically reducing 504 timeouts
  • Added tracing to core services, enabling continuous improvements and bottleneck reduction. So far, we’ve been able to decrease the total volume of database statements executed by a factor of 6 and the average statement size by a factor of 4, greatly improving our overall capacity and scalability.
  • Implemented a “Safe Scaleup” algorithm in our scheduler to safeguard the overall system from peaks in pressure when one or more customers submit a large number of encoding jobs in a short period of time

Keep reading for more detail about how we were able to make these improvements.

Speeding Up Scheduling Decisions

Customers of Bitmovin’s VOD Encoding service are effectively sharing the pooled resources of our cloud deployments and we have safeguards in place to ensure fair use of those shared resources. We also have a lot of monitoring and alerting in place to ensure everything is performing as efficiently as expected. After seeing longer than expected queuing times and underutilized encoding capacity, the team added time traces to the logs to investigate further. The root cause was identified as slow scheduling decisions, taking an average of 40 seconds with peaks of up to 4 minutes, prompting them to make these changes:

  • Removed penalty for the number of finished encodings –  Our fair scheduling algorithm took into account the number of recently completed encodings, which could impact a customer’s priority more than necessary. The team also saw the calculation took up to 20 seconds and didn’t provide a huge benefit, so they decided it was safe to remove. 
  • Status updates for encoding jobs were moved from sequential to batch processing.
  • Change from UUID v4 to UUID v1 – Database task insertion was found to be very slow, and the team traced it to inefficient index updates caused by the randomness of UUID v4 keys. MySQL is optimized for monotonically increasing keys, so changing to UUID v1, which has a time component and is therefore roughly monotonically increasing, proved to be much more efficient.
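The difference between the two UUID versions is easy to demonstrate with Python's standard uuid module: version 1 embeds a timestamp, so successive keys generated in one process carry non-decreasing time values, while version 4 is entirely random and scatters inserts across a B-tree index:

```python
import uuid

# UUID v1 embeds a 60-bit timestamp (the .time field), so keys generated
# back-to-back are time-ordered; UUID v4 is fully random.
v1_a, v1_b = uuid.uuid1(), uuid.uuid1()
v4_a = uuid.uuid4()

print(v1_a.time <= v1_b.time)          # timestamps are non-decreasing
print(v1_a.version, v4_a.version)      # 1 and 4
```

Note that the canonical string form of a v1 UUID puts the low timestamp bits first, so time-ordering of the raw timestamp does not automatically make the string representation sort chronologically; schemes that reorder the bytes address that.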

Together, those changes brought the average scheduling decision time from 40 seconds (under load) down to a constant 2 seconds, which was a pretty noticeable improvement. 

improving cloud scalability - Bitmovin

Looking closer at the timeline after improvements, we see the average decision time down to 2 seconds and even the worst-case outliers are under 12 seconds.

improving cloud scalability - Bitmovin
Graph showing average scheduling decision time of 2 seconds after improvements were made

Increasing Message Throughput

We have several internal messaging services to coordinate tasks, one of which is responsible for processing the messages that track the status of encoding jobs, including the update that signals when a job is complete. Slow processing of those messages means the status is not set to FINISHED and gives the perception of much longer encoding times.

Our statistics service is responsible for collecting the statistical data of encoding jobs like the number of output minutes produced and general billing related tasks. During extremely high loads of encoding jobs, there are a LOT of messages that need to be processed. However, the team found that the high volume of messages, combined with optimistic locking exceptions on the database layer could lead to failures during message processing. All the subsequent retries would further strain the system and the throughput of successful messages processed would end up being fairly poor.

The team added additional metrics to analyze the root cause of the performance bottlenecks occurring under extreme loads. This analysis led to 3 main changes that highly improved the throughput of the message processing in both services:

  • Use UUID v1 instead of UUID v4 as the datatype for indexed columns
  • Fix automatic dirty checking of Hibernate 
  • Optimization to skip unnecessary subtask updates 

Reducing Timeouts of Encoding Start Calls

Especially for encoding jobs with a big configuration (e.g., a lot of Streams, Muxings, etc.), we saw that the processing time of an Encoding Start call could sometimes exceed 60 seconds. Our API gateway is configured with a 60-second timeout, so our customers were getting 504 Gateway Timeout errors. Many customers did not have good retry mechanisms or strategies in place, so we shared this article that documents our best practices for handling 5xx errors.
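A typical client-side strategy for transient 5xx responses like these is exponential backoff with jitter. Here is a minimal sketch, where the simulated call is a stand-in for a real HTTP request; it is an illustration of the general pattern, not Bitmovin's documented retry policy:

```python
import random
import time

def with_retries(call, max_attempts=5, base_delay=0.5):
    """Retry a call returning an HTTP status code on 5xx responses."""
    for attempt in range(max_attempts):
        status = call()
        if status < 500:  # success, or a non-retryable client error
            return status
        if attempt < max_attempts - 1:
            # Exponential backoff plus a little jitter to avoid
            # synchronized retry storms across clients.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
    return status

# Simulated API that returns 504 twice before succeeding.
responses = iter([504, 504, 200])
result = with_retries(lambda: next(responses), base_delay=0.001)
print(result)
```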

When starting an encoding job the Bitmovin API services have to gather all configured data and combine them into a single configuration of the whole encoding. This is done synchronously so that we can validate if the setup of the encoding is correct and inform the customer immediately in case there is any misconfiguration. This is especially slow for encodings with lots of configured streams and muxings as a lot of information for each entity has to be retrieved from the database (for example Input Streams, Muxings, Filters, Sprites, Thumbnails, etc.). Additionally, some updates on the stream level are done before scheduling an encoding to ensure data consistency.

The team needed to add additional performance metrics to this call for an in-depth investigation of performance bottlenecks. This allowed us to find and fix problematic queries and update behaviors. The improvements implemented led to a reduction of the average Encoding Start call time by a factor of four. Additionally, the fix includes improved observability into the Encoding Start process to make further improvements possible.

improving cloud scalability - Bitmovin
Average Encoding Start call duration drops below 10 seconds after improvements

Observability and Performance Improvements

Since the beginning of the year, the team has worked hard to enable tracing with OpenTelemetry across all of our core services. With it, all inbound as well as outbound service calls are traced, including:

  • Sync/async service calls
  • Messaging and Events
  • Resources
  • Database

That provides live cross-service performance visibility, which is important for identifying bottlenecks in the whole workflow. Service-specific RED metrics (Rate, Errors, Duration) are recorded to allow further investigation and optimization. The team also added performance metrics to the critical parts of our core services. With those observability improvements, we were able to make the following improvements to our overall encoding orchestration workflow, which has a huge impact on scalability:

  • Reduced the number of executed DB statements by a factor of 6
  • Reduced DB statement size by a factor of 4
  • Reduced internal service calls by a factor of 3
  • Reduced the complexity of internal calculations by a factor of 2
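RED-style metrics can be approximated even without a tracing library. This hand-rolled sketch records rate, errors, and duration per call site, loosely mirroring what instrumentation like OpenTelemetry reports (it is not the actual Bitmovin implementation):

```python
import time
from collections import defaultdict

# Per-call-site RED metrics: Rate (count), Errors, Duration (total seconds).
metrics = defaultdict(lambda: {"count": 0, "errors": 0, "total_s": 0.0})

def traced(name):
    """Decorator that records count, errors, and wall time per call site."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            except Exception:
                metrics[name]["errors"] += 1
                raise
            finally:
                metrics[name]["count"] += 1
                metrics[name]["total_s"] += time.perf_counter() - start
        return wrapper
    return decorator

@traced("db.query")
def run_query():
    return "rows"

run_query()
run_query()
print(metrics["db.query"]["count"], metrics["db.query"]["errors"])
```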

The next steps will be to further add observability capabilities to our core encoding workflow that actually performs the encoding job. With that, we should be able to further tune the turnaround time of single encoding jobs as well.

Implementing Safe Scaleup for Higher Overall Throughput

Our system is optimized to start encoding jobs as soon as possible once the customer has configured everything and executed the encoding start call. Some of our enterprise customers have as many as 200 encoding slots, and we observed that when they filled all 200 slots in a very short amount of time, it put not only our own systems (database, messaging, etc.) but also our cloud providers’ systems (where we request the required infrastructure) under high pressure. Those bursts of infrastructure requests often led to “slow down” responses from the cloud providers’ APIs and “capacity exceeded” errors when acquiring infrastructure, as they cannot provide that many instances in such a short amount of time.

To avoid these bursts in processing, we now use a better strategy to safely ramp up the usage of the available encoding slots, leading to more normalized usage of our system as well as the infrastructure of our cloud providers. This ultimately means we’re able to support greater overall capacity since we’re requesting resources in a way that allows our infrastructure partners to keep up. In the bottom half of the graph below, we see the customer submitted almost 600 jobs in a short period of time. On the upper half, we see how the system now safely scales the usage up to their limit of 150 concurrent encoding jobs.  
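One way to implement such a ramp-up is to grow the number of active slots geometrically toward the customer's limit instead of requesting everything at once. The parameters and policy below are illustrative, not Bitmovin's actual scheduler values:

```python
def ramp_schedule(pending_jobs, slot_limit, start=10, growth=2.0):
    """Return the successive active-slot counts of a gradual scale-up."""
    active, schedule = 0, []
    while active < min(pending_jobs, slot_limit):
        # First step starts small; later steps grow the pool geometrically.
        step = start if active == 0 else int(active * (growth - 1))
        active = min(active + max(step, 1), slot_limit, pending_jobs)
        schedule.append(active)
    return schedule

# ~600 submitted jobs against a 150-slot limit, as in the graph above.
print(ramp_schedule(pending_jobs=600, slot_limit=150))
```

Each step only requests a bounded batch of instances, so the cloud provider sees a smooth stream of requests instead of one burst, while the customer still reaches their full slot limit within a few steps.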

improving cloud scalability - Bitmovin

A Neverending Story 

Being a best-in-class SaaS product means being able to meet your customers’ changing needs instantly, while managing the reality that your infrastructure providers and your own systems have limitations for scaling.  Providing enterprise-level uptimes requires constant vigilance and adaptation to support even modest levels of growth, not to mention huge spikes in demand. It’s impossible to predict every scenario in advance, but by taking advantage of the opportunity to learn and build a more robust system, you can set a higher standard for stability that benefits everyone in the long run. 

The post Improving Cloud Scalability: Lessons Learned from Bitmovin’s Experience appeared first on Bitmovin.

]]>
Bitmovin Video Encoder V2 introduces Per-Title, AV1 support, 2 Pass & 3 Pass Encoding https://bitmovin.com/blog/bitmovin-video-encoding-v2/ Thu, 11 Oct 2018 13:15:28 +0000 http://bitmovin.com/?p=24529 Bitmovin Encoding v2.0 is a major release which adds some powerful new features including: Per-Title, AV1 support, 3-Pass encoding and more - Test it today!

The post Bitmovin Video Encoder V2 introduces Per-Title, AV1 support, 2 Pass & 3 Pass Encoding appeared first on Bitmovin.

Bitmovin Encoding Version 2

Bitmovin Encoder v2.0 is a major release which adds some powerful new features including: Per-Title, AV1 support, 3-Pass encoding and more

This latest release of our encoder comes with a variety of new features, but also introduces some major changes. Some of those changes bring new default values and behavior patterns with them. In this blog post, we take a more in-depth look at what is going to change for you if you are a user of Bitmovin Video Encoding. We’ll cover the new features and how you can benefit from them, show you some important changes to a few specific settings, and explain how to revert to the previous encoding behavior if required.
Jump right into the sections that are most relevant to you and your use cases:

New to using Bitmovin Encoding? It’s simple. Sign up for a trial today to see our tech in action or get in touch with our sales experts to discuss your specific requirements and use cases!

Feature overview – what’s new?

Per-Title Encoding out of the box

Per-Title Encoding now comes as an out-of-the-box feature, making this technology available to a wider user base. Whether you are operating a large-scale content delivery network or encoding videos for a limited audience, you will now be able to benefit from Per-Title Encoding without any extra integration or configuration effort. Note that Per-Title Encoding has additional pricing impacts that will affect your minutes balance.
If you would like to see how Per-Title Encoding can improve quality and reduce your bitrates, try our Per-Title Ladder Benchmark Tool and see the new ladder and a report on the improvements over your existing ladder.

AV1 support

At Bitmovin, we’ve been strong supporters of AV1 since the early stages. Following the AV1 bitstream freeze back in June, AV1 is now generally available in Encoder v2.0, providing unparalleled quality at low bitrates.

Enhanced error reporting

We have overhauled the error reporting process to provide enhanced error descriptions. You’ll be able to pinpoint what exactly went wrong in the process and act accordingly.

2-pass and 3-pass encoding

Two-pass encoding is now the default encoding mode, as it consistently provides a better ratio between quality and bitrate. Additionally, you can choose to use three-pass encoding, which adds another pass to the process, boosting quality even more. Of course, single-pass encoding is still possible. Note that two-pass and three-pass encoding have additional pricing impacts that will affect your minutes balance.

Improvements to input resilience

We have implemented a couple of changes to our Encoder to equip it with a greater level of input resilience. Whenever a frame is dropped or corrupted, our encoder simply duplicates the preceding frame or frames, allowing the encoding process to continue uninterrupted. While the process continues, a warning is shown as a status message.

Breaking changes that may impact your encoding workflows

Some of the changes to our new encoder will impact your existing encoding workflows. In this section, we’ll cover those key breaking changes and show you what they entail exactly. The list below explains how these changes reflect in our API, where to find the related section in our documentation and which exact properties are concerned.

Start Encoding (Encoding Mode): Default behavior for STANDARD changed

This change sets the new default encoding mode to two-pass encoding from single-pass encoding as the previous default. A new encoding mode SINGLE_PASS has been introduced to retain the old behavior.

The following changes impact the encoder’s input resilience:

Add Stream (Decoding Error Mode): Default set to DUPLICATE_FRAMES

With this setting, the encoding will not fail if a frame cannot be decoded; instead, other frames are duplicated to compensate. The encoding can continue and a warning is displayed as a status message.

Start Encoding (Trimming): ignoreDurationIfInputTooShort set to true per default

Previously, the encoder would fail with an error if the input was shorter than the defined trimming duration. With this setting set to true, the encoding continues in such instances and a warning message is displayed.

Start Encoding (Variable FPS): handleVariableInputFps set to true per default

Impacts the way input streams with variable FPS are being handled. With the new default, the encoder will process files with a dynamic framerate automatically and adjust (e.g. by dropping or duplicating frames) if a constant framerate is required for the output file. For input streams with a fixed framerate, this change does not have any impact.

Start Encoding (Tweaks): audioVideoSyncMode set to RESYNC_AT_START by default

By setting the synchronization mode to “RESYNC_AT_START” the encoder re-syncs the audio and the video stream, based on the values transported in the metadata. This is useful for workflows containing previously encoded content.

Migration quick guide: how to revert back to the old defaults

If you intend to use Bitmovin Encoder in the exact way that you used to before the update, you can do so by simply changing the settings from the new default values to the old values.
The table below shows you which properties you will need to change to revert back to the previous behavior.

| API property | Change to value | API call |
| --- | --- | --- |
| Start Encoding > encodingMode | SINGLE_PASS | Start Encoding in our API Reference |
| Start Encoding > trimming > ignoreDurationIfInputTooShort | FALSE | Start Encoding in our API Reference |
| Start Encoding > handleVariableInputFps | FALSE | Start Encoding in our API Reference |
| Start Encoding > tweaks > audioVideoSyncMode | STANDARD | Start Encoding in our API Reference |
| Add Stream > decodingErrorMode | FAIL_ON_ERROR | Add Stream in our API Reference |
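Expressed as request bodies, the revert boils down to the following sketch. Field placement follows the table; see the linked API reference for the complete calls:

```
// Start Encoding – revert to the pre-v2.0 defaults
{
  "encodingMode": "SINGLE_PASS",
  "handleVariableInputFps": false,
  "trimming": { "ignoreDurationIfInputTooShort": false },
  "tweaks": { "audioVideoSyncMode": "STANDARD" }
}

// Add Stream – revert to the pre-v2.0 default
{
  "decodingErrorMode": "FAIL_ON_ERROR"
}
```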

A.I. Encoding Uses Machine Learning to Speed Up Processing and Improve Quality https://bitmovin.com/blog/chunk-based-3-pass-video-encoding-uses-machine-learning-deliver-unrivalled-quality/ Wed, 04 Apr 2018 17:37:10 +0000 http://bitmovin.com/?p=23035 A.I. Encoding workflow running on a containerized “chunk-based” infrastructure with ML-based machine learning model delivers industries highest quality video encoder. At Bitmovin, we have been known to push the boundaries for exceptional quality in delivering video content. From our efforts as an early mover in the implementation and development of AV1 to our cloud-native solution...

The post A.I. Encoding Uses Machine Learning to Speed Up Processing and Improve Quality appeared first on Bitmovin.

Machine learning in Video Encoding

A.I. Encoding workflow running on a containerized “chunk-based” infrastructure with a machine learning model delivers the industry’s highest quality video encoder.

At Bitmovin, we have been known to push the boundaries for exceptional quality in delivering video content. From our efforts as an early mover in the implementation and development of AV1 to our cloud-native solution to video encoding, we have been working and researching all parts of the workflow in order to be able to provide great quality streaming, while still reducing bandwidth consumption and file sizes.
The A.I. Encoding is another major step in our video technology research and development efforts. With the introduction of machine learning, the encoder can make smart decisions about compression settings and visual parameters for each frame, speeding up processing and improving encoding efficiency. The encoder performs a detailed video analysis, and the machine learning algorithm improves over time, continuously optimizing the encoding parameters. First, let’s talk about the mechanics of A.I. in video encoding.

The 3 passes

AI Chunk-based 3-pass encoding workflow
During the first pass, the entire video file is scanned superficially, meaning that property information which does not require more in-depth analysis (e.g. motion predictions) is extracted and collected. The data gathered is then entered into an encoding engine, which uses artificial intelligence to produce optimized encoding parameters. Those settings are tuned to content information such as a broad estimate of content complexity, which is easily obtainable and provides an initial level of optimization. Thanks to the AI aspect of the algorithm, the system improves progressively as it obtains more and more information on which settings deliver high-quality results. During the encoding process, the result is checked against objective quality metrics and the results are fed back into the AI. As the AI’s database of encoding settings and accompanying results keeps growing, so does the quality of the matching encoding parameters and file attributes.
By the second pass, the encoding parameters for a chunk have been set, and the next step is to distribute the chunk to a specific processing instance based on factors such as complexity. The idea is to get precise data on each chunk to properly attribute resources based on the level of complexity. Following completion of the second pass, the results of both passes are then merged to obtain the necessary information for the encoder to achieve the best possible result.
The third pass constitutes the actual encoding process. Using a complex algorithm, the data gained from the analyses in the first two passes is used to make a variety of encoding decisions, eventually resulting in optimum quality at maximum bandwidth efficiency.
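The three passes above can be sketched as a toy loop. The complexity metric and the bitrate mapping here are invented for illustration; the real encoder derives its parameters from a machine-learning analysis:

```python
# Toy model of the chunk-based 3-pass flow described above.
def three_pass(chunks, total_bitrate):
    # Pass 1: superficial scan – cheap, rough complexity estimate per chunk
    complexity = [sum(c) / len(c) for c in chunks]
    # Pass 2: per-chunk analysis – allocate bitrate in proportion to complexity
    total = sum(complexity)
    budget = [total_bitrate * c / total for c in complexity]
    # Pass 3: the actual encode consumes the merged results of passes 1 and 2
    return [round(b) for b in budget]

# two simple chunks and one complex chunk sharing a 6000 kbps budget
print(three_pass([[1, 1], [1, 1], [4, 4]], 6000))  # → [1000, 1000, 4000]
```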

What exactly is a chunk?

The machine learning aspect is an essential part of the procedure, but the “chunk” part in our chunk-based 3-pass encoding routine is just as important. In most encoding solutions, “chunking” means breaking up the video content into segments purely based on time intervals. Following the conclusion of each chunk (e.g. 4 seconds of video), the encoding quality can be adjusted for the next segment. We’ve developed our own encoding logic which creates more coding-efficient chunks and therefore allows for more effective quality adjustments during the process. This results in drastic improvements in perceived quality at the same or even lower bitrates.
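As a sketch of the difference, fixed-interval chunking cuts purely by time, while content-aware chunking cuts where the content changes. The frame-difference scores and the threshold are invented for illustration:

```python
# Sketch: fixed-interval chunking vs. content-aware chunking. A large jump
# in the per-frame difference score suggests a scene cut, which is a
# coding-efficient place to start a new chunk.
def fixed_chunks(n_frames, interval):
    return list(range(0, n_frames, interval))

def content_aware_chunks(frame_diffs, threshold):
    cuts = [0]
    for i, diff in enumerate(frame_diffs[1:], start=1):
        if diff >= threshold:  # scene change, so start a new chunk here
            cuts.append(i)
    return cuts

diffs = [0, 1, 1, 9, 1, 1, 1, 8, 1, 1]
print(fixed_chunks(len(diffs), 4))     # → [0, 4, 8]  (cuts mid-scene)
print(content_aware_chunks(diffs, 5))  # → [0, 3, 7]  (cuts at scene changes)
```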

A tried and tested algorithm that also learns in the process

Our system grows smarter with each encoding sequence. The AI was trained initially using a large test library consisting of encodes and associated objective quality metrics. Based on the results stored in the database, the AI engine calculates the ideal encoding setting for each individual video, by matching it with similar content and the corresponding results from objective quality metrics testing.
Various objective quality metrics are used in the process. They all work somewhat differently and factor in other aspects, but they also share one common premise: these evaluations try to emulate the way the human eye perceives quality in order to achieve quantifiable results that directly correspond to the human experience. Although this may sound hard to believe to someone new to the subject, these algorithms have grown to become incredibly accurate over the past decade and are capable of delivering convincing analyses. The benefit lies in amassing effectively comparable data, which in turn translates to a highly efficient encoding process.
Another key advantage of applying machine learning to the encoding stems from the ability to adapt to broader scale changes in content technologies. As content in 4K or even 8K resolutions, HDR and wide color gamut becomes more common, the machine learning engine receives more and more input, which eventually allows it to adapt to these technologies. And presumably, as the engine potentially draws new input from every single encoding procedure, it will be able to do so way faster than any human-controlled testing of encoding settings could.

Substantially better quality at lower bitrates – a competitive edge for content providers

With Bitmovin’s introduction of our chunk-based 3-pass encoding scheme, we can confidently claim to outperform most other encoding providers when it comes to providing high quality video encoding at low bitrates. Our 3-pass encoding process delivers unparalleled results and performs well with very complex high resolution content as well as with highly compressed content at low bitrates.
Raising the bar on video quality is a key priority for the future of video content delivery. As bandwidth consumption keeps growing alongside consumer demands for high quality streaming content, the necessity for big leaps in encoding technology will soon become a pressing matter. With our portfolio of solutions, which targets all key points in the video delivery cycle – encoding, storage, CDN, player and analytics – we are perfectly equipped to rise to the coming challenges.
Talk to one of our experts today  and learn what chunk-based 3-pass encoding can do for your content delivery network!
 
Have you already heard of per-title Encoding? By encoding video at bitrates appropriate to the content of the video file, content providers can make significant bandwidth savings as well as quality improvements. Learn more about it here

Resources:

https://demo.bitmovin.com/public/firefox/av1/
https://bitmovin.com/encoding-service/
https://en.wikipedia.org/wiki/Video_quality#Objective_video_quality
https://bitmovin.com/video-player/
https://www.tomsguide.com/us/hdr-tv-explained,news-22227.html
 

WWDC17 – HEVC with HLS – Apple just announced a feature that we support out of the box https://bitmovin.com/blog/wwdc17-hevc-hls-apple-just-announced-feature-support-box/ Tue, 06 Jun 2017 18:37:08 +0000 https://bitmovin.com/?p=20447   This year Apple announced at their WWDC conference support for HEVC/H.265 with HLS for macOS High Sierra and iOS11 Every year at Apple’s annual Worldwide Developer Conference (WWDC) they present updates and new features that will be available to their products soon. This year Apple announced support for HEVC/H.265 for macOS High Sierra and...

The post WWDC17 – HEVC with HLS – Apple just announced a feature that we support out of the box appeared first on Bitmovin.

Did you know our video player guarantees playback quality on any screen through our modular architecture, including low-latency, configurable ABR and Stream Lab, the world’s first stream QoE testing service? Check out the Bitmovin Player to learn more.


This year Apple announced at their WWDC conference support for HEVC/H.265 with HLS for macOS High Sierra and iOS11

Every year at Apple’s annual Worldwide Developer Conference (WWDC), they present updates and new features that will soon be available in their products. This year Apple announced support for HEVC/H.265 for macOS High Sierra and iOS 11. According to Netflix’s experiments, HEVC/H.265 can reach up to 50% bitrate savings compared to AVC/H.264, which makes it possible to stream better quality to customers and saves storage and bandwidth costs for content providers.
We at Bitmovin are of course always especially interested in the video-related updates from Apple. Last year we were a first mover to support fMP4 in HLS after Apple announced it at WWDC16. This year we are even in a position where we already support HEVC/H.265 end-to-end in both our products, encoding and player.

Multi Codec Support with Bitmovin’s API

With the Bitmovin API you can encode content with different codecs like AVC/H.264, HEVC/H.265, VP9, and recently also AV1 using MPEG-DASH and HLS. This allows you to use the best codec for each platform when streaming content to your users. HEVC/H.265 and VP9 are more efficient than AVC/H.264, allowing you to deliver higher quality at the same bitrate, or to save costs by delivering similar quality with less bandwidth. VP9 is supported on Google Chrome, Firefox, and Android devices, which allows you to stream VP9 to about 70% of your users. For Safari, there was still the need to use AVC/H.264 until now. With Apple’s announcement we will soon see HEVC/H.265 streamed to macOS and iOS devices.
With our flexible Bitmovin API, encoding content in HEVC/H.265 and muxing it into fMP4 segments works out of the box, as we already do that today for HEVC MPEG-DASH content. The trick to make it available as an HLS asset is simply to reference the segments in the playlist files in the same way we do it today with fMP4 in HLS.
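For illustration, a minimal HLS media playlist for such HEVC fMP4 content could look like the following (file names are placeholders). fMP4 in HLS requires playlist version 7 and an EXT-X-MAP tag pointing at the initialization segment; in the master playlist, the variant would advertise an HEVC codec string such as CODECS="hvc1.1.6.L93.B0":

```
#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:4
#EXT-X-MEDIA-SEQUENCE:0
#EXT-X-MAP:URI="init.mp4"
#EXTINF:4.000,
segment_0.m4s
#EXTINF:4.000,
segment_1.m4s
#EXT-X-ENDLIST
```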

Bitmovin HTML5 and Native Players Already Support HLS with HEVC

Apple now supports HEVC video in fMP4 segments in HLS. While this works only on iOS 11 and macOS High Sierra, the reach can be extended using the Bitmovin Player, which supports HEVC in HLS out of the box in all browsers that support HEVC decoding. The Bitmovin Player iOS SDK is also ready for HEVC HLS on iOS 11.

 

Test Vectors

There are a few ways to deliver HEVC content to users:

  1. HEVC in HLS using MPEG-2 Transport Stream chunks, which Apple doesn’t support
  2. HEVC in HLS using fMP4 segments, which is what Apple announced on WWDC17 and our player supports
  3. HEVC in MPEG-DASH using fMP4 segments

All of these options can already be created using our Bitmovin API. For playback, it depends on the HEVC support in the browser. Obviously, Apple added this for Safari on macOS High Sierra and iOS 11, but also Edge on Windows 10 already supports it. HEVC can be streamed using HLS or MPEG-DASH to Edge with the Bitmovin Player.
We provide test vectors for the three above mentioned types for public testing:

Besides HEVC/H.265, we can also encode the same asset to AVC/H.264 and VP9, which we introduced earlier this year. With this configuration you can deliver the best codec to every device, improving quality and saving costs. VP9 is supported on multiple platforms including Google Chrome, Firefox, and Android devices. Google Chrome currently has a market share of 57% in North America and Firefox accounts for roughly 12% in that region, so with these two alone you could reach up to 70% of your users with VP9. For those users, VP9 could reduce bitrates by up to 50% and lower your CDN costs dramatically.
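As a back-of-the-envelope calculation using the shares quoted above (approximate, North America; Android devices add further VP9-capable reach):

```python
# Rough reach and savings estimate from the browser shares in the text.
chrome_share = 57        # % browser market share
firefox_share = 12       # % browser market share
vp9_reach = chrome_share + firefox_share       # ~70% of users can play VP9
bitrate_saving = 50                            # up to 50% vs. AVC/H.264
overall = vp9_reach * bitrate_saving // 100    # ~34% of total CDN traffic saved
print(vp9_reach, overall)  # → 69 34
```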
Here are some more test vectors including the VP9 codec:

We wish you happy testing and would love to get your feedback!

Popular video technology guides and articles:

VP9 Codec: MPEG-DASH VP9 for VoD and Live https://bitmovin.com/blog/mpeg-dash-vp9-vod-live/ Fri, 24 Mar 2017 12:45:26 +0000 https://bitmovin.com/?p=18853 VP9 is the next level in video compression and can help you to save up to 50% on your CDN costs, or significantly increase the quality of your streams. We are happy to announce the introduction of full VP9 support for our HTML5 video player and our video encoder, both in the cloud as well...

The post VP9 Codec: MPEG-DASH VP9 for VoD and Live appeared first on Bitmovin.


VP9 is the next level in video compression and can help you to save up to 50% on your CDN costs, or significantly increase the quality of your streams.

We are happy to announce the introduction of full VP9 support for our HTML5 video player and our video encoder, both in the cloud as well as for containerized deployments that can run on-premise or in your own cloud account with Docker and Kubernetes. VP9 has recently gained popularity as the royalty situation around HEVC, its main competitor, remains uncertain. Similar to HEVC, VP9 can perform up to 50% better as a compression format than H.264/AVC, especially for UHD or 4K resolutions. This results in higher quality video delivered to users, or helps you save bandwidth and thus reduce CDN costs by up to 50%!
VP9 is a royalty-free codec developed by Google as an alternative to commercial video formats. YouTube has been successfully using VP9 to deliver video content to its users for several years and claims to deliver the same quality at half the bandwidth used by H.264/AVC. This is why YouTube prefers to stream VP9 on browsers and devices that support it, delivering better quality with less bandwidth. When streaming UHD and 4K content, VP9 gets even more efficient. YouTube has chosen to deliver 4K resolutions only with VP9, thus preventing Safari users from consuming 4K content via YouTube.
This rise in popularity of VP9 is not only caused by the uncertain situation with the HEVC royalties, but also because of the ongoing development of AV1, a royalty-free video coding format developed by the Alliance for Open Media, which can basically be seen as a successor of VP9.
Looking at the range of supported browsers, VP9 is well ahead of HEVC. As of early 2017, VP9 is supported by roughly 75% of the browser market. This includes Google Chrome, Firefox, Opera, and also Microsoft Edge since summer 2016. On the other hand, HEVC is only supported in Microsoft Edge in cases where hardware decoding is available.
The Bitmovin encoder produces segmented VP9, which is perfectly suited for VoD streams as well as live streams. Furthermore our Live-to-VoD workflow fits perfectly with this format and allows you to generate VoD streams out of the live stream right after the stream has finished or even while the live stream is still running.

VoD Encoding for MPEG-DASH VP9

With the Bitmovin API you can create MPEG-DASH VP9 content for live as well as VoD use-cases. First we will demonstrate how to create an encoding job with MPEG-DASH VP9 output with our C# API Client. A full example can be found in our examples list in the GitHub repository.

Setup the Bitmovin API client with your API key

var bitmovin = new BitmovinApi(API_KEY);

Create an output configuration

We are using a Google Cloud Storage bucket as output location for the MPEG-DASH VP9 content. However, if you prefer you could also use an AWS S3, Azure Blob, Scality, FTP, SFTP, or any S3 compatible storage instead.

var output = bitmovin.Output.Gcs.Create(new GcsOutput
{
    Name = "GCS Output",
    AccessKey = GCS_ACCESS_KEY,
    SecretKey = GCS_SECRET_KEY,
    BucketName = GCS_BUCKET_NAME
});

Create an encoding and define the cloud region and version to be used

When you create an encoding you can choose the cloud region where the encoder should run. Ideally, the region matches the cloud region in which your bucket resides so you save egress traffic. Besides the cloud region you can also pin specific encoder versions or use our STABLE branch that always points to the latest stable encoder version.

var encoding = bitmovin.Encoding.Encoding.Create(new Encoding.Encoding
{
    Name = "VP9 VoD Encoding C#",
    CloudRegion = EncodingCloudRegion.GOOGLE_EUROPE_WEST_1,
    EncoderVersion = "STABLE"
});

Create an input source

We need to create a source for your input file. If you have stored your input files on an HTTP server, you can just configure this server as the source of your inputs with the code below. Please note that many other input sources such as AWS S3, Google Cloud Storage, Azure Blob, Aspera, (S)FTP, Scality, and any S3 compatible storage are also supported.

var httpHost = bitmovin.Input.Http.Create(new HttpInput
{
    Name = "HTTP Input",
    Host = INPUT_HTTP_HOST
});

Create video codec configurations and add it to the encoding

A codec configuration contains the encoding-related configuration for a video rendition or an audio rendition. You need to link the codec configuration to a stream of your encoding, which connects an input stream with the codec configuration. For example, linking your input video stream to an H.264 1080p codec configuration will encode this video stream to H.264 1080p output. The following example uses the AUTO selection mode and position “0”, and thus links this configuration to the first video stream of the input file. Besides VP9 we also support H.264/AVC and H.265/HEVC as codecs, and resolutions of 8K and higher.

var videoConfig1080p = bitmovin.Codec.VP9.Create(new VP9VideoConfiguration
{
    Name = "VP9_Profile_1080p",
    Width = 1920,
    Height = 1080,
    Bitrate = 4800000,
    Rate = 30.0f
});
var videoStream1080p = bitmovin.Encoding.Encoding.Stream.Create(encoding.Id,
                CreateStream(httpHost, INPUT_HTTP_PATH, 0, videoConfig1080p, SelectionMode.VIDEO_RELATIVE));

Similar to the code above you can add more video codec configurations and then add them to streams of your encoding to generate alternative renditions (e.g., 720p, 360p, etc.). Additionally to the video rendition you may also want to add an audio track. For audio it works in the same way as for video, as you will see in the example below:

var audioConfig = bitmovin.Codec.Aac.Create(new AACAudioConfiguration
{
    Name = "AAC_Profile_128k",
    Bitrate = 128000,
    Rate = 48000
});
var audioStream = bitmovin.Encoding.Encoding.Stream.Create(encoding.Id,
                CreateStream(httpHost, INPUT_HTTP_PATH, 0, audioConfig, SelectionMode.AUDIO_RELATIVE));

Mux the encoded data for MPEG-DASH

In order to create MPEG-DASH, the VP9 encoded data needs to be packaged accordingly. In the following lines of code we will define segmented WebM for MPEG-DASH. Here you also define how long a single segment should be. Note that we also define where the segments should be stored in your output bucket. You have full control over the output location of the video and the audio streams. If you added multiple video renditions you also need to create a segmented WebM muxing for each rendition.
First we create the segmented WebM muxing for the VP9 video rendition and an fMP4 muxing for the AAC audio:

var videoWebmMuxing1080p = bitmovin.Encoding.Encoding.SegmentedWebm.Create(encoding.Id,
    CreateSegmentedWebmMuxing(videoStream1080p, output, OUTPUT_PATH + "video/1080p", segmentLength));
var audioFMP4Muxing = bitmovin.Encoding.Encoding.Fmp4.Create(encoding.Id,
    CreateFMP4Muxing(audioStream, output, OUTPUT_PATH + "audio/128kbps", segmentLength));

Start the VP9 Encoding

Finally we can start the encoding job to encode your source asset to MPEG-DASH VP9.

bitmovin.Encoding.Encoding.Start(encoding.Id);

With the following code snippet you can wait for the encoding job to be finished:

var encodingTask = bitmovin.Encoding.Encoding.RetrieveStatus(encoding.Id);
while (encodingTask.Status != Status.ERROR && encodingTask.Status != Status.FINISHED)
{
    // Wait for the encoding to finish
    encodingTask = bitmovin.Encoding.Encoding.RetrieveStatus(encoding.Id);
    Thread.Sleep(2500);
}

Besides that you can also use webhooks to get notified as soon as the encoding job has finished.

Create the MPEG-DASH Manifest

After the encoding is finished we also need an MPEG-DASH manifest in order to be able to playback the content with MPEG-DASH players. With the Bitmovin API you have full control over creating manifests, e.g., create multiple manifests with a different set of qualities for targeting desktop or mobile, etc. When creating the MPEG-DASH manifest you also specify the output and the location and filename of the manifest:

var manifestOutput = new Encoding.Output
{
    OutputPath = OUTPUT_PATH,
    OutputId = output.Id,
    Acl = new List<Acl> {new Acl {Permission = Permission.PUBLIC_READ}}
};
var manifestDash = bitmovin.Manifest.Dash.Create(new Dash
{
	Name = "MPEG-DASH VP9 Manifest",
	ManifestName = "stream.mpd",
	Outputs = new List<Encoding.Output> { manifestOutput }
});

Define the default period and video and audio adaptation sets. In the audio adaptation set you can define the language of the audio track.

var period = bitmovin.Manifest.Dash.Period.Create(manifestDash.Id, new Period());
var videoAdaptationSet =
	bitmovin.Manifest.Dash.VideoAdaptationSet.Create(manifestDash.Id, period.Id, new VideoAdaptationSet());
var audioAdaptationSet = bitmovin.Manifest.Dash.AudioAdaptationSet.Create(manifestDash.Id, period.Id,
	new AudioAdaptationSet { Lang = "en" });

Add the created muxings as representations to the adaptation sets. You also need to define the relative path to the segments based on the manifest output location:

bitmovin.Manifest.Dash.Webm.Create(manifestDash.Id, period.Id, videoAdaptationSet.Id,
    new Manifest.Webm
    {
        Type = SegmentScheme.TEMPLATE,
        EncodingId = encoding.Id,
        MuxingId = videoWebmMuxing1080p.Id,
        SegmentPath = "video/1080p"
    });
bitmovin.Manifest.Dash.Fmp4.Create(manifestDash.Id, period.Id, audioAdaptationSet.Id,
    new Manifest.Fmp4
    {
        Type = SegmentScheme.TEMPLATE,
        EncodingId = encoding.Id,
        MuxingId = audioFMP4Muxing.Id,
        SegmentPath = "audio/128kbps"
    });

Now that the manifest is completely configured, we can start the manifest creation:

bitmovin.Manifest.Dash.Start(manifestDash.Id);

Equally, as for the encoding job, we also need to wait for a successful manifest creation:

var status = bitmovin.Manifest.Dash.RetrieveStatus(manifestDash.Id);
while (status.Status == Status.RUNNING)
{
    status = bitmovin.Manifest.Dash.RetrieveStatus(manifestDash.Id);
    Thread.Sleep(2500);
}

Again, you can also use webhooks here to get notified as soon as the manifest creation is finished. After that, we have an MPEG-DASH manifest for the VP9 encoded content available and can test the playback in MPEG-DASH compatible players like the Bitmovin player, Shaka player, or Dash.js.

Live Encoding for MPEG-DASH VP9

Starting a live encoding for MPEG-DASH VP9 is not much different from starting a VoD encoding. We also have a full example available in our GitHub repository.
Obviously, the input will not be based on a file but rather be an RTMP source. The following shows how to grab the default RTMP input that is available in your account.

var rtmpInput = bitmovin.Input.Rtmp.RetrieveList(0, 100)[0];

When creating the streams in the different qualities, use the rtmpInput instead of the HTTP input from the example above.
The second difference is related to the manifest generation which must be done before the live encoder is started. Just create the MPEG-DASH manifest as in the example above, and pass it in the start encoding call:

bitmovin.Encoding.Encoding.StartLive(encoding.Id, new StartLiveEncodingRequest
{
    StreamKey = "YourStreamKey",
    DashManifests = new List<LiveDashManifest>
    {
        new LiveDashManifest
        {
            ManifestId = manifestDash.Id,
            Timeshift = 300,
            LiveEdgeOffset = 180
        }
    }
});

That is the whole difference when starting a live encoding compared to a VoD encoding.

Playback of MPEG-DASH VP9 Content

There is no difference between the playback of VP9 encoded MPEG-DASH streams and H.264 encoded MPEG-DASH streams. In both cases you set the MPEG-DASH manifest as the source for your player and there is no need to define the type of codec that is used. Below you can see an example of our player with VP9 encoded content through our Bitmovin API.
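As a sketch (player key, container id, and manifest URL are placeholders), loading the VP9 output in the Bitmovin Player looks like any other MPEG-DASH source:

```
var player = bitmovin.player('player-container');
player.setup({
  key: 'YOUR-PLAYER-KEY',
  source: {
    dash: 'https://example.com/output/stream.mpd'
  }
});
```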


Video technology guides and articles

Offline Playback with DRM – Requested Features from the Netflix User Base https://bitmovin.com/blog/drm-enabled-offline-playback/ Wed, 18 Jan 2017 14:22:51 +0000 http://bitmovin.com/?p=16653 The following tutorial will show you how to set up offline playback using Bitmovin and ExpressPlay DRM. On 30th November 2016 Netflix announced the support for offline playback, which has been one of the most requested features from the Netflix user base. With this new feature users are now able, not only to playback content...

The post Offline Playback with DRM – Requested Features from the Netflix User Base appeared first on Bitmovin.

]]>
Offline playback using Bitmovin and ExpressPlay

The following tutorial will show you how to set up offline playback using Bitmovin and ExpressPlay DRM.

On 30 November 2016, Netflix announced support for offline playback, which had been one of the most requested features from the Netflix user base. With this new feature, users are able to play back content not only while they are online, but also while they do not have an active data connection. For selected movies and shows, a “download” button is available in the app that downloads the content to the user’s device and makes it available to watch at a later time without the need for an Internet connection.
After this announcement, many customers approached us wanting to know how to set up offline playback with the Bitmovin services. We are happy to say that we have supported offline playback, together with our partner ExpressPlay, for some time. However, receiving so many requests motivated us to write this blog post explaining in detail how offline playback can be achieved with the services of Bitmovin and ExpressPlay.
In general, the following steps are required to package and play back protected content in an offline playback scenario:

  • Encode and encrypt your content for offline playback, e.g., one or multiple fMP4 files that can be downloaded
  • Request a DRM token for a persistent license from the ExpressPlay service
  • Pass the token to a DRM client that will retrieve the corresponding persistent license which will get stored within the device
  • Download and play the protected content

The following tutorial describes how to implement the workflow outlined above with the services from Bitmovin and ExpressPlay. First, we will show how to generate multiple encrypted progressive MP4 files using Bitmovin’s cloud encoding product. Then we will use the ExpressPlayer app provided by Intertrust as a client, with Marlin Broadband (MBB) licenses for offline playback.

Encode and encrypt your content for offline playback

It is a fun coincidence that we announced our new Bitmovin API on exactly the same day that Netflix announced their support for offline playback. The new Bitmovin API already has full support for encoding and packaging content for offline playback. In the following tutorial I will show you how to do that.
For offline playback you will typically package the content as fMP4 files in different qualities. Just like Netflix, you can then allow your customers to choose between high quality and fast download. The easiest, and definitely the recommended, way to interact with the Bitmovin API is through one of our API clients, which are available in multiple programming languages. Currently, we offer API clients for PHP, Python, Go, .NET, and JS. More are currently being implemented and will be available soon.
For this tutorial we will use the Bitmovin PHP API client, which already has a neat example of how to create content for offline playback:

Initialize the Bitmovin API Client

$client = new BitmovinClient('INSERT YOUR API KEY HERE');

Create an input configuration

For the sake of simplicity we are using an HTTP(S) input, although many other input sources such as AWS S3, Google Cloud Storage, Microsoft Azure, Aspera, and (S)FTP are also supported.

$videoInputPath = 'INSERT YOUR HTTP VIDEO INPUT PATH HERE';
$input = new HttpInput($videoInputPath);

Create an output configuration

We are using a Google Cloud Storage bucket to directly transfer the encoded fMP4 files. This reduces turnaround times. You could also use other output options like S3, Azure, or (S)FTP. Another option is to store the encoded files on your Bitmovin Cloud Storage, from where you can transfer them at a later time.

$gcs_accessKey = 'INSERT YOUR GCS OUTPUT ACCESS KEY HERE';
$gcs_secretKey = 'INSERT YOUR GCS OUTPUT SECRET KEY HERE';
$gcs_bucketName = 'INSERT YOUR GCS OUTPUT BUCKET NAME HERE';
$gcs_prefix = 'path/to/your/output/destination/';
$output = new GcsOutput($gcs_accessKey, $gcs_secretKey, $gcs_bucketName, $gcs_prefix);

Create an encoding profile configuration

An encoding profile configuration contains all the encoding-related configuration for video/audio renditions as well as the encoding environment itself. Choose the region and cloud provider where the encoding should take place. Of course, it is optimal if it is the same cloud and region where you store your output 😉

$encodingProfile = new EncodingProfileConfig();
$encodingProfile->name = 'MP4-Muxing-Example';
$encodingProfile->cloudRegion = CloudRegion::GOOGLE_EUROPE_WEST_1;

Add video stream configurations to the encoding profile

In this example we create a full HD H.264 video stream. As mentioned earlier, you can also make other video qualities (e.g., 720p, 480p) available for your customers to download.

$videoStreamConfig_1080 = new H264VideoStreamConfig();
$videoStreamConfig_1080->input = $input;
$videoStreamConfig_1080->width = 1920;
$videoStreamConfig_1080->height = 1080;
$videoStreamConfig_1080->bitrate = 4800000;
$encodingProfile->videoStreamConfigs[] = $videoStreamConfig_1080;

Add an audio stream configuration to the encoding profile

$audioConfig = new AudioStreamConfig();
$audioConfig->input = $input;
$audioConfig->bitrate = 128000;
$audioConfig->name = 'English';
$audioConfig->lang = 'en';
$audioConfig->position = 1;
$encodingProfile->audioStreamConfigs[] = $audioConfig;

Define the Progressive MP4 Output Format

As we want to create single MP4 files, we choose the ProgressiveMp4OutputFormat and define the MP4 filename that should be used to store the file on your cloud storage. Additionally, we define a ClearKeyDrm configuration with a key and kid in hex format to encrypt the MP4 files. You can choose your own key and kid values; however, please use the same values when you generate the Marlin MBB token.

$clearkey_key = 'CLEARKEY KEY';
$clearkey_kid = 'CLEARKEY KID';
$mp4Muxing1080 = new ProgressiveMp4OutputFormat();
$mp4Muxing1080->fileName = "1080p_4800kbps.mp4";
$mp4Muxing1080->streamConfigs = array($videoStreamConfig_1080, $audioConfig);
$mp4Muxing1080->clearKey = new ClearKeyDrm($clearkey_key, $clearkey_kid);
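The key and kid values above must each be 16 random bytes expressed as 32 hexadecimal characters. As a minimal sketch (this helper is not part of any Bitmovin API client), suitable values can be generated like this:

```python
import secrets

def generate_clearkey_pair():
    """Generate a random ClearKey key and kid, each 16 bytes hex-encoded (32 characters)."""
    key = secrets.token_hex(16)  # 16 random bytes -> 32 hexadecimal characters
    kid = secrets.token_hex(16)
    return key, kid

key, kid = generate_clearkey_pair()
```

Keep the generated values somewhere safe, since the same key and kid are needed again when requesting the Marlin MBB token.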

Create Encoding Job and start it

The JobConfig acts as a container for all of the configurations above and is passed to the BitmovinClient, which starts the encoding job and waits until it is finished.

$jobConfig = new JobConfig();
$jobConfig->output = $output;
$jobConfig->encodingProfile = $encodingProfile;
$jobConfig->outputFormat[] = $mp4Muxing1080;
$client->runJobAndWaitForCompletion($jobConfig);

After the encoding job has finished, you will have one or more encrypted MP4 files in your cloud storage. Next, we will see how to create persistent Marlin tokens that can be used for offline playback.

Request an MBB token for persistent licenses

The ExpressPlay Multi-DRM service can be used to issue persistent licenses for Marlin, PlayReady, Widevine Modular, and FairPlay. For this example we will describe how to request tokens and persistent licenses using Marlin DRM. The persistent license is retrieved using an MBB token, which is requested via the REST API provided by the ExpressPlay service.
The MBB token is then passed down to the client, which uses it to retrieve the actual license. The license may include all the business rules imposed by the service provider, such as rental period, output control, and more. It is stored permanently and will be used by the client to access the protected content as long as the rules expressed within it are met.
Below is an example request for an MBB token:

https://bb-gen.test.expressplay.com/hms/bb/token?customerAuthenticator=<YOUR_CUSTOMER_AUTHENTICATOR_CODE>&contentId=urn:marlin:kid:67895432987623454756654729382341&contentKey=43210987123478904321098712340987&rightsType=BuyToOwn&actionTokenType=1

The following is a short description of the attributes used. More details and options can be found in the ExpressPlay API reference:

  • customerAuthenticator: Identifies the customer using the API. Can be found at admin.expressplay.com
  • contentId: Use the kid that you used for the ClearKeyDrm configuration to derive the contentId, which is ‘urn:marlin:kid:’ followed by the kid, as in the example request above
  • contentKey: Use the same value as you used for the ClearKeyDrm key value.
  • rightsType: Specifies the kind of rights. Must be BuyToOwn or Rental.
  • actionTokenType: Must be 1. Signifies license action token.

With the above request you can download the token and store it next to your encrypted media files. In the next step we will see how the content can be played.
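To illustrate how the request above is assembled, here is a small Python sketch. The endpoint and parameter names are taken from the example request; the helper function itself is hypothetical:

```python
from urllib.parse import urlencode

def build_mbb_token_url(customer_authenticator, kid_hex, key_hex,
                        rights_type="BuyToOwn"):
    """Assemble an ExpressPlay MBB token request URL for a persistent license."""
    params = {
        "customerAuthenticator": customer_authenticator,
        # contentId is 'urn:marlin:kid:' followed by the ClearKey kid
        "contentId": "urn:marlin:kid:" + kid_hex,
        # contentKey is the same value as the ClearKey key
        "contentKey": key_hex,
        "rightsType": rights_type,  # BuyToOwn or Rental
        "actionTokenType": 1,       # 1 = license action token
    }
    return ("https://bb-gen.test.expressplay.com/hms/bb/token?"
            + urlencode(params, safe=":"))

url = build_mbb_token_url("YOUR_CUSTOMER_AUTHENTICATOR",
                          "67895432987623454756654729382341",
                          "43210987123478904321098712340987")
```

Fetching this URL returns the MBB token, which you can store next to your encrypted media files.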

Playback content

Currently, playback of downloaded content is only possible using native players, not browser-based players. ExpressPlay provides a client SDK that is a robust implementation of Marlin DRM, thus meeting all studio requirements. The SDK is used to develop iOS, Android, Mac, and Windows applications and is used today by major service providers around the globe. The ExpressPlayer app for Android and iOS has been developed using the ExpressPlay SDK and is an excellent tool to verify that the entire workflow has been deployed correctly.
Offline Playback with Expressplay
Simply start the app and select “Custom Input”. There you can enter the URL of your generated MP4 file and the URL of your generated token. Click “Process Media” to start playback.
With the ExpressPlay SDK you can develop your own Android or iOS app to play back the DRM-protected content on your device. ExpressPlay offers a comprehensive tutorial that guides you through creating a barebones Android ExpressPlay-enabled media app that plays back content with a persistent license retrieved by a Marlin MBB token. Similar to the tutorial for Android, ExpressPlay also offers a tutorial for iOS.

What’s next?

Besides the presented use case with Marlin DRM, the ExpressPlay service also supports persistent licenses for Widevine, PlayReady, and FairPlay, which can be used to enable offline playback of downloaded protected content.
To get started providing offline playback for your customers, sign up for a free Bitmovin API key and an account with ExpressPlay today and follow the steps in this example.
DRM Basics with Irdeto & Bitmovin – Watch the Webinar
 

The post Offline Playback with DRM – Requested Features from the Netflix User Base appeared first on Bitmovin.

]]>
Video Encoder: Why You Need a High Speed Encoder for your Video https://bitmovin.com/blog/speed-encoder-comparison/ Thu, 29 Sep 2016 13:03:11 +0000 http://bitmovin.com/?p=10929

The post Video Encoder: Why You Need a High Speed Encoder for your Video appeared first on Bitmovin.

]]>

Encoding speed test Bitmovin Vs Competition
Having the fastest cloud encoder on the market is a great USP for us, but how does increasing your video encoding speed benefit your business? Here are three great reasons for you to care about the speed of your encoder.

Last month we ran some fairly comprehensive tests comparing our cloud encoding service against the competition. The results were great for us, showing that both our standard cloud encoding service and our enterprise system are considerably faster than anything else on the market. It is very important to Bitmovin that we maintain our position at the top of that list, for a variety of reasons. For one, it pushes our developers to constantly optimize and improve every aspect of our encoding system; but there are many more reasons, and most of them relate to our customers and your businesses.

Below are three common examples of where a faster encoder can give you the commercial advantage.

Video Encoding Speed and VoD Libraries

The VoD and OTT industry is expanding quickly. New portals are popping up all over the internet; many of them are start-ups, some are new brands, and others are satellites of existing brands. As well as new sites publishing new libraries, established portals regularly add new content, and sometimes entire new categories, to their existing libraries as they attempt to broaden their audience base and keep their current audience tuned in.
Encoding an entire library into MPEG-DASH and HLS
In all of the examples listed above, one of the major tasks is encoding the new library into the required adaptive profiles. In many cases we are talking about thousands of videos, and the encoding process can take weeks. Using a faster encoder is a very easy way to dramatically reduce your time to market, from the moment you receive your new library to the moment it is all available online for your users.
The second way to reduce encoding time is to spin up extra encoding instances, that is, to run multiple encodings simultaneously in the cloud. This gives you enormous potential to increase your encoding output.
Bitmovin has a list of examples where companies have come to us with 8-week encoding estimates and we have delivered in 2 weeks or even faster. Read a VoD customer use case.

Winning the “First to Air” Advantage in the Broadcast Industry

A recent study published on TechCrunch shows that 62% of Americans now get their news from social media. This puts social media marketing at the top of the list for news publishers, which means that social sharing is of paramount importance.

So the question that broadcasters are asking is: “How do we maximize social shares of our news content?”

There are obviously a few factors to consider, but one of the major ones is being the first to publish that particular piece of news. Video encoding speed plays a vital role in this delivery chain.
First published advantage using a high speed encoder
The graphic on this page illustrates just one situation where the encoder can give you the competitive advantage. Let’s imagine that several news broadcasters are covering a live event; perhaps Tiger Woods is on the final day and final hole of the US Open and is one shot off the lead. (We’ll forget about the exclusivity of the broadcast rights for this example.) He hits his second shot towards the green… it’s close… it drops! Tiger Woods wins the US Open with a spectacular eagle on the 18th hole! At that second, every news agency at the scene will take a snippet from the live stream, fire it to their encoder, and publish a video of that shot through their social networks. The first news agency to publish it will win the lion’s share of the social engagement, which means the lion’s share of ad revenue. Every other broadcaster will be left fighting over the scraps.

User Generated Video in Social Platforms

User-generated video is another fast-growing trend that we at Bitmovin are watching carefully. Our encoding service is used by many platforms that provide this, both as a social service and for internal use. Social media video is a fast-growing and competitive industry, where end users have a lot of choice, not only between brands but between different experiences and sharing models. This makes user experience even more important, not only when viewing video content but also when users upload and create their content.
The advantage of using a faster encoding service lies in the delay between when a user uploads their video file and when they can view it in adaptive format and start sharing it. Several factors affect this process, and some of them are out of your control, such as the user’s internet connection or the size of the file, but as they say: “Control the controllables”. Video encoding speed is something that you can control, and it should be treated as a priority for your social video platform.

Download the Encoding Speed Comparison
To learn more about the Bitmovin Cloud Encoding Service you can sign up for a free account, or contact our solutions team for a personal demonstration.

The post Video Encoder: Why You Need a High Speed Encoder for your Video appeared first on Bitmovin.

]]>
API Update – Closed Captioning, Video with Multiple Audio Tracks and More https://bitmovin.com/blog/closed-captioning-video-multiple-audio-tracks/ Tue, 29 Sep 2015 10:08:26 +0000 http://bitmovin.com/?p=7467

The post API Update – Closed Captioning, Video with Multiple Audio Tracks and More appeared first on Bitmovin.

]]>
Closed Captioning

Closed captioning is the process of adding text to video files, allowing the user to read along with the audio. This is usually employed to assist hearing-impaired viewers, but it also has other applications. Closed captioning is an important part of delivering online video, and Bitmovin’s API is fully capable of managing your CC needs. Contact us for more information.
After introducing our new developer section for the Bitmovin encoding API, we have received a lot of great feedback from our customers. Integrating our encoding service with their systems works flawlessly using one of our API clients, available in Java, JavaScript, Ruby, Python, PHP, and Node.js. Each client is available open source on GitHub.

Bitmovin Encoding Service API – New Features with Examples

In the last couple of weeks we have enabled some really great features via our API. Here is a list with links to full examples in our PHP API client:

Besides the examples in our PHP API client, there are also examples available for our other API clients. They can be used as a skeleton to test and integrate these features into your systems easily. Let us know how you like our new features!

Try the Bitmovin Encoding Service for free

If you want to encode MPEG-DASH and HLS content with these new features, you can use our Bitmovin encoding service, which offers a free plan with 2.5GB of encoding output per month. That’s great for testing and playing around with these new features!

The post API Update – Closed Captioning, Video with Multiple Audio Tracks and More appeared first on Bitmovin.

]]>
HLS Encryption through Bitmovins Cloud Video Encoding Service https://bitmovin.com/blog/hls-encryption/ Thu, 20 Aug 2015 13:08:30 +0000 http://bitmovin.com/?p=7651

The post HLS Encryption through Bitmovins Cloud Video Encoding Service appeared first on Bitmovin.

]]>
HLS Encryption

Bitmovin now supports HLS encryption with AES-128 and SAMPLE-AES. Bitmovin already supports DRM for MPEG-DASH with Widevine Modular and Microsoft PlayReady, so the next logical step was to support encryption for HLS with AES-128 and SAMPLE-AES as well.

How to Encrypt your HLS Content

The easiest way to encrypt your HLS content using AES-128 or SAMPLE-AES is to start with one of our API clients. Currently, encryption is only supported through the API, and examples are only implemented for the PHP and Python API clients. We will add comprehensive examples for all other API clients in the next few days, so stay tuned.

API Client Examples

Let’s look at an example using the PHP API client. The important part here is to use the HLSEncryptionConfig object. For testing, you can use the data provided in our example below:

$hlsEncryptionConfig = new HLSEncryptionConfig();
$hlsEncryptionConfig->method = HLSEncryptionMethods::SAMPLE_AES;
$hlsEncryptionConfig->key = 'cab5b529ae28d5cc5e3e7bc3fd4a544d';
$hlsEncryptionConfig->iv = '08eecef4b026deec395234d94218273d';
$hlsEncryptionConfig->uri = 'https://your.license.server/getlicense';
$jobConfig = new JobConfig();
...
$jobConfig->hlsEncryptionConfig = $hlsEncryptionConfig;
$job = Job::create($jobConfig);

The parameters of the HLSEncryptionConfig have the following meaning:

  • method: You can either encrypt your content using AES_128 or SAMPLE_AES
  • key: You need to provide a key that will be used to encrypt the content (16 bytes; 32 hexadecimal characters).
  • iv: The initialization vector is optional. If it is not provided, we will generate one for you (16 bytes; 32 hexadecimal characters).
  • uri: If provided, this URI will be placed in the M3U8 playlist file to retrieve the decryption key for playout. Otherwise, a key file will be generated together with the content and referenced from the M3U8 playlist file.
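Since both key and iv must be 16 bytes given as 32 hexadecimal characters, it can be worth validating them before submitting a job. A minimal, hypothetical check (not part of any Bitmovin API client) could look like this:

```python
def is_valid_hls_secret(value):
    """Return True if value is 16 bytes encoded as 32 hexadecimal characters."""
    if len(value) != 32:
        return False
    try:
        bytes.fromhex(value)  # raises ValueError on non-hex input
        return True
    except ValueError:
        return False

# The key and IV from the example above pass the check:
assert is_valid_hls_secret('cab5b529ae28d5cc5e3e7bc3fd4a544d')
assert is_valid_hls_secret('08eecef4b026deec395234d94218273d')
```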

HLS and MPEG-DASH Encryption

It is also possible to create encrypted HLS content and DRM-protected MPEG-DASH content for Widevine and PlayReady within a single job. Examples are also available for the PHP and Python API clients:

The important parts are the HLSEncryptionConfig and CombinedWidevinePlayreadyDRMConfig objects. For testing, you can use the data provided in our example below:

$combinedWidevinePlayreadyDRMConfig = new CombinedWidevinePlayreadyDRMConfig();
$combinedWidevinePlayreadyDRMConfig->pssh = 'CAESEOtnarvLNF6Wu89hZjDxo9oaDXdpZGV2aW5lX3Rlc3QiEGZrajNsamFTZGZhbGtyM2oqAkhEMgA=';
$combinedWidevinePlayreadyDRMConfig->key = '100b6c20940f779a4589152b57d2dacb';
$combinedWidevinePlayreadyDRMConfig->kid = 'eb676abbcb345e96bbcf616630f1a3da';
$combinedWidevinePlayreadyDRMConfig->laUrl = 'http://playready.directtaps.net/pr/svc/rightsmanager.asmx?PlayRight=1&ContentKey=EAtsIJQPd5pFiRUrV9Layw==';
$combinedWidevinePlayreadyDRMConfig->method = DRMEncryptionMethods::MPEG_CENC;
$hlsEncryptionConfig = new HLSEncryptionConfig();
$hlsEncryptionConfig->method = HLSEncryptionMethods::SAMPLE_AES;
$hlsEncryptionConfig->key = 'cab5b529ae28d5cc5e3e7bc3fd4a544d';
$hlsEncryptionConfig->iv = '08eecef4b026deec395234d94218273d';
$hlsEncryptionConfig->uri = 'https://your.license.server/getlicense';
$jobConfig = new JobConfig();
...
$jobConfig->drmConfig = $combinedWidevinePlayreadyDRMConfig;
$jobConfig->hlsEncryptionConfig = $hlsEncryptionConfig;
$job = Job::create($jobConfig);

Playback your HLS Encrypted Content

To test your content, go to our bitdash HLS demo page and paste the link to the M3U8 manifest file from your encoding job (currently only AES-128 is supported; SAMPLE-AES is a work in progress on the client side). Of course, you can also test the content on iOS devices by simply pasting the M3U8 manifest URL into the address bar of the Safari browser.
We are happy to help you with HLS encryption for your content. Just contact our support team.

The post HLS Encryption through Bitmovins Cloud Video Encoding Service appeared first on Bitmovin.

]]>