Gernot Zwantschko – Bitmovin (https://bitmovin.com)

What is Per-Title Encoding? How to Efficiently Compress Video
https://bitmovin.com/blog/what-is-per-title-encoding/ (15 Nov 2020)

What is per-title encoding? By encoding video at bitrates appropriate to the content of the video file, content providers can make significant bandwidth savings as well as quality improvements.

Introduction

Per-Title Encoding is not a new concept. In fact, you can find research online that dates back several years, including the 2012 presentation Choosing the Segment Length for Adaptive Bitrate Streaming and the 2011 paper Dynamic Adaptive Streaming over HTTP Dataset, both from our own co-founders. Most of the early research concluded that Per-Title Encoding was effective in test environments but not yet suitable for commercial application: unlike a fixed bitrate ladder, it requires an individual analysis step for every video, because every piece of content is different.
In 2015 Netflix managed to mitigate the overhead of the extra analysis step and implement Per-Title Encoding at scale. As a result, they increased the quality of experience and achieved significant bandwidth savings. These optimizations are achieved by increasing or decreasing the bitrate of each bitrate ladder entry based on a complexity measurement for each input file. It sounds simple enough, but believe me, there is a good reason that Netflix took years to make Per-Title Encoding a viable part of their video delivery workflow.
But to fully understand the complexity of the challenge, it’s best to start at the beginning:

What is Per-Title Encoding?

Put simply, it’s a form of encoding optimization that customizes the bitrate ladder of each video, based on the complexity of the video file itself. The ultimate goal is to select a bitrate that provides enough room for the codec to encapsulate enough information to present a perfect viewing experience, but no more. Another way of thinking about it is that the optimized adaptive package has been reduced down to just the information that viewers can actually enjoy. Anything beyond the human eye’s ability to perceive is stripped out. (Test your content and see a comparison of your existing bitrate ladder against the optimized Per-Title Ladder)

[Figure: complexity comparison – complex scenes with motion and textures require higher bitrates than scenes with less movement.]

This “optimal” bitrate varies for each type of content, and even from title to title. Action or sports scenes typically require a higher bitrate, as they contain a lot of motion and fewer redundancies, which makes each scene more complex and leaves fewer opportunities to compress data without impacting the perceived quality of the content. Documentaries, on the other hand, typically have far less motion in any given scene, which gives a codec more possibilities to compress the information effectively without losing perceptual quality. If you take those characteristics and adjust the encoding profile accordingly, you can lower the bitrate while still maintaining a very good perceived quality for your content.
In order to decide which bitrate fits best for each specific piece of content, you need a good quality metric to measure against. Gathering this information typically requires several test encodings, covering different types of content, each encoded with a variety of bitrate settings. Once that is complete, a PSNR (Peak Signal-to-Noise Ratio) analysis of each encoding needs to be performed to form an objective impression of the effectiveness of these encoding parameters.
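For reference, PSNR boils down to comparing the mean squared error between an encoded frame and its source against the maximum possible sample value. The following minimal JavaScript sketch (assuming two equally sized arrays of 8-bit luma samples) only illustrates the calculation and is not part of any Bitmovin API:

// Minimal PSNR sketch for two equally sized arrays of 8-bit samples,
// e.g. the luma planes of a source frame and its encoded counterpart.
function psnr(source, encoded) {
  let sumSquaredError = 0;
  for (let i = 0; i < source.length; i++) {
    const diff = source[i] - encoded[i];
    sumSquaredError += diff * diff;
  }
  const mse = sumSquaredError / source.length;
  if (mse === 0) {
    return Infinity; // the frames are identical
  }
  const maxValue = 255; // maximum value of an 8-bit sample
  return 10 * Math.log10((maxValue * maxValue) / mse);
}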

Quick Fact #1: If you compare an encoding with a PSNR of 45 dB or higher with its source video, you won’t notice any difference although less information is used to render this content. On the other hand, a PSNR of 35 dB or lower would definitely show noticeable differences between the encoding and its source file.

Based on the results of this analysis, you can derive a custom bitrate ladder to encode each content file accordingly. This approach works and will result in an improved quality of experience and in the vast majority of cases, a reduction in bandwidth usage as well. This is of course of paramount importance to most online content providers, VoD and OTT platforms in particular.
This is OK for a start, but as you apply this optimization to a large number of titles, you will begin to see limitations in the PSNR metric. An improved method for analyzing the visual quality of an image is the Structural SIMilarity (SSIM) index. SSIM measures the similarity between two images: one image is taken as the reference and considered “perfect quality”, and the second image is compared against it, which makes it a useful method for measuring the results of your optimization. SSIM is a perception-based model focused on changes in structure and luminance that impact the perceived quality. It provides a better impression of the quality of our content than PSNR, but is also a bit more compute-intensive. (Download the Per-Title Encoding Whitepaper)
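As a rough illustration of what SSIM looks at, the sketch below computes a single global SSIM value from the means, variances and covariance of two sample arrays. Real implementations apply this per local window and average the results; the function here is only meant to show the structure of the formula:

// Simplified, global SSIM over two equally sized arrays of 8-bit samples.
// Production implementations evaluate this per local window and average the results.
function ssim(x, y) {
  const n = x.length;
  const mean = (a) => a.reduce((sum, v) => sum + v, 0) / n;
  const mx = mean(x);
  const my = mean(y);

  let vx = 0, vy = 0, cov = 0;
  for (let i = 0; i < n; i++) {
    vx += (x[i] - mx) ** 2;
    vy += (y[i] - my) ** 2;
    cov += (x[i] - mx) * (y[i] - my);
  }
  vx /= n; vy /= n; cov /= n;

  // Stabilizing constants for 8-bit content (dynamic range L = 255).
  const c1 = (0.01 * 255) ** 2;
  const c2 = (0.03 * 255) ** 2;

  return ((2 * mx * my + c1) * (2 * cov + c2)) /
         ((mx * mx + my * my + c1) * (vx + vy + c2));
}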

How can you do it with Bitmovin?

Let’s quickly recap what we have learned until now. While a fixed bitrate ladder is not ideal for every type of content, creating an optimal bitrate ladder for each and every encoding is very time-consuming and expensive. So we need a way to efficiently adapt a given bitrate ladder to the complexity of the content.

[Figure: Bitmovin Per-Title Encoding workflow]
As shown in the figure above, the first step of a Bitmovin Per-Title encoding is to compute a “complexity factor” for a given input. With our API, this is done during the “complexity analysis” of your input file. H.264 encoders provide an option called CRF (Constant Rate Factor), which specifies a “quality level” that is achieved by varying the bitrate based on the amount of motion detected in the video content. The average bitrate of a CRF encoding therefore gives us an impression of the overall complexity of the video asset, and we derive the “complexity factor” from it.

Quick Fact #2: The complexity factor has a value range of 0.5 to 1.5. Content which has a complexity factor between 0.5 and 1 is considered to be less complex, while a complexity factor between 1 and 1.5 relates to content with a higher complexity.

Adjusted Per-Title Encoding Profile

With this “complexity factor” we can adjust the given bitrate ladder. Keep in mind that we do not just consider the “complexity factor” but also the resolution of the bitrate ladder entry we want to adjust.

Width   Height   Bitrate [kbps]   Complexity impact (low)   Complexity impact (high)
1920    auto     4300             1.5                       0.3
1280    auto     2500             1.3                       0.4
1280    auto     1900             1                         0.45
960     auto     1300             0.9                       0.9
640     auto     800              0.45                      1
480     auto     450              0.4                       1.3
320     auto     260              0.3                       1.5

For low complexity content, we can reduce the higher bitrate ladder entries to a larger degree without losing visual quality, which is also where most of the bitrate savings come from. The lower bitrate entries are adjusted as well, but not as significantly, to avoid degrading visual quality.
For high complexity content, it basically works the other way around. We do not adjust the high bitrate levels much, because a significant increase in bitrate would usually not gain much visual quality. The low bitrate levels, however, are increased to a larger degree, because adding bitrate there can significantly improve visual quality.
Why is that so? Modern codecs work more efficiently at larger resolutions because larger frames often contain bigger uniform areas, which can be compressed more effectively. Because of that, fewer bits per pixel are needed to achieve a similar quality at larger resolutions compared to smaller ones.
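The exact adjustment formula is not spelled out here, but one plausible way to combine the complexity factor with the per-entry impact weights from the table above is to scale each entry’s deviation from the neutral factor of 1 by its impact value. The JavaScript sketch below is purely illustrative (the actual Bitmovin implementation may differ) and, for moderate complexity factors, roughly reproduces the direction and magnitude of the adjustments shown in the results table further down:

// Illustrative only: adjust a fixed bitrate ladder with a per-asset
// complexity factor (roughly 0.5 to 1.5) and per-entry impact weights.
const ladder = [
  { width: 1920, bitrateKbps: 4300, impact: { low: 1.5,  high: 0.3 } },
  { width: 1280, bitrateKbps: 2500, impact: { low: 1.3,  high: 0.4 } },
  { width: 640,  bitrateKbps: 800,  impact: { low: 0.45, high: 1.0 } },
  { width: 320,  bitrateKbps: 260,  impact: { low: 0.3,  high: 1.5 } },
];

function adjustLadder(ladder, complexityFactor) {
  // Low-complexity assets (< 1) shrink mostly at the top of the ladder,
  // high-complexity assets (> 1) grow mostly at the bottom.
  const column = complexityFactor < 1 ? 'low' : 'high';
  return ladder.map((entry) => ({
    width: entry.width,
    bitrateKbps: Math.round(
      entry.bitrateKbps * (1 + (complexityFactor - 1) * entry.impact[column])
    ),
  }));
}

console.log(adjustLadder(ladder, 0.84)); // e.g. a low-complexity animation title
console.log(adjustLadder(ladder, 1.2));  // e.g. a high-complexity sports title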

ABR encoded content

The following results show how this approach works in practice. Less complex input files receive bigger adjustments of their upper bitrate ladder entries and smaller ones for the lower entries. Even where the bitrate was reduced by around 30%, the PSNR and SSIM stayed almost the same, so your customers would experience the same quality using less bandwidth, and your distribution costs would be reduced. This can be seen in the cartoon “Glass Half”: its PSNR value dropped from 51.51 dB to 47.78 dB, which is still very good quality, and your customers won’t notice because the differences are too small for the human eye to perceive (remember Quick Fact #1).

Columns: Content (name, type, frame rate, DAR) / Complexity, Width, Static Bitrate [bps], Bits per Pixel, Per-Title Bitrate [bps], Bits per Pixel, Bitrate Change, Static PSNR [dB], Per-Title PSNR [dB], PSNR Change, Static SSIM, Per-Title SSIM, SSIM Change
Sintel 0.837 1920 4207000 0.112 3187000 0.085 -24.25% 45.15 43.83 -2.91% 0.983 0.979 -0.42%
Animation 1280 2355000 0.141 1854000 0.111 -21.27% 38.98 38.63 -0.92% 0.966 0.963 -0.35%
24 FPS 1280 1790000 0.107 1497000 0.090 -16.37% 38.57 38.27 -0.77% 0.962 0.959 -0.33%
2.353 DAR 960 1196000 0.127 1019000 0.108 -14.80% 36.92 36.69 -0.62% 0.946 0.943 -0.32%
640 709000 0.170 657000 0.157 -7.33% 35.12 35.03 -0.25% 0.917 0.915 -0.14%
480 391000 0.166 365000 0.155 -6.65% 32.77 32.64 -0.40% 0.887 0.886 -0.14%
320 220000 0.211 209000 0.200 -5.00% 30.98 30.93 -0.18% 0.853 0.853 -0.09%
Droneflight 0.814 1920 4144000 0.067 2984000 0.048 -27.99% 41.56 39.95 -3.88% 0.971 0.960 -1.20%
Documentary 1280 2384000 0.086 1792000 0.065 -24.83% 37.74 37.25 -1.29% 0.949 0.942 -0.74%
30 FPS 1280 1795000 0.065 1451000 0.053 -19.16% 37.26 36.88 -1.02% 0.942 0.942 -0.01%
1.778 DAR 960 1211000 0.078 999000 0.064 -17.51% 36.07 35.81 -0.71% 0.926 0.936 1.08%
640 726000 0.105 663000 0.096 -8.68% 34.63 34.55 -0.22% 0.901 0.921 2.26%
480 397000 0.102 365000 0.094 -8.06% 33.52 33.46 -0.17% 0.878 0.899 2.43%
320 224000 0.130 210000 0.122 -6.25% 32.49 32.46 -0.11% 0.853 0.876 2.76%
ToS 1.046 1920 4232000 0.115 4289000 0.116 1.35% 42.71 42.78 0.16% 0.985 0.985 0.02%
Movie 1280 2409000 0.147 2453000 0.150 1.83% 38.23 38.26 0.09% 0.973 0.973 0.02%
24 FPS 1280 1822000 0.111 1859000 0.113 2.03% 37.69 37.72 0.08% 0.969 0.970 0.03%
2.400 DAR 960 1209000 0.131 1260000 0.137 4.22% 36.01 36.10 0.25% 0.958 0.958 0.08%
640 718000 0.175 751000 0.183 4.60% 33.64 33.72 0.23% 0.931 0.932 0.10%
480 396000 0.172 420000 0.182 6.06% 30.95 31.04 0.31% 0.898 0.900 0.17%
320 224000 0.219 239000 0.233 6.70% 28.46 28.54 0.28% 0.854 0.855 0.19%
Caminandes 1.091 1920 4156000 0.084 4272000 0.086 2.79% 40.98 41.08 0.26% 0.973 0.973 0.03%
Animation 1280 2339000 0.106 2425000 0.110 3.68% 37.37 37.43 0.18% 0.963 0.963 0.05%
24 FPS 1280 1772000 0.080 1841000 0.083 3.89% 36.81 36.90 0.23% 0.959 0.959 0.07%
1.778 DAR 960 1184000 0.095 1282000 0.103 8.28% 35.52 35.68 0.43% 0.948 0.949 0.14%
640 709000 0.128 774000 0.140 9.17% 33.82 33.96 0.42% 0.929 0.931 0.16%
480 391000 0.126 437000 0.140 11.76% 32.19 32.36 0.54% 0.910 0.912 0.23%
320 221000 0.160 251000 0.182 13.57% 30.35 30.53 0.59% 0.884 0.886 0.23%
Motocross 2.283 1920 4461000 0.086 6198000 0.120 38.94% 37.30 38.95 4.42% 0.957 0.969 1.18%
Action 1280 2527000 0.110 3800000 0.165 50.38% 35.22 36.58 3.85% 0.942 0.956 1.55%
25 FPS 1280 1921000 0.083 2882000 0.125 50.03% 34.22 35.67 4.24% 0.929 0.947 1.99%
1.778 DAR 960 1289000 0.099 1935000 0.149 50.12% 33.36 34.75 4.17% 0.918 0.938 2.20%
640 776000 0.135 1167000 0.203 50.39% 31.81 32.97 3.66% 0.892 0.913 2.34%
480 432000 0.133 650000 0.201 50.46% 30.03 31.07 3.47% 0.854 0.877 2.61%
320 248000 0.172 372000 0.258 50.00% 28.22 29.06 2.99% 0.810 0.828 2.29%
Glass Half 0.764 1920 3022000 0.061 2091000 0.042 -30.81% 51.51 47.78 -7.22% 0.997 0.995 -0.27%
Cartoon 1280 1942000 0.088 1377000 0.062 -29.09% 38.21 37.82 -1.02% 0.982 0.980 -0.25%
24 FPS 1280 1502000 0.068 1166000 0.053 -22.37% 37.93 37.59 -0.91% 0.981 0.978 -0.24%
1.778 DAR 960 996000 0.080 794000 0.064 -20.28% 35.53 35.28 -0.68% 0.967 0.964 -0.26%
640 602000 0.109 541000 0.098 -10.13% 33.10 33.02 -0.24% 0.941 0.940 -0.13%
480 333000 0.107 302000 0.097 -9.31% 31.22 31.15 -0.23% 0.912 0.911 -0.15%
320 190000 0.137 177000 0.128 -6.84% 29.44 29.39 -0.15% 0.876 0.875 -0.10%

Highly complex videos, on the other hand, also show the expected behavior: the upper bitrate ladder entries are adjusted less, while the lower bitrate ladder entries are increased accordingly in order to still achieve an improved level of quality, which would not be the case if a fixed bitrate ladder had been used.
All of these examples of per title encoding with playable comparisons are available to view on our demonstration page.

Can this workflow be optimized further?

Of course, there is always room for further improvements and optimizations, but there are tradeoffs between efficiency and cost. Given a defined set of resolutions, the best possible bitrate ladder can be found by encoding the input file with a different set of bitrates for each resolution and performing a PSNR analysis for each of those encodings. Those results show which bitrate (x-axis) provides the best possible quality (y-axis) for a specific resolution. This bitrate is at the apex of each blue line shown in the graph below. Based on that, you get a convex hull, which is then used to select the pairs of resolution and bitrate that fit best for your encoding and its bitrate ladder.

[Figure: Per-Scene Adaptation – convex hull and resolution/bitrate pairs. Source: Netflix]
Although this allows us to define an “optimal” bitrate ladder for a particular video, it requires several encodings (e.g. 5 test bitrates per rendition and 5 renditions = 25 encodings to determine the final bitrate ladder). It also isn’t guaranteed that those encodings are sufficient, because you can’t tell beforehand whether your range of bitrates captures the quality behavior of a resolution. This makes the approach more expensive and increases the time it takes to determine the bitrate ladder.
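To make the brute-force idea concrete, the toy sketch below (with made-up trial results) picks, for each candidate ladder bitrate, the resolution that delivered the best measured quality at or below that budget, which is an approximation of reading resolution/bitrate pairs off the convex hull:

// Toy illustration with made-up numbers: trial encodings per resolution,
// each measured against the source (bitrate in kbps, PSNR in dB).
const trials = [
  { width: 1920, points: [[2000, 38.1], [4300, 42.5], [6000, 43.1]] },
  { width: 1280, points: [[1000, 36.0], [2500, 39.8], [4000, 40.4]] },
  { width: 640,  points: [[400, 31.2],  [800, 34.9],  [1500, 35.6]] },
];

// For each candidate ladder bitrate, pick the resolution/bitrate pair that
// delivers the highest measured quality without exceeding the budget.
function pickLadder(trials, candidateBitrates) {
  return candidateBitrates.map((budget) => {
    let best = null;
    for (const { width, points } of trials) {
      for (const [kbps, psnr] of points) {
        if (kbps <= budget && (best === null || psnr > best.psnr)) {
          best = { width, kbps, psnr };
        }
      }
    }
    return best;
  });
}

console.log(pickLadder(trials, [800, 2500, 4300]));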
In the workflow described above, we use a single low-resolution CRF encoding to evaluate the complexity factor and adjust the bitrate of all entries in our bitrate ladder. Creating a CRF encoding for each resolution would allow us to adjust the bitrates more specifically and would lead to further quality improvements and/or bandwidth savings.
Another optimization would be to specifically analyze your set of input files to evaluate their characteristics and calculate an optimized complexity factor, which represents the complexity of your input files more precisely. These results can also be used with a machine learning approach.
Another simple optimization would be to take the specific details of the input file as a reference and adjust the bitrate ladder accordingly. This can already be done using Stream Conditions, so codec configurations can be skipped if they don’t meet certain conditions, e.g. that the input file resolution must be greater than or equal to the width/height configured in the codec configuration. In this example, the conditions avoid any possibility of upscaling, which goes hand in hand with losing quality.

Conclusion

Even though optimizing your content requires a little extra processing in the form of trial encodings, it is definitely worth it. The relatively small increase in encoding cost is easily outweighed by the bandwidth savings and the overall improvement in customer experience.

For more information, reach out to our solutions team for a demonstration or find out more about how Bitmovin can help you to solve complex problems in your video workflow.

To Play, or Not to Play – AutoPlay Policies for Safari 14 and Chrome 64
https://bitmovin.com/blog/autoplay-policies-safari-14-chrome-64/ (16 Sep 2020)


Updates to the autoplay policies in Safari and Chrome could have significant implications for both advertisers and content providers

As of September 2017, Safari 11 on macOS and iOS, as well as Chrome for Desktop and Mobile, introduced new auto-play policies. The main goal of these policies was to improve the browsing experience by eliminating distractions and surprising playback of unmuted content, thus providing users with more control over the autoplay capabilities of individual websites.
Since then, Apple has made significant improvements to its playback and autoplay capabilities. For any app or service to successfully run an autoplay element, the video must either come without an audio track or with a muted attribute. The video element will automatically pause when and if the video becomes unmuted without user interaction or if the video is no longer onscreen.
Additional options are available in the respective website preferences pane to allow or disable autoplay, disable audio in general, and more. In addition, both browsers introduced an automated approach that decides whether auto-play is blocked for media elements with sound, or whether auto-play is disabled entirely.

Safari

Safari 11.0  Behavior

Safari 11.0 originally shipped with iOS 11.0 and macOS 10.11, and Safari has since reached version 14.1 (Safari 14 was released on September 16th, 2020) while continuing to implement the same autoplay policies. Initially, only the original changelog in the App Store documented this behavior.
However, Apple has since updated its developer content to include autoplay information on its best practices page.
Safari 11 (and currently Safari 14) uses a so-called “automatic inference engine”, which decides whether media elements with sound are allowed to auto-play on a visited website by default; according to the Safari team, this won’t be allowed for most websites. In addition, a power-saving feature prevents the playback of muted videos if they are off-screen or hidden in a background tab.

Developer recommendations from Safari:

  • This policy applies to every way video can be used on a website (e.g. background videos, video-as-animated-gif, …), so you should check how it impacts your website accordingly.
  • Assume that a user gesture is always required to start the playback of <video> or <audio> elements, as users can now disable auto-play for any type of content.
  • Auto-play restrictions are applied on a per-element basis. So instead of using multiple media elements to play multiple videos consecutively, use one media element and change its source (e.g. preroll ads followed by the actual content, playlists, and so on).
  • Don’t play ads without showing media controls, as they might not be able to play automatically due to the new policy, and your users won’t be able to start playback on their own.
  • Audio tracks containing “silence” are not recognized as muted. Therefore, an audio track has to be muted or not set at all.

It’s also possible to enable inline video playback in Safari by including the playsinline attribute on the <video> element. Without it, the video will play in full-screen mode by default on iPhone.
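As a quick sketch (the source URL is a placeholder), a video element that is meant to autoplay inline in Safari would be configured with both the muted and playsinline flags:

// Sketch: configure a video element so it is eligible for inline autoplay.
const video = document.createElement('video');
video.src = 'https://example.com/path/to/content.mp4'; // placeholder URL
video.muted = true;        // no audible autoplay without a user gesture
video.playsInline = true;  // reflects the playsinline attribute and avoids forced full-screen playback
video.autoplay = true;
document.body.appendChild(video);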

Google Chrome 64

New Behavior

Google Chrome’s (Chromium) latest auto-play policy for Mobile and Desktop was released in its stable version with Chrome 64 in January 2018. One of its main goals is to unify the autoplay behavior across platforms and to allow the user to control which websites and which content can play automatically, so they are not surprised by unexpected media playback or by increased data and power usage on their device. It also makes a developer’s life easier, as the auto-play behavior is the same for Desktop and Mobile (see table below).
While muted autoplay is always allowed, unmuted autoplay requires any of the following conditions to be fulfilled:

  • User interaction with the website is required
    • clicking anywhere on the document, navigation, …
    • scrolling is excluded as a valid user interaction in this context
  • MEI (Media Engagement Index) threshold has to be crossed (Desktop only)
  • User has added a PWA (Progressive Web App) to their home screen (Mobile only)

Auto-play in iframes requires the embedding page to delegate auto-play approval by adding the new allow="autoplay" attribute to the iframe. Otherwise, unmuted autoplay will be denied.

<iframe src="myvideo.html" allow="autoplay">

A first step towards the unified approach was released with Chrome 63 in October 2017, where users can disable audio playback completely for individual websites. The table below shows how the unified autoplay approach for Desktop and Mobile will look in its final stage:
[Table: unified autoplay behavior for Chrome Desktop and Mobile. Source: Google Slides – Chrome Autoplay]
Another step in this unification process consists of two changes, which should make muted autoplay more reliable:

  • Removing the block autoplay setting that is currently available on Chrome for Android
  • Removing autoplay blocking on mobile when data saver mode is enabled

These two changes should encourage sites and advertisers to use muted videos instead of animated gifs, which will reduce the overall bandwidth consumption on both sides.

How do they evaluate the Media Engagement Index?

While for Safari only the name of its “automatic inference engine” is known, Google Chrome’s “Media Engagement Index” (MEI) comes with a bit more information about how it influences the new auto-play restrictions. Beginning with Chrome 62 Canary and Dev in September 2017, Google started collecting data for the MEI, which is used to evaluate a user’s interest in the media available on a visited website. The conditions that influence this metric are not finalized yet, but Google has already presented its initial approach:

  • Consumption of the video must be greater than 7 seconds
  • Audio must be present and unmuted
  • Tab with video is active
  • Size of the largest dimension of the video must be greater than 256px

The MEI score will therefore be highest on sites that mainly provide video content, enabling unmuted autoplay there (Desktop only), while websites like news sites or blogs will have a lower score, as they are more focused on textual content, and are therefore less likely to be able to autoplay their videos. The number of times a user visits a certain website also impacts this metric. Google provided some example scenarios on slide 8 of their Autoplay Policy presentation, which explain how the MEI impacts the autoplay functionality in Chrome for Desktop.

Developer recommendations by Google

  • Use auto-play sparingly. Autoplay can be a powerful engagement tool, but it can also annoy users if undesired sound is played or they perceive unnecessary resource usage (e.g. data, battery) as the result of unwanted video playback.
  • If you do want to use autoplay, consider starting with muted content and let the user unmute if they are interested in exploring more. This technique is being used effectively by numerous sites and social networks (see the sketch after this list).
  • Unless there is a specific reason to do otherwise, we recommend using the browser’s native controls for video and audio playback. This will ensure that autoplay policies are properly handled.
  • Prompt users to add your mobile site to the home screen on Android devices. This will automatically give your application unmuted autoplay privileges.
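A minimal sketch of the “start muted, let the user opt into sound” recommendation above; the element selectors are placeholders:

// Sketch: autoplay muted, then unmute only after an explicit user gesture.
const video = document.querySelector('video');
const unmuteButton = document.querySelector('#unmute'); // placeholder selector

video.muted = true;
video.play().catch(() => {
  // Even muted autoplay can fail (e.g. due to user settings); fall back to controls.
  video.controls = true;
});

unmuteButton.addEventListener('click', () => {
  video.muted = false; // allowed because it happens inside a user gesture
});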

How to check if auto-play is available?

Both browser vendors recommend the same best practice to detect whether auto-play is available: check whether the promise returned by the play() function of an HTMLMediaElement resolves or rejects.

var promise = document.querySelector('video').play();
if (promise !== undefined) {
  promise.then(_ => {
    // Autoplay started!
  }).catch(error => {
    // Autoplay was prevented.
    // Show a "Play" button so that user can start playback.
  });
}

The Bitmovin Player team is constantly monitoring the landscape to ensure that we have solutions ready for changes just like this. Sign up for a free test account and get your Bitmovin Player up and running in just a few minutes.

To Play, or Not to Play #2 – Firefox blocks audible autoplay by default!
https://bitmovin.com/blog/firefox-blocks-audible-autoplay/ (20 Feb 2019)

Restrictive autoplay policies in browsers were definitely one of the most disruptive changes that took place in 2018. While they were rolled out with the right intentions, helping users enjoy their journey through the web without being “annoyed” by loud, auto-playing audio/video content, they certainly sent website owners and video developers scrambling to make last-minute changes to adapt to the autoplay rules and continue providing a smooth playback experience for their users.

A little “Browser History”

Let’s have a quick look at what happened last year, here’s our original post for more details:

  • Google started to roll out their new autoplay policy in April 2018 with Chrome 66, along with its MEI (Media Engagement Index). MEI collected anonymous user data to derive a score for a website to determine if audible autoplay is allowed or not. To do that, it takes into account if a user is visiting this site frequently, how long a user consumes its content, and some other website traffic factors.
  • Apple had introduced autoplay policies in Safari 11.0 back in September 2017, along with its “automatic inference engine”, which blocked autoplay by default for most websites. That is all that is known about the behaviour of this engine. However, a user could whitelist websites to allow autoplay and define a general rule for all websites in the “Preferences” section.
  • MS Edge (EdgeHTML 18) followed shortly after and implemented its own autoplay control behavior. Its policy allowed users to decide for themselves whether they wanted to allow autoplay by default, globally or per-site.
  • Firefox 59 had already implemented an autoplay policy, which could be enabled via a flag, but it wasn’t obvious to every user. It only allowed users to disable or enable autoplay globally, not per-site.

Quick Tip: In case you are wondering what data is tracked by Google’s MEI, you can view this data by opening this URL in your Chrome browser: chrome://media-engagement/

Fast Forward to Today

Let’s take a look at the autoplay horizon now.
Mozilla Firefox / Firefox for Android
On February 4th, Mozilla announced its updated autoplay policy for all users as part of the Firefox 66 beta release; the stable version is expected on March 19th. It will block any unmuted autoplay content by default on all websites where no prior user interaction was detected. The autoplay policy will also become part of Firefox for Android, replacing the existing one there. As in other browsers, users will have the ability to whitelist certain pages.
“When Firefox for Desktop blocks autoplay audio or video, an icon appears in the URL bar. Users can click on the icon to access the site information panel, where they can change the “Autoplay sound” permission for that site from the default setting of “Block” to “Allow”. Firefox will then allow that site to autoplay audibly. This allows users to easily curate their own whitelist of sites that they trust to autoplay audibly.”
 
[Screenshot: Firefox site information panel with the “Autoplay sound” permission. Source: hacks.mozilla.com]
Google Chrome / Chrome for Android
While Chrome’s autoplay policy implementation for video content didn’t change much, there is still a lot going on when it comes to handling autoplay capabilities for the Web Audio API. The latest details about that, and about how the MEI works, are explained here.
There’s one more thing – Progressive Web Applications! Those are available on all desktop platforms now and received an exemption. PWAs are allowed to autoplay with sound by default for pages within the scope of the web app manifest.
Safari
At this point there aren’t any other known changes to the autoplay behavior in Safari, nor in the current technology previews of Webkit itself.

What does it mean for web developers?

The good news is that the autoplay policies behave very similarly across browsers now and have two big commonalities that you can rely on as a developer:

  1. Muted autoplay is always allowed
    By adding the `muted` attribute to the video element you will always be able to autoplay your video content.
  2. The play() API call of the HTMLMediaElement returns a Promise which either resolves if autoplay is successful, or rejects if autoplay isn’t possible, along with an error message explaining why. This is a best-practice approach recommended by every browser vendor so far.
var promise = document.querySelector('video').play();
if (promise !== undefined) {
    promise.then(_ => {
        // Autoplay started!
    }).catch(error => {
        // Autoplay was prevented.
        // Show a "Play" button so that user can start playback.
    });
}

How does it work with the Bitmovin Player?

This is how the Bitmovin Player does it too. Its play() API call also returns a promise which either rejects or resolves and therefore can be easily used to handle these situations.

player.load(sourceConfig).then((value) => {
    player.play().then((success) => {
        console.log("play() successful!", success);
    }, (error) => {
        console.log("play() failed!", error);
    });
},
() => {
    console.log('Error while loading new content');
});

If you are using the player configuration `autoplay`, you can leverage the `warning` event of the player which is fired in this case as well:

const playerConfig = {
    key: 'YOUR_PLAYER_KEY_HERE',
    playback: {
        autoplay: true,
    },
    events: {
        warning: (warning) => {
            if (warning.code === 1303) {
                console.log(warning.message);
            }
        },
    },
};

At Bitmovin, our main goal is not only to arm developers with the best tools and video products, but also to provide timely information and solutions to disruptive changes in the streaming landscape. By using our player, you benefit from the proactive steps we take to keep providing a seamless and smooth experience for your users, regardless of what happens in the background!
Hope this helps you prepare for the upcoming changes in Firefox 66!
Try it out yourself with our 30 day Free Trial!
RFC compliant HLS content and how to create it
https://bitmovin.com/blog/rfc-compliant-hls-content-create/ (12 May 2017)


This guide will show you how to generate RFC compliant HLS to ensure you are playing smoothly on every Apple operating system and device

Alongside MPEG-DASH, HLS is one of the most popular streaming formats out there, and because it was initially created by Apple for their own environment, it is natively supported on every Apple device and OS they have produced so far. Nevertheless, despite the fact that all the components of their streaming solution come from the same company, there are a number of minimum requirements and limitations that you need to be aware of if you want to provide broad support across all Apple-powered platforms such as iOS, tvOS or macOS.

Minimum requirements

If you want or need to support older Apple devices, you should also keep their specific capabilities in mind, so your customers are still able to consume your content on their devices. Apple’s authoring requirements are split into general requirements, which apply to all platforms, although some of them are overruled by specific requirements for tvOS, iOS or macOS. The whole list consists of more than 100 “required” or “recommended” points and covers every aspect of the video workflow, from creating HLS content to its delivery and security.
For the sake of simplicity we will focus on the requirements that you are most likely to need to be aware of in order to provide proper HLS content to your customers.

General requirements

Video encoding

  • Required video codec: H264/AVC
  • Profile and Level must not exceed High Profile, Level 4.2
  • Keyframe interval should be 2 seconds
  • Each video segment must start with an I-Frame
  • De-interlaced content only
  • VoD: The measured average segment bitrate must be within 10% of the AVERAGE-BANDWIDTH attribute (required in the master playlist)
  • VoD: The measured peak bitrate must be within 10% of the BANDWIDTH attribute
  • All video renditions should have the same aspect ratios
  • A segment duration of 6 seconds should be used

Audio encoding

  • You must provide Stereo audio
  • AAC should be used for audio stream with a bitrate > 64 kbps

Storage/Delivery

  • Provide correct content types:
    • Playlists: application/x-mpegURL or vnd.apple.mpegURL
    • Media Segments: video/MP2T
  • Playlists must be delivered using gzip content-encoding

A hint for iOS Apps

  • Ignoring one of the following limits could result in your app being rejected for distribution through the App Store.
  • If you are using videos in your app that exceed either 10 minutes of duration or 5 MB of data per 5-minute period, you must use HTTP Live Streaming.
  • If this video content is delivered over cellular networks, you are required to provide at least one stream with a bandwidth of 64 kbps or lower (this can be an audio-only stream or an audio stream with a still image).

Is my content compliant?

To make your life easier, you can check whether or not your HLS content complies with Apple’s requirements and recommendations with their validation tool: “mediastreamvalidator”. It can simply parse your HLS playlist and check if it is compliant with the HTTP Live Streaming specification, or it can also download all the segments of your content, and determine their duration, average and maximum bitrate, and various other properties. The latter is the default setting.
You can download this tool here in the developer area of Apple. In order to use it, an Apple developer account is required, as well as an Apple device, which is running Apple’s own OS (OS X, macOS).

How does the validation work?

mediastreamvalidator <hlsContentURL> will load the playlist and its variant playlists, if present, and validate them against the HTTP Live Streaming specification. The validator then checks whether all segment URIs are actually reachable and starts to download them. During that phase, it also checks the content type provided for the content and throws an error message if it is missing or invalid.
This enables the mediastreamvalidator to measure the peak and average bitrate of a variant stream and to compare the results with the values of the BANDWIDTH and AVERAGE-BANDWIDTH attributes, which have to be stated in the master playlist. For Video on Demand content, those values are allowed to be within 10% of the value stated in the master playlist. Otherwise, a “Must Fix” error message is thrown (see picture below).
[Screenshot: mediastreamvalidator “Must Fix” issues]
If you just want to check if your playlists are valid, mediastreamvalidator --parse-playlist-only <hlsContentURL> is the right command for you. It will load the playlist files only and validate them as described in the beginning.
If everything is fine, you will just see the validation results for each variant stream of your content.
[Screenshot: mediastreamvalidator validation results for each variant stream]
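For context, the BANDWIDTH and AVERAGE-BANDWIDTH attributes that the validator checks are declared per variant stream in the master playlist. The snippet below is purely illustrative (bitrates, codec strings and file names are examples, not the output of any specific tool):

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=5000000,AVERAGE-BANDWIDTH=4500000,RESOLUTION=1920x1080,CODECS="avc1.640028,mp4a.40.2"
video_1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=3300000,AVERAGE-BANDWIDTH=3000000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
video_720p.m3u8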

Recommended settings

The following renditions are recommended by Apple. As mentioned before, each rendition should have the same aspect ratio, otherwise the customer would see changes in the picture when the player adapts the quality of the stream, which lowers the user’s Quality of Experience (QoE). It is also important to set up your encoding in such a way that the resulting bitrate of your content actually matches the average and peak bitrate stated in your master playlist.

Video average bit rate (kb/s) Resolution Frame rate
145 416 x 234 ≤ 30 fps
365 480 x 270 ≤ 30 fps
730 640 x 360 ≤ 30 fps
1100 768 x 432 ≤ 30 fps
2000 960 x 540 same as source
3000 1280 x 720 same as source
4500 same as source same as source
6000 same as source same as source
7800 same as source same as source

Encoding example

So, let’s create HLS content that will pass the mediastreamvalidator with flying colours. The following example uses our Bitmovin PHP API Client, which is available on GitHub, as is the example itself.

1. Get the Bitmovin PHP API Client

You can either download it from Github or install it using composer. Please see the API client’s repository for more information about the setup.

2. Initialize the Bitmovin API Client

In order to use the API client, we have to initialize it first.

$client = new \Bitmovin\BitmovinClient('INSERT YOUR API KEY HERE');

That’s it ^^ The client is now ready to use. Now we can start preparing the configurations for your input source, output destination, and the encoding itself, containing all the renditions we want to create for the compliant HLS content.

3. Create an input configuration

For the sake of simplicity we are using an HTTP(S) input, although many other input sources such as AWS S3, Google Cloud Storage, Microsoft Azure, Aspera, and (S)FTP are also supported.

$inputURL = 'http://example.com/path/to/your/movie.mp4';
$httpInput = new HttpInput($inputURL); // referenced as $httpInput by the stream configurations below

4. Create an output configuration

Here you can either transfer your encoding results directly to your preferred storage (AWS S3, Google Cloud Storage, Microsoft Azure, (S)FTP), or store the encoding on your own Bitmovin storage. Direct transfer as well as storage are features of our new Bitmovin API. The first allows you to keep the turnaround time of your encoding very low, so your encoded content becomes available as quickly as possible on your own storage. The latter enables you to keep a backup of your encoding, transfer it later on to another storage, and so on. If you want to do both, you can now do that as well :). In this example we will transfer the results to a Google Cloud Storage bucket, as the code below shows.

//CREATE AN OUTPUT
$gcsAccessKey = 'YOUR-ACCESS-KEY';
$gcsSecretKey = 'YOUR-SECRET-KEY';
$gcsBucketName = 'YOUR-BUCKETNAME';
$gcsPrefix = 'PATH/TO/YOUR/OUTPUT-DESTINATION/';
$gcsOutput = new GcsOutput($gcsAccessKey, $gcsSecretKey, $gcsBucketName, $gcsPrefix);

5. Create an encoding profile configuration

An encoding profile configuration contains all the encoding related configurations for video/audio renditions as well as the encoding environment itself.

Create encoding profile configuration:

$encodingProfile = new EncodingProfileConfig();
$encodingProfile->name = 'HLS compliant content #1';
$encodingProfile->cloudRegion = CloudRegion::GOOGLE_EUROPE_WEST_1;
$rate = 30;//framerate of your input file
$keyFrameInt = 2; //key frame interval in seconds
$segmentLength = 6; //segment duration in seconds

Create video quality configuration

I have created a little helper function for creating the recommended video stream configurations provided by Apple. It basically just creates an H264VideoStreamConfig and sets its properties. In order to achieve the recommended key frame interval of two seconds, we just have to provide the maxGop property. Its value describes the maximum size, in frames, of a Group of Pictures. If the maximum size is reached, our encoding service will create a new GOP, which starts with a key frame. Therefore, we need to know the frame rate of the input file and the interval we want to achieve; multiplying those values gives the GOP size our encoding service has to use, e.g. 30 fps × 2 seconds = a maximum GOP of 60 frames (see below).

function createH264VideoStreamConfig($input, $profile, $bitrate, $width, $height = null, $rate = null, $keyFrameInterval = 2)
{
   $videoStreamConfig = new H264VideoStreamConfig();
   $videoStreamConfig->input = $input;
   $videoStreamConfig->width = $width;
   $videoStreamConfig->height = $height;
   $videoStreamConfig->bitrate = $bitrate;
   $videoStreamConfig->rate = $rate;
   $videoStreamConfig->profile = $profile;
   if (!is_null($rate))
   {
       $videoStreamConfig->maxGop = $rate * $keyFrameInterval;
   }
   return $videoStreamConfig;
}

With this little helper it is easy to setup the recommended renditions and their required properties:

$encodingProfile->videoStreamConfigs[] = createH264VideoStreamConfig($httpInput, H264Profile::HIGH, 4800000, 1920, null, $rate, $keyFrameInt);
$encodingProfile->videoStreamConfigs[] = createH264VideoStreamConfig($httpInput, H264Profile::HIGH, 3000000, 1280, null, $rate, $keyFrameInt);
$encodingProfile->videoStreamConfigs[] = createH264VideoStreamConfig($httpInput, H264Profile::MAIN, 2000000, 960, null, $rate, $keyFrameInt);
$encodingProfile->videoStreamConfigs[] = createH264VideoStreamConfig($httpInput, H264Profile::MAIN, 1100000, 768, null, $rate, $keyFrameInt);
$encodingProfile->videoStreamConfigs[] = createH264VideoStreamConfig($httpInput, H264Profile::MAIN, 730000, 640, null, $rate, $keyFrameInt);
$encodingProfile->videoStreamConfigs[] = createH264VideoStreamConfig($httpInput, H264Profile::BASELINE, 365000, 480, null, $rate, $keyFrameInt);
$encodingProfile->videoStreamConfigs[] = createH264VideoStreamConfig($httpInput, H264Profile::BASELINE, 145000, 416, null, $rate, $keyFrameInt);

Create audio quality configuration

$asc160 = new AudioStreamConfig();
$asc160->input = $httpInput;
$asc160->bitrate = 160000;
$asc160->rate = 48000;
$asc160->name = 'English';
$asc160->lang = 'en';
$asc160->position = 1;
$encodingProfile->audioStreamConfigs[] = $asc160;

An AudioStreamConfig results in an audio stream, which is using the AAC codec. As our target bitrate is greater than 64kbps, we fulfill this recommendation as well.

Create Encoding configuration

This configuration object acts as a container for all the previous configurations from above and will be passed to the BitmovinClient in order to start the encoding.

// CREATE JOB CONFIG
$jobConfig = new JobConfig();
// ASSIGN OUTPUT
$jobConfig->output = $gcsOutput;
// ASSIGN ENCODING PROFILES TO JOB
$jobConfig->encodingProfile = $encodingProfile;
// ENABLE HLS OUTPUT
$hlsOutput = new HlsOutputFormat();
$hlsOutput->segmentLength = 6;
$jobConfig->outputFormat[] = $hlsOutput;

Let’s check if we met all the requirements/recommendations:

  • Required video codec: H264/AVC
    We are using the H264VideoStreamConfig, whose name already indicate the usage of the H264/AVC codec.
  • Profile and Level must not exceed High Profile, Level 4.2
    We are using the HIGH profile for the HD and FullHD renditions only and their bitrate, framerate and resolution don’t exceed the requirements for level 4.2
  • Keyframe interval should be 2 seconds
    We provided maxGop which causes the encoder provide a key frame every two seconds
  • Each video segment must start with an I-Frame
    Our encoding service does that by default, as this is crucial for seamless transitions between renditions during playback.
  • De-interlaced content only
    Our test content isn’t interlaced, therefore there is no need for that; however, de-interlacing is supported by our encoding service as well.
  • All video renditions should have the same aspect ratios
    Yes, we only provide a value for the width; our encoding service calculates the respective height based on the aspect ratio of the input file.
  • A segment duration of 6 seconds should be used
    Yes, we provide a segment length of 6 seconds in our encoding configuration. However, this is just a recommendation, so using the default value of 4 seconds would be fine as well.

Start the encoding

Finally, we can start the encoding. runJobAndWaitForCompletion() will return as soon as the encoding is finished and transferred/stored successfully.

$client->runJobAndWaitForCompletion($jobConfig);

As you can see, creating compliant HLS content is not too hard as long as you provide the correct encoding settings. In this case, our API Client also takes care of the creation of the HLS master and variant playlists. However, you also have the possibility to create the HLS playlists exactly as you need them with our API. How this can be done will be part of a different blog post 😉
Not long ago, Apple announced support for fragmented MP4 content in HLS, which eliminates the need to encode your content separately for devices that support only HLS. This being said, you could reduce your storage costs by more than 50%, and your CDN costs as well, as you could use one single content type for all available platforms. Read more about it here in our blog post about fMP4 and how you can create it with our Bitmovin API.

Halve your Encoding, Packaging and Storage Costs – HLS with fragmented MP4
https://bitmovin.com/blog/halve-encoding-packaging-storage-costs-hls-fragmented-mp4/ (13 Dec 2016)


By using a single package format you can cut your encoding, packaging and storage costs in half and decrease your CDN costs by up to 10%, as fMP4 has less overhead than MPEG-TS

At this year’s Worldwide Developer Conference (WWDC 2016), Apple introduced fragmented MP4 (fMP4) for HLS. Although this announcement was not such a big deal for Apple, the impact on the rest of the media industry is huge. In this blog post I will try to explain why, and also address some of the frequently asked questions around the topic of fMP4 for HLS.

Why is this such a big change? After all, it’s just a container format. It’s still HLS.

Very true, but this new container format halves encoding/packaging and storage costs. In the past you were required to multiplex each rendition/bitrate/resolution into two containers, MPEG-2 Transport Stream (TS) for HLS and fMP4 for DASH, or maintain “just in time” packagers that do not scale well and cost you money on every request. Now that HLS supports fMP4, it can share the same encoded segments with the DASH manifest. On top of that, there is now also the potential for a major reduction in CDN costs for some businesses: TS is less efficient than fMP4, with up to 10% more overhead. This means that CDN costs for HLS content can be reduced by up to 10% in certain cases.

Why do we need two formats, HLS and DASH, isn’t one enough?

Technically speaking it should be, and everybody would prefer that. Only having one format would make all of our lives much easier. Unfortunately, due to the proprietary nature of the Apple infrastructure, HLS is required on Safari, iOS and tvOS while on the other hand you need DASH to get native HTML5 playback on all other browsers.

Isn’t it possible to playback HLS in HTML5 on all browsers?

It is, and we can do this too, but for high resolutions and bitrates (just think about 360° videos in 4K and higher resolutions) it’s obviously not as effective, because you need to remultiplex every chunk in JavaScript, and this costs performance, battery and latency, which makes such videos unplayable on devices that are not state of the art.

Which devices support HLS with fMP4?

It’s supported on iOS 10, macOS and tvOS. Considering that Apple users adopt new versions of iOS quite quickly (trends show that 80% of all Apple iOS users are already using iOS 10: https://mixpanel.com/trends/#report/ios_10), it’s already a large user base that could potentially benefit from HLS with fMP4. The same applies to macOS, where HLS with fMP4 is available in Safari 10, which has by far the largest market share compared with other Safari desktop versions (https://www.stetic.com/market-share/browser/).

What does this mean for SVOD, DRM use cases?

For SVOD and DRM use cases the situation is a little bit more complicated. HLS with fMP4 as well as DASH support MPEG Common Encryption (MPEG-CENC). MPEG-CENC supports two major encryption modes, AES-CTR (Counter Mode) and AES-CBC (Cipher Block Chaining), which are incompatible. FairPlay with fMP4 HLS uses AES-CBC, while PlayReady and Widevine with fMP4 DASH use AES-CTR. This makes it impossible to use a single content encoding for all DRM systems, at least for the moment. I think that this will change in the future: Widevine has already added AES-CBC support on Chromecast and Android N devices (https://www.widevine.com/product_news.html). If Widevine continues to broaden its AES-CBC support and PlayReady follows, it will be possible to use a single encrypted encoding for all platforms.

[Diagram: compatibility of HLS and DASH across the streaming, container format, encryption mode and codec layers]
This simplified diagram describes the current situation:

  • Streaming – DASH and HLS manifests are compatible since the beginning because HLS could be seen as a subset of the DASH standard.
  • Container Format – This was a problem in the past as HLS just supported MPEG-TS segments and HLS is required on iOS. DASH on the other hand is container format agnostic and due to the fact that all recent browsers only support fMP4 natively, DASH was used with fMP4 mainly and therefore everybody was required to generate and store both formats. With the recent changes it’s possible to use HLS and DASH with the same segments which reduces your encoding/packaging efforts and storage footprint by half.
  • Encryption Mode – This is only needed for SVOD/DRM use cases, but here we still have an incompatibility that needs to be resolved. As FairPlay uses AES-CBC while Widevine and PlayReady mainly use AES-CTR, this is a problem and you are still required to generate two versions of your segments – one for FairPlay and another one for Widevine and PlayReady. Nevertheless, as described above, Widevine has recently announced that it also supports AES-CBC on Chromecast and Android N devices (https://www.widevine.com/product_news.html). If Widevine and PlayReady continue to broaden that support, we will have a single format.
  • Codec – On the codec layer, both, H264 as well as H265 can be multiplexed in fMP4 and if the browser or device supports it the player can playback the content.

So I think we all agree now that HLS with fMP4 is something that is pretty useful. Content encoding, packaging and playback solutions are still very rare on the market for this workflow but Bitmovin already offers a complete end-to-end workflow through our Bitmovin API that allows you to encode and playback HLS with fMP4.
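To illustrate what changes at the playlist level, a media playlist that references fMP4 segments declares the initialization segment with an EXT-X-MAP tag and requires playlist version 6 or higher. Segment names and durations below are placeholders:

#EXTM3U
#EXT-X-VERSION:7
#EXT-X-TARGETDURATION:4
#EXT-X-MAP:URI="init.mp4"
#EXTINF:4.0,
segment_1.m4s
#EXTINF:4.0,
segment_2.m4s
#EXT-X-ENDLIST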

HLS fMP4 from Encoding to Playback

Let’s start with the actual encoding of your content. Our new API can be utilized easily, with the PHP, Python, and Go API clients, which are already available. We will use the PHP API Client and its example to show you how to create fMP4 HLS content.
1. Get the Bitmovin PHP API Client
You can either download it from Github or install it using composer. Please see the API client’s repository for more information about the setup.
2. Initialize the Bitmovin API Client
In order to use the API client, we have to initialize it first.

$client = new \Bitmovin\BitmovinClient('INSERT YOUR API KEY HERE');

That’s it ^^ The client is now ready to use. Now we can start preparing the configurations for your input source, output destination, and the encoding itself, containing all the renditions you want to create for the fMP4 HLS content.
3. Create an input configuration
For the sake of simplicity we are using an HTTP(S) input, although many other input sources such as AWS S3, Google Cloud Storage, Microsoft Azure, Aspera, and (S)FTP are also supported.

$videoUrl = 'http://example.com/path/to/your/movie.mp4';
$input = new HttpInput($videoUrl);

4. Create an output configuration
Here you can either transfer your encoding results directly to your preferred storage (AWS S3, Google Cloud Storage, Microsoft Azure, (S)FTP), or store the encoding on your own Bitmovin storage. Direct transfer as well as storage are features of our new Bitmovin API. The first allows you to keep the turnaround time of your encoding very low, so your encoded content becomes available as quickly as possible on your own storage. The latter enables you to keep a backup of your encoding, transfer it later on to another storage, and so on. If you want to do both, you can now do that as well :). As AWS S3 is a very common cloud storage, we will use it for this example.

$s3AccessKey = 'INSERT YOUR S3 ACCESS KEY HERE';
$s3SecretKey = 'INSERT YOUR S3 SECRET KEY HERE';
$s3BucketName = 'INSERT YOUR S3 BUCKET NAME HERE';
$s3Prefix = 'path/to/your/output/destination/';
$s3Output = new S3Output($s3AccessKey, $s3SecretKey, $s3BucketName, $s3Prefix);

5. Create an encoding profile configuration
An encoding profile configuration contains all the encoding-related configurations for video/audio renditions as well as the encoding environment itself. It's now possible to select which region and cloud provider should be used to encode your content. This enables you to locate the encoding infrastructure where your input and/or output bucket is located, which can improve your download and upload speeds and keeps your costs for egress traffic low.
Create an encoding profile configuration

$encodingProfileConfig = new EncodingProfileConfig();
$encodingProfileConfig->name = 'Test Encoding FMP4';
$encodingProfileConfig->cloudRegion = CloudRegion::AWS_EU_WEST_1;

Create a video quality configuration

$videoConfig = new H264VideoStreamConfig();
$videoConfig->input = $input;
$videoConfig->width = 1920;
$videoConfig->height = 1080;
$videoConfig->bitrate = 4800000;
$encodingProfileConfig->videoStreamConfigs[] = $videoConfig;

Create an audio quality configuration

$audioConfig = new AudioStreamConfig();
$audioConfig->input = $input;
$audioConfig->position = 1;
$audioConfig->bitrate = 128000;
$audioConfig->name = 'English';
$audioConfig->lang = 'en';
$encodingProfileConfig->audioStreamConfigs[] = $audioConfig;

You might have noticed that you need to provide an input for each audio/video stream configuration. This is another feature of our new API: you can provide several input files for one encoding, create all the renditions you need, and use them in your manifest afterwards. A typical use case is having separate files for your video and audio tracks, as sketched below.
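As a rough sketch of that use case, assuming your video and audio tracks are available as two separate files at hypothetical URLs, the stream configurations could be wired up like this:

$videoOnlyInput = new HttpInput('http://example.com/path/to/your/video-only.mp4');
$audioOnlyInput = new HttpInput('http://example.com/path/to/your/audio-only.mp4');

// The video rendition reads from the video-only file ...
$videoConfig = new H264VideoStreamConfig();
$videoConfig->input = $videoOnlyInput;
$videoConfig->width = 1920;
$videoConfig->height = 1080;
$videoConfig->bitrate = 4800000;
$encodingProfileConfig->videoStreamConfigs[] = $videoConfig;

// ... while the audio rendition reads from the audio-only file.
// position presumably refers to the stream index within that input file,
// so adjust it to your file's actual stream layout.
$audioConfig = new AudioStreamConfig();
$audioConfig->input = $audioOnlyInput;
$audioConfig->position = 0;
$audioConfig->bitrate = 128000;
$audioConfig->name = 'English';
$audioConfig->lang = 'en';
$encodingProfileConfig->audioStreamConfigs[] = $audioConfig;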
6. Select Output Formats
Besides HLS and DASH, we now also support Smooth Streaming, MP4, TS and, of course, HLS fMP4. We will only use HLS fMP4 in this example, but you could also create all of them at once with a single encoding.

$outputFormats = array();
$outputFormats[] = new HlsFmp4OutputFormat();

7. Create Encoding configuration
This configuration object acts as a container for all the previous configurations from above and will be passed to the BitmovinClient in order to start the encoding.

$jobConfig = new JobConfig();
$jobConfig->output = $s3Output;
$jobConfig->encodingProfile = $encodingProfileConfig;
$jobConfig->outputFormat = $outputFormats;

8. Start the encoding
Finally, we can start the encoding. runJobAndWaitForCompletion() will return as soon as the encoding is finished and transferred/stored successfully.

$client->runJobAndWaitForCompletion($jobConfig);

By the time you read this line, the encoding might already be finished 🙂 If not, we can use the spare time to quickly set up a Bitmovin Player example to play back the created content once it is ready.

Playback

Now that we have successfully created our HLS fMP4 content, we want to play it as well. This is as simple as it is for HLS TS content, because it works in exactly the same way. The minimum player configuration would look like the following:

{
   key: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
   source: {
       hls: "https://example.com/path/to/your/fmp4-hls-content-master-playlist.m3u8"
   }
}

A full example page would look like the following:

<!DOCTYPE html>
<html>
<head>
   <meta charset="utf-8">
   <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
   <title>V6 fMP4 HLS</title>
   <script src="https://bitmovin-a.akamaihd.net/bitmovin-player/stable/6/bitmovinplayer.min.js"></script>
</head>
<body>
<div id="unique-player-id"></div>
<script type="text/javascript">
   var player = bitmovin.player("unique-player-id");
   var conf = {
      key: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      source: {
          hls: "https://example.com/path/to/your/fmp4-hls-content-master-playlist.m3u8"
      }
   };
   player.setup(conf).then(function (value) {
       console.log("Successfully created bitmovin player instance");
   }, function (reason) {
       console.log("Error while creating bitmovin player instance");
   });
</script>
</body>
</html>

With this player configuration and the encoded content from our encoding service, you can now play your HLS fMP4 content everywhere, with one single format.

HLS fMP4 Demo

Now that we have HLS fMP4 content and a player example ready to be used, let's have a look at what actually changed compared to a conventional HLS playlist before we give it a try.
Fragmented MP4 requires an initialization segment, which you might already know from MPEG-DASH. It has to be referenced using the EXT-X-MAP tag so that the player can play the fragmented MP4 segments properly. Therefore, we added an EXT-X-MAP tag to every variant playlist and use EXT-X-VERSION with a value of 6 or higher, as in the example playlist below.
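A minimal fMP4 variant playlist could then look like the following sketch (segment names and durations here are illustrative placeholders, not the actual output of the encoding above):

#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:4
#EXT-X-PLAYLIST-TYPE:VOD
#EXT-X-MAP:URI="video_1080p_init.mp4"
#EXTINF:4.0,
video_1080p_segment_0.m4s
#EXTINF:4.0,
video_1080p_segment_1.m4s
#EXT-X-ENDLIST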

Does it play everywhere?

Below you can see the Bitmovin HTML5 Adaptive Streaming Player in action, playing fragmented MP4 through an HLS manifest. This demo works on iOS 10, macOS, tvOS and all recent browsers, including Edge, Firefox, Chrome and Safari.

How much storage space can be saved?

Based on the video from this blog post, we put together a quick storage and CDN savings case. In the past it was necessary to encode both DASH and HLS TS to play this video back on all browsers and platforms; now HLS fMP4 alone covers everything. The table below shows the output size in bytes for each format and for the combination of DASH and HLS TS. In terms of encoding output, packaging and storage you save 50.89%, because you only need the HLS fMP4 output (263,661,961 B), which is less than half of the combined DASH and HLS TS output (536,845,325 B); the HLS TS output alone is already bigger than the DASH output, since TS is less efficient than fMP4.

                               HLS fMP4        HLS TS          DASH + HLS TS   Savings
Encoding, Packaging, Storage   263,661,961 B   273,183,364 B   536,845,325 B   50.89%
CDN                            263,661,961 B   273,183,364 B   536,845,325 B   3.49%

This advantage also helps on the CDN side, as shown in the second row. HLS fMP4 is more efficient than HLS TS, so for every user you serve you need fewer bits to deliver the same quality. In this example you would immediately save 3.49% on your CDN costs (1 - 263,661,961 / 273,183,364 ≈ 3.49%). The saving could be much bigger with other services, but our TS output is already optimized for low overhead.

Conclusion

All in all, HLS with fMP4 is already very useful, as long as you don't have to deal with DRM-protected content. It greatly reduces your storage consumption, which lowers your overall storage and CDN costs, and it lets you use a more efficient output format across all devices. If you want to give it a try, use our new Bitmovin API – just request an API key for free and try it out.


The post Halve your Encoding, Packaging and Storage Costs – HLS with fragmented MP4 appeared first on Bitmovin.

]]>
Integrate BuyDRM with Bitmovin https://bitmovin.com/blog/integrate-buydrm-multi-drm-system/ Thu, 06 Oct 2016 13:30:05 +0000 http://bitmovin.com/?p=11520 Integrate BuyDRM with the Bitmovin Cloud Encoder This tutorial will show you how to create an encoding with MultiDRM protection using the KeyOS MultiKeyTM Service from BuyDRM as your DRM key provider. Furthermore, you can use our native HTML5 Bitmovin player to play your DRM protected content on every device out there. The following sections...

The post Integrate BuyDRM with Bitmovin appeared first on Bitmovin.

]]>
BuyDRM tutorial Bitmovin Multi-DRM

Integrate BuyDRM with the Bitmovin Cloud Encoder

This tutorial will show you how to create an encoding with Multi-DRM protection using the KeyOS MultiKey™ Service from BuyDRM as your DRM key provider. Furthermore, you can use our native HTML5 Bitmovin Player to play your DRM-protected content on every device out there.
The following sections will explain each step you need to take to achieve a successful encoding with DRM protection using our encoding services and BuyDRM.

Requesting the Keys from BuyDRM’s KeyOS Multikey Service

First of all, we need to get the keys necessary to encrypt your encoded content. To do this we issue an HTTP POST request against BuyDRM's KeyOS MultiKey Service API, which provides the "pssh box" necessary for Widevine DRM and the "Content Key" required for PlayReady DRM.
That request requires 4 parameters:

  • Your “KeyOS User Key”
    It can be obtained from the BuyDRM KeyOS support team
  • KeyID
    It is a randomly generated GUID. You can use the same Key ID for multiple pieces of content that you want to encode and protect; as a result, that "group of content" will use a single license. If you don't want this, we recommend using a unique Key ID every time you make a request to the API.
  • ContentID
    This is also a randomly generated GUID, but is unique within the KeyOS system.
  • MediaID
    You can think of this value as a filename. You can use something like “my file” or an ID that makes sense to you.

These 4 parameters are required in the request body. The whole HTTP POST request looks like the following (see the placeholders for the required parameters):

POST /pck HTTP/1.1
Host: packager.licensekeyserver.com
Content-Type: text/xml; charset=utf-8
SOAPAction: http://tempuri.org/ISmoothPackager/RequestEncryptionInfo
<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
   <s:Body>
       <RequestEncryptionInfo xmlns="http://tempuri.org/">
           <ServerKey>08CC2D0F-DF0C-4217-9460-326B60C41E7E</ServerKey>
           <RequestXml>
               <![CDATA[
                   <KeyOSEncryptionInfoRequest>
                   <APIVersion>5.0.0.2</APIVersion>
                   <DRMType>smooth</DRMType>
                   <EncoderVersion>BuyDRMAP v1.0</EncoderVersion>
                   <UserKey>__YOUR_KEY_OS_USER_KEY__</UserKey>
                   <KeyID>__YOUR_KID__</KeyID>
                   <ContentID>__YOUR_CONTENT_ID__</ContentID>
                   <MediaID>__YOUR_MEDIA_ID__</MediaID>
                   <fl_GeneratePRHeader>true</fl_GeneratePRHeader>
                   <fl_GenerateWVHeader>true</fl_GenerateWVHeader>
                   </KeyOSEncryptionInfoRequest>
                    ]]>
           </RequestXml>
       </RequestEncryptionInfo>
   </s:Body>
</s:Envelope>

Javascript Example

BuyDRM already provides a basic Javascript example (see below), which you can use to test the retrieval of the necessary keys from their API. It relies on jQuery and guid.js.

<!DOCTYPE html>
<html lang="en">
<head>
    <script type="text/javascript" src="http://code.jquery.com/jquery-3.1.0.min.js"></script>
    <script type="text/javascript" src="https://raw.githubusercontent.com/dandean/guid/master/guid.js"></script>
    <meta charset="UTF-8">
    <title>Sample Bitcodin Setup</title>
</head>
<body>
    <script type="text/javascript">
        (function($, Guid) {
            function requestKeysFromKeyOS() {
                // Function is used to translate Base64 encoded values into HEX values.
                function base64ToHex(str) {
                    for (var i = 0, bin = atob(str.replace(/[ \r\n]+$/, '')), hex = []; i < bin.length; ++i) {
                        var tmp = bin.charCodeAt(i).toString(16);
                        if (tmp.length === 1) tmp = '0' + tmp;
                        hex[hex.length] = tmp;
                    }
                    return hex.join('');
                }
                // Get keys required for DRM protection from KeyOS Key Management API. jQuery will be used to make a request.
                var keyosUserKey = 'PLEASE, GET FROM KEYOS SUPPORT';
                var fileName = 'File ' + Guid.raw();
                var kid = Guid.raw();
                var cid = Guid.raw();
                return new Promise(function(fulfill, reject) {
                    console.log('Getting keys from KeyOS API.');
                    $.ajax({
                        type: 'POST',
                        url: 'https://packager.licensekeyserver.net/pck',
                        contentType: 'text/xml; charset=utf-8',
                        headers: {
                            SOAPAction: 'http://tempuri.org/ISmoothPackager/RequestEncryptionInfo'
                        },
                        data: '<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">' +
                                  '<s:Body>' +
                                    '<RequestEncryptionInfo xmlns="http://tempuri.org/">' +
                                      '<ServerKey>08CC2D0F-DF0C-4217-9460-326B60C41E7E</ServerKey>' +
                                      '<RequestXml><![CDATA[' +
                                        '<KeyOSEncryptionInfoRequest>' +
                                          '<APIVersion>5.0.0.2</APIVersion>' +
                                          '<DRMType>smooth</DRMType>' +
                                          '<EncoderVersion>BuyDRMAP v1.0</EncoderVersion>' +
                                          '<UserKey>' + keyosUserKey + '</UserKey>' +
                                          '<KeyID>' + kid + '</KeyID>' +
                                          '<ContentID>' + cid + '</ContentID>' +
                                          '<fl_GeneratePRHeader>true</fl_GeneratePRHeader>' +
                                          '<fl_GenerateWVHeader>true</fl_GenerateWVHeader>' +
                                          '<MediaID>' + fileName + '</MediaID>' +
                                        '</KeyOSEncryptionInfoRequest>' +
                                      ']]></RequestXml>' +
                                    '</RequestEncryptionInfo>' +
                                  '</s:Body>' +
                                '</s:Envelope>',
                        success: function (data, status, jqXHR) {
                            try {
                                console.log('KeyOS API responded.', data);
                                var requestResult = $($.parseXML(jqXHR.responseText)).find('RequestEncryptionInfoResult').text();
                                var response = $($.parseXML(requestResult));
                                if (!response)
                                    throw new Error('Response from KeyOS Key Management API is malformed.');
                                var responseStatus = response.find('Status').text();
                                var responseMessage = response.find('Message').text();
                                var responseLogId = response.find('LogId').text();
                                if (responseStatus === '1')
                                    throw new Error('Response from KeyOS Key Management API contains an error: ' + responseLogId + ', ' + responseMessage);
                                console.log('KeyOS API responded with success.', data);
                                var psshBox = response.find('WVHeader').text();
                                var contentKey = response.find('ContentKey').text();
                                fulfill({
                                    pssh: psshBox,
                                    // Return ContentKey as HEX, not Base64 encoded byte array.
                                    contentKey: base64ToHex(contentKey),
                                    // Return KeyID as HEX, not GUID.
                                    keyId: kid.split('-').join(''),
                                    // Return ContentID as HEX, not GUID.
                                    contentId: cid.split('-').join(''),
                                    prLAUrl: 'http://sldrm.licensekeyserver.com/core/rightsmanager.asmx',
                                    wvLAUrl: 'http://widevine.licensekeyserver.com'
                                });
                            }
                            catch(err) {
                                console.log('KeyOS API responded with error.', err);
                                reject({
                                    error: err.message
                                });
                            }
                        },
                        error: function() {
                            reject({
                                error: 'Sorry, error while trying to get data from KeyOS Key Management API'
                            });
                        }
                    });
                });
            }
            requestKeysFromKeyOS();
        })(jQuery, Guid);
    </script>
</body>
</html>

Create an Encoding with Multi DRM Configuration (PlayReady + Widevine) with our Java API Client

The following configuration can be found in our Java API Client example at line 77. You will need to enter the values you obtained from BuyDRM's API earlier. Based on that example, all you need to do is replace __YOUR_KID__ with your randomly generated Key ID from before, and __BUYDRM_CONTENT_KEY__ and __BUYDRM_PSSH__ with the Content Key and PSSH information you obtained earlier from the request against BuyDRM's API.
CreateJobWithPlayreadyWidevineCombinedDRM.java (@GitHub):

CombinedDrmConfig drmConfig = new CombinedDrmConfig();
drmConfig.kid = "__YOUR_KID__"; //HEX Format required
drmConfig.key = "__BUYDRM_CONTENT_KEY__"; //HEX Format required
drmConfig.laUrl = "http://sldrm.licensekeyserver.com/core/rightsmanager.asmx"; //BuyDRM's PlayReady license acquisition URL (prLAUrl from the example above)
drmConfig.pssh = "__BUYDRM_PSSH__"; //BASE64 encoded string required

BuyDRM has prepared an example based on our JavaScript API client, which combines obtaining the keys from their API with creating a DRM-protected encoding using our encoding service. BuyDRM customers can access it here.

Setup the Bitmovin HTML5 Player with DRM

We already have a tutorial in place, which explains how to configure the Bitmovin HTML5 Player with BuyDRM in order to play your DRM protected content.
We also offer API clients for the most popular languages, including PHP, Java, Node.js, Python, Ruby and C#/.NET. You can try out our encoding service for free – just sign up for a free account, which allows you up to 10 encodings or 2.5 GB per month.
Our encoding API is well documented, and you can find information on how to use it in our support section.

You may also find our tutorial on using multiple DRMs useful.


The post Integrate BuyDRM with Bitmovin appeared first on Bitmovin.

]]>
Integrate Axinom Multi-DRM with Bitmovin https://bitmovin.com/blog/integrate-axinom-multi-drm-bitmovin/ Mon, 12 Sep 2016 14:49:27 +0000 http://bitmovin.com/?p=10627 The following tutorial will show you how to use Axinom DRM together with the Bitmovin Cloud Encoding system to create a video distribution platform, ready to target multiple devices and browsers. By following this tutorial and using the supporting links that you will find below, you can create a video on demand service with the...

The post Integrate Axinom Multi-DRM with Bitmovin appeared first on Bitmovin.

]]>
Multi DRM with Axinom and Bitmovin

The following tutorial will show you how to use Axinom DRM together with the Bitmovin Cloud Encoding system to create a video distribution platform, ready to target multiple devices and browsers.

By following this tutorial and using the supporting links below, you can create a video-on-demand service with the same speed and quality as Netflix. Axinom provides, as part of Axinom DRM, a License Server product that supports multiple DRM technologies. The DRM technologies currently supported by Axinom DRM are Microsoft PlayReady, Widevine Modular and Apple FairPlay Streaming. In this blog post we give a high-level overview of integrating Axinom Multi-DRM with Bitmovin.
Request your evaluation account today at https://drm.axinom.com/evaluation-account/ and receive one month of free evaluation along with a fact sheet and full documentation of Axinom DRM.

Encode and DRM-Protect Content with Bitmovin

In order to produce DRM-protected content the Bitmovin Cloud Encoding API is used. The easiest way to get started is to use one of our API clients.

Create an Encoding Job with a PlayReady Config

Let’s look at the example using the PHP API client. The important part here is to use the PlayReadyDRMConfig object. For testing you can also use the data provided in our example below:

$playreadyDRMConfig = new PlayReadyDRMConfig();
$playreadyDRMConfig->keySeed = 'KEY_SEED';
$playreadyDRMConfig->kid = '7459975db2f848eda32556e4f34d19c7';
$playreadyDRMConfig->laUrl = 'https://drm-playready-licensing.axtest.net/AcquireLicense';
$playreadyDRMConfig->method = DRMEncryptionMethods::MPEG_CENC;
$jobConfig = new JobConfig();
...
$jobConfig->drmConfig = $playreadyDRMConfig;
$job = Job::create($jobConfig);

The parameters of the PlayReady configuration have the following meaning:

  • keySeed: The Key Seed to be used to generate the Content Key based on the Content Key ID. Use the Key Seed provided to you in your Axinom DRM account fact sheet.
  • kid: The content key ID in hex format.
  • laUrl: Axinom DRM testing environment’s PlayReady license acquisition URL.
  • method: Currently we support MPEG-CENC.

Create an Encoding Job with a Widevine Config

Let’s look at the example using the PHP API client. The important part here is to set the WidevineDRMConfig object providing your Widevine data.

$widevineDRMConfig = new WidevineDRMConfig();
$widevineDRMConfig->provider = 'SIGNER';
$widevineDRMConfig->signingKey = 'SIGNING_KEY';
$widevineDRMConfig->signingIV = 'SIGNING_IV';
$widevineDRMConfig->requestUrl = 'https://keyserver.axtest.net/api/GetContentKey';
$widevineDRMConfig->contentId = '7459975d-b2f8-48ed-a325-56e4f34d19c7';
$widevineDRMConfig->method = DRMEncryptionMethods::MPEG_CENC;
$jobConfig = new JobConfig();
...
$jobConfig->drmConfig = $widevineDRMConfig;
$job = Job::create($jobConfig);

The parameters of the Widevine configuration have the following meaning:

  • provider: The name of the identity that signs the Widevine request. Use the name provided to you in your Axinom DRM account fact sheet.
  • signingKey: The key to be used to sign the Widevine request. Use the key provided to you in your Axinom DRM account fact sheet.
  • signingIV: The IV to be used to sign the Widevine request. Use the IV provided to you in your Axinom DRM account fact sheet.
  • requestUrl: The URL of the Axinom Widevine Key Server’s GetContentKey API function.
  • contentId: The ID of a Content Key to be requested.
  • method: Currently we support MPEG-CENC.

Axinom offers as part of Axinom DRM the Axinom Widevine Key Server which implements Google’s Common Encryption API for Widevine DRM. Additionally, the Axinom Widevine Key Server adds support for other DRM systems as well. See the Axinom DRM documentation provided to you after signup for more details.

Create an Encoding Job with a Multi DRM Config

It is also possible to encrypt your content to be played by multiple DRM systems, in this case PlayReady and Widevine clients. You need to get a common content encryption key and key identifier from both systems. With these values you can encrypt the content as follows:

$combinedWidevinePlayreadyDRMConfig = new CombinedWidevinePlayreadyDRMConfig();
$combinedWidevinePlayreadyDRMConfig->pssh = 'CAESEAs1DAhLy0uWqHOMJPbpkcUaDXdpZGV2aW5lX3Rlc3QiJDBCMzUwQzA4LTRCQ0ItNEI5Ni1BODczLThDMjRGNkU5OTFDNSoCSEQ=';
$combinedWidevinePlayreadyDRMConfig->key = 'CONTENT_KEY';
$combinedWidevinePlayreadyDRMConfig->kid = '7459975db2f848eda32556e4f34d19c7';
$combinedWidevinePlayreadyDRMConfig->laUrl = 'https://drm-playready-licensing.axtest.net/AcquireLicense';
$combinedWidevinePlayreadyDRMConfig->method = DRMEncryptionMethods::MPEG_CENC;
$jobConfig = new JobConfig();
...
$jobConfig->drmConfig = $combinedWidevinePlayreadyDRMConfig;
$job = Job::create($jobConfig);

The parameters of the combined configuration have the following meaning:

  • key: This is the common content encryption key in hex format.
  • kid: This is the common content key ID in hex format.
  • laUrl: Axinom DRM testing environment’s PlayReady license acquisition URL.
  • pssh: The Widevine pssh box that can be obtained using the Axinom Widevine Key Server.
  • method: Currently we only support MPEG-CENC.

Create an Axinom DRM Entitlement Message and an Axinom DRM License Token

The Axinom DRM Entitlement Message is an Axinom DRM-specific message that authorizes and instructs the Axinom DRM License Server to generate a license. It is represented as a JSON data structure and is delivered to the Axinom DRM License Server along with a license request, as a signed JSON Web Token (henceforth referred to as the Axinom DRM License Token) in the X-AxDRM-Message HTTP header.
The following is a sample Axinom DRM Entitlement Message.

{
    "version": 1,
    "com_key_id": "69e54088-e9e0-4530-8c1a-1eb6dcd0d14e",
    "message": {
        "type": "entitlement_message",
        "keys": [
            {
                "id": "7459975d-b2f8-48ed-a325-56e4f34d19c7"
            }
        ]
    }
}
  • com_key_id: The ID of the Communication Key that is used for signing when encoding the message into an Axinom DRM License Token.
  • keys[0].id: The ID of the Content Key to be included in the license.

The following is a sample Axinom DRM Entitlement Message encoded into an Axinom DRM License Token.

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiNjllNTQwODgtZTllMC00NTMwLThjMWEtMWViNmRjZDBkMTRlIiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiNzQ1OTk3NWQtYjJmOC00OGVkLWEzMjUtNTZlNGYzNGQxOWM3In1dfX0.2wf-pgc-TcSMCuIT6nCCk3nOw8S-fd_S0K8GkskfOrU

Note: A customer of Axinom DRM will typically implement an authorization backend to serve Axinom DRM License Tokens to player applications. See the Axinom DRM documentation for recommended workflows.
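As an illustration of what such a backend does internally, here is a minimal PHP sketch that produces a token like the one above using HS256. It assumes your Communication Key is available as a Base64 string, and the helper name base64UrlEncode exists only for this example; in production you would typically use a maintained JWT library instead.

// JWTs use the URL-safe Base64 alphabet without padding (RFC 7515).
function base64UrlEncode($data) {
    return rtrim(strtr(base64_encode($data), '+/', '-_'), '=');
}

$communicationKey = base64_decode('INSERT YOUR BASE64 COMMUNICATION KEY HERE');

$entitlementMessage = array(
    'version' => 1,
    'com_key_id' => '69e54088-e9e0-4530-8c1a-1eb6dcd0d14e',
    'message' => array(
        'type' => 'entitlement_message',
        'keys' => array(
            array('id' => '7459975d-b2f8-48ed-a325-56e4f34d19c7')
        )
    )
);

// Sign header.payload with HMAC-SHA256 using the Communication Key.
$header = array('typ' => 'JWT', 'alg' => 'HS256');
$signingInput = base64UrlEncode(json_encode($header)) . '.' . base64UrlEncode(json_encode($entitlementMessage));
$signature = hash_hmac('sha256', $signingInput, $communicationKey, true);
$licenseToken = $signingInput . '.' . base64UrlEncode($signature);
// $licenseToken is what the player sends in the X-AxDRM-Message header.

The exact token bytes depend on JSON serialization details, so the output may differ from the sample above while still being a valid token for the same Entitlement Message.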

Setup the Bitmovin Player to Work with Axinom DRM

In order for the Bitmovin Adaptive Streaming HTML5 Player to request DRM licenses from the Axinom DRM License Server, the player must be configured accordingly: the Axinom DRM License Server's license acquisition URLs for the desired DRM systems must be specified, and the configuration must instruct the player to include the Axinom DRM License Token in the X-AxDRM-Message HTTP header of license requests.

Axinom DRM License Server License Acquisition URLs

  • PlayReady: http://drm-playready-licensing.axtest.net/AcquireLicense
  • Widevine: http://drm-widevine-licensing.axtest.net/AcquireLicense

Note: Both HTTP and HTTPS can be used.

Configuring the Bitmovin Adaptive Streaming HTML5 Player

The sample configuration below specifies the Axinom DRM License Server's PlayReady and Widevine license acquisition URLs. For both DRMs, the player configuration instructs the player to include an Axinom DRM License Token in the X-AxDRM-Message HTTP header of license requests. The Axinom DRM License Token must be the same token acquired as described in "Create an Axinom DRM Entitlement Message and an Axinom DRM License Token".

var conf = {
  ...
  source: {
          hls: 'https://yourserver/manifests/stream.m3u8',
          dash: 'https://yourserver/manifests/stream.mpd',
          drm: {
            widevine: {
                LA_URL: 'https://drm-widevine-licensing.axtest.net/AcquireLicense',
                headers: [{
                  name: 'X-AxDRM-Message',
                  value: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiNjllNTQwODgtZTllMC00NTMwLThjMWEtMWViNmRjZDBkMTRlIiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiNzQ1OTk3NWQtYjJmOC00OGVkLWEzMjUtNTZlNGYzNGQxOWM3In1dfX0.2wf-pgc-TcSMCuIT6nCCk3nOw8S-fd_S0K8GkskfOrU'
                }]
            },
            playready: {
                LA_URL: 'https://drm-playready-licensing.axtest.net/AcquireLicense',
                headers: [{
                  name: 'X-AxDRM-Message',
                  value: 'eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJ2ZXJzaW9uIjoxLCJjb21fa2V5X2lkIjoiNjllNTQwODgtZTllMC00NTMwLThjMWEtMWViNmRjZDBkMTRlIiwibWVzc2FnZSI6eyJ0eXBlIjoiZW50aXRsZW1lbnRfbWVzc2FnZSIsImtleXMiOlt7ImlkIjoiNzQ1OTk3NWQtYjJmOC00OGVkLWEzMjUtNTZlNGYzNGQxOWM3In1dfX0.2wf-pgc-TcSMCuIT6nCCk3nOw8S-fd_S0K8GkskfOrU'
                }]
            }
        }
     }
};

The Bitmovin Player API is well documented and perfectly equipped to deal with Multi DRM integrations. You can find more information on how to configure it in our support section under Player Documentation.

The post Integrate Axinom Multi-DRM with Bitmovin appeared first on Bitmovin.

]]>
Webhooks for the Video Encoding API https://bitmovin.com/blog/webhooks-encoding-api/ Fri, 29 Jul 2016 14:33:58 +0000 http://bitmovin.com/?p=10146 Our new notification service gives you real time status updates on your video encodings with RESThooks. Have you ever been on a family holiday and had your kids nagging you the entire way: “Are we there yet, are we there yet!?”. If you have, then you have some idea what it is like for an application trying to find...

The post Webhooks for the Video Encoding API appeared first on Bitmovin.

]]>
Webhooks in the Bitmovin API provide a better way to monitor processes

Our new notification service gives you real-time status updates on your video encodings with RESTHooks.

Have you ever been on a family holiday and had your kids nagging you the entire way: “Are we there yet, are we there yet!?”. If you have, then you have some idea what it is like for an application trying to find out if a process has finished running. Constant requests for the current state of a process can be resource intensive, not to mention annoying.
Polling the state of an encoding job in your adaptive video workflow is a perfect example of this situation. It requires extra implementation and maintenance and also increases the server load. It would be much more efficient for our system to notify your system as soon as the encoding is complete. We aren’t holding out too much hope that the kids will settle for this workflow, but we can now offer exactly this solution for your encoding infrastructure through the latest version of our API.

How does it work?

Given the many and varied methods available for implementing webhooks, there were a lot of potential solutions. A REST interface was a logical choice, and when we came across RESTHooks, we liked it!
In this first release we offer two resources: Events and Subscriptions.
The four events available are:

  • encoding.finished
  • encoding.error
  • transfer.finished
  • transfer.error

We will continue to expand this list in subsequent API releases.
In order for your system to be notified when one of these events fires, you create a subscription by assigning a callback URL. This URL will receive a POST request with a defined request body. If our request to your callback URL fails, we retry up to three more times using an exponential backoff strategy before marking the triggered event as "aborted".
And it’s as simple as that. Once that notification hits your system, you can do whatever you like with it to improve your view of, and control over, your encoding processes. A minimal receiver could look like the sketch below.
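For illustration only, here is a minimal PHP sketch of such a callback endpoint. It assumes the JSON payload format shown in the trigger example further below (jobId and status fields); adapt it to your own workflow.

<?php
// Read and decode the JSON payload POSTed by the notification service.
$payload = json_decode(file_get_contents('php://input'), true);

if ($payload !== null && isset($payload['status'], $payload['jobId']) && $payload['status'] === 'Finished') {
    // React to the finished encoding, e.g. update your own database or trigger publishing.
    error_log('Encoding job ' . $payload['jobId'] . ' finished.');
}

// Respond with HTTP 200 so the callback attempt is recorded as successful and not retried.
http_response_code(200);
echo 'ok';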

Example Time!

Firstly, you probably want to know which events are available. I already told you that! But it’s a fair question, because by the time I finish typing this post we may have added a few more ;-). Keep up to date with the latest features in our release notes.

GET Notifications/Events

Each event comes with a unique ID, which is needed to create a subscription.
Response:

[
  {
    "id": "46faf62b-4f80-4f8a-b9e4-b23bf050f43c",
    "name": "encoding.finished",
    "description": "Occurs once an encoding is finished and ready to be transferred or to be consumed by other bitmovin services"
  },
  {
    "id": "c8115a85-4ce9-4836-9023-a6db1b45bd04",
    "name": "transfer.finished",
    "description": "Occurs once a transfer has finished successfully"
  },
  {
    "id": "175dd18f-602e-449d-8ad8-714645ae2e4e",
    "name": "encoding.error",
    "description": "Occurs if an encoding has failed"
  },
  {
    "id": "ffcdd105-8fc7-408b-ab1f-ee2090966df2",
    "name": "transfer.error",
    "description": "Occurs if a transfer has failed"
  }
]

POST Notifications/Subscriptions

Creating a subscription for an event is quite easy. All you need is a valid endpoint URL that is able to receive a POST request. The request body for a subscription to the "encoding.finished" event would look like this:
Request

{
  "eventId": "46faf62b-4f80-4f8a-b9e4-b23bf050f43c",
  "url": "<your-callback-url>"
}

Response:

{
  "id": "<your-subscription-id>",
  "event": {
    "id": "46faf62b-4f80-4f8a-b9e4-b23bf050f43c",
    "name": "encoding.finished",
    "description": "Occurs once an encoding is finished and ready to be transferred or to be consumed by other bitmovin services"
  },
  "url": "<your-callback-url>"
}

GET Notifications/Subscriptions/[id]/Trigger

The subscription ID from before can be used to get a list of all events that have been triggered so far. You will also see the latest callback attempt and how often we have tried to send a request to your callback URL.
Response:

[
  {
    "id": "<event-trigger-id>",
    "callbackAttempts": [
      {
        "id": "<callback-attempt-id-1>",
        "connectionTime": "2016-07-15T17:24:36.236",
        "attemptNumber": 1,
        "errorMessage": null,
        "status": "successful",
        "responseCode": 200,
        "responseData": "ok",
        "method": "POST"
      }
    ],
    "subscription": {
      "id": "<subscription-id>",
      "event": {
        "id": "<event-id>",
        "name": "encoding.finished",
        "description": "Occurs once an encoding is finished and ready to be transferred or to be consumed by other bitmovin services"
      },
      "url": "<your-callback-url>"
    },
    "payload": "{\"jobId\":123456,\"status\":\"Finished\",\"speed\":\"premium\", ... }",
    "status": "completed",
    "latestCallbackAttempt": {
      "id": "<callback-attempt-id-1>",
      "connectionTime": "2016-07-15T17:24:36.236",
      "attemptNumber": 1,
      "errorMessage": null,
      "status": "successful",
      "responseCode": 200,
      "responseData": "ok",
      "method": "POST"
    }
  }
]

More Examples and Documentation

We have already updated our API reference, where you will find all available calls to interact with this service. We have also integrated our notification service into the Java API client; other API clients will follow shortly.
As this service is currently in beta, we greatly appreciate your feedback, so please tell us what you think about it: what you like, what you hate, and what you think we should add next.

The post Webhooks for the Video Encoding API appeared first on Bitmovin.

]]>
Dropbox Video Streaming – Encoding Integration https://bitmovin.com/blog/dropbox-video-streaming-encoder-inputs-outputs/ Tue, 07 Apr 2015 07:48:28 +0000 http://bitmovin.com/?p=7569 Dropbox Video Streaming Integration and New User Interface The Bitmovin team has worked hard to improve the user interface of our portal and now the latest version is ready to use. Among the many other improvements, the mobile view has been optimized to enable our customers to monitor the status of their encodings from any place with any...

The post Dropbox Video Streaming – Encoding Integration appeared first on Bitmovin.

]]>
Dropbox Video Streaming Integration and New User Interface

The Bitmovin team has worked hard to improve the user interface of our portal and now the latest version is ready to use. Among the many other improvements, the mobile view has been optimized to enable our customers to monitor the status of their encodings from any place with any device. We hope you like it as much as we do!
Bitmovin Portal - Cloud Encoding Input Overview

Dropbox Integration

Bitmovin now integrates seamlessly with Dropbox, which can be used to deliver input files into our system. This means you can host your files on Dropbox and encode them directly with Bitmovin. We can also automatically transfer the completed files back to your Dropbox account when the encoding is finished.
Although video streaming directly from Dropbox in a production workflow is not recommended, it can be very useful for testing purposes, and this integration opens up many new possibilities to improve your video streaming workflow.
 
Encode MPEG-DASH & HLS directly from your Dropbox

The post Dropbox Video Streaming – Encoding Integration appeared first on Bitmovin.

]]>