MPEG-DASH (Dynamic Adaptive Streaming over HTTP)

Welcome to our comprehensive guide to MPEG-DASH in 2022. In this quick and informative article, Bitmovin CTO and Co-founder Christopher Mueller describes everything you need to know about MPEG-DASH (Dynamic Adaptive Streaming over HTTP).

Chapters:

  • The History of MPEG-DASH
  • What is MPEG-DASH (in a Nutshell)?
  • Media Presentation Description (MPD)
  • Segment Referencing Schemes
  • Conclusion and Further Reading

1: The History of MPEG-DASH (Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1)

MPEG-DASH (Dynamic Adaptive Streaming over HTTP, ISO/IEC 23009-1) is a vendor-independent, international standard ratified by MPEG and ISO. Previous adaptive streaming technologies – such as Apple HLS, Microsoft Smooth Streaming, Adobe HDS, etc. – were released by individual vendors, with limited support from vendor-independent streaming servers and playback clients. Because such vendor lock-in is undesirable, standardization bodies started a harmonization process that resulted in the ratification of MPEG-DASH in 2012.


Key Targets and Benefits of MPEG-DASH:

  • Reduction of startup delays and buffering/stalls during the video
  • Continuous adaptation to the bandwidth situation of the client
  • Client-based streaming logic enabling the highest scalability and flexibility
  • Use of existing and cost-effective HTTP-based CDNs, proxies and caches
  • Efficient bypassing of NATs and firewalls through the use of HTTP
  • Common Encryption: signalling, delivery and utilization of multiple concurrent DRM schemes from the same file
  • Simple splicing and (targeted) ad insertion
  • Support for efficient trick modes

In recent years, MPEG-DASH has been integrated into new standardization efforts, e.g., the HTML5 Media Source Extensions (MSE), enabling DASH playback via the HTML5 video and audio tags, as well as the HTML5 Encrypted Media Extensions (EME), enabling DRM-protected playback in web browsers. Furthermore, DRM protection with MPEG-DASH is harmonized across different systems with MPEG-CENC (Common Encryption), and MPEG-DASH playback on different Smart TV platforms is enabled via its integration in Hybrid Broadcast Broadband TV (HbbTV 1.5 and HbbTV 2.0). The usage of the MPEG-DASH standard has also been simplified by industry efforts around the DASH Industry Forum and their DASH-AVC/264 recommendations, as well as forward-looking approaches such as the DASH-HEVC/265 recommendation on the usage of H.265/HEVC within MPEG-DASH.

Here is a table showing MPEG-DASH Standards:

[Table: MPEG-DASH standards]

Today, MPEG-DASH is gaining more and more deployments, accelerated by VOD platforms such as Netflix and Google, which have adopted this important standard. With these two major sources of internet traffic taken into account, 50% of total internet traffic is already MPEG-DASH.

2: What is MPEG-DASH (in a Nutshell)?

The basic idea of MPEG-DASH is this: Chop the media file into segments that can be encoded at different bitrates or spatial resolutions. The segments are provided on a web server and can be downloaded through HTTP standard-compliant GET requests (as shown in the figure below) where the HTTP Server serves three different qualities, i.e., Low, Medium, and Best, chopped into segments of equal length. The adaptation to the bitrate or resolution is done on the client-side for each segment, e.g., the client can switch to a higher bitrate – if bandwidth permits – on a per-segment basis. This has several advantages because the client knows its capabilities, received throughput, and the context of the user best.
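
To make the client-driven adaptation concrete, here is a minimal sketch of such a per-segment adaptation loop in C#. This is an illustration only: the bitrate ladder and URL pattern are hypothetical, and a real player would additionally manage a playback buffer, parse container data, and handle errors.

using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class AdaptiveClientSketch
{
    // Hypothetical bitrate ladder (bits per second), lowest to highest quality.
    static readonly int[] Bitrates = { 400_000, 1_200_000, 2_400_000 };

    static async Task Main()
    {
        using var http = new HttpClient();
        double measuredBps = Bitrates[0]; // start conservatively at the lowest quality

        for (int segment = 0; segment < 10; segment++)
        {
            // Pick the highest quality whose bitrate fits the measured throughput.
            int quality = 0;
            for (int i = Bitrates.Length - 1; i >= 0; i--)
                if (Bitrates[i] <= measuredBps) { quality = i; break; }

            var url = $"http://example.com/video-{Bitrates[quality] / 1000}k/segment-{segment}.m4s";
            var sw = Stopwatch.StartNew();
            byte[] data = await http.GetByteArrayAsync(url); // plain HTTP GET per segment
            sw.Stop();

            // Update the throughput estimate from the last download (bits / seconds).
            measuredBps = data.Length * 8 / sw.Elapsed.TotalSeconds;
            Console.WriteLine($"Segment {segment}: quality {quality}, ~{measuredBps / 1e6:F1} Mbps measured");
        }
    }
}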

Here’s an example of an MPEG-DASH workflow:

[Figure: MPEG-DASH workflow]

In order to describe the temporal and structural relationships between segments, MPEG-DASH introduced the so-called Media Presentation Description (MPD). The MPD is an XML file that represents the different qualities of the media content and the individual segments of each quality with HTTP Uniform Resource Locators (URLs). This structure binds each segment to its bitrate and resolution, among other properties such as the segment's start time and duration. As a consequence, each client will first request the MPD, which contains the temporal and structural information for the media content, and based on that information it will request the individual segments that best fit its requirements.
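
As a concrete illustration of this first step, the following C# sketch downloads an MPD and lists the bandwidth of each Representation it contains. The manifest URL is hypothetical, and namespace handling is deliberately simplified by matching on local element names.

using System;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;
using System.Xml.Linq;

class MpdSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // Hypothetical manifest URL; any MPEG-DASH MPD would work here.
        string xml = await http.GetStringAsync("http://example.com/manifest.mpd");
        var mpd = XDocument.Parse(xml);

        // Match elements by local name to avoid handling the DASH XML namespace explicitly.
        var representations = mpd.Descendants().Where(e => e.Name.LocalName == "Representation");

        foreach (var r in representations)
            Console.WriteLine($"Representation id={r.Attribute("id")?.Value} " +
                              $"bandwidth={r.Attribute("bandwidth")?.Value}");
    }
}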

3: Media Presentation Description (MPD)

The MPEG-DASH Media Presentation Description (MPD) is a hierarchical data model. Each MPD can contain one or more Periods. Each of those Periods contains media components, such as video components (e.g., different view angles or different codecs), audio components for different languages or with different types of information (e.g., director's comments), subtitle or caption components, etc. Those components have certain characteristics, like bitrate, frame rate, audio channels, etc., which do not change during one Period. Nevertheless, the client is able to adapt during a Period according to the bitrates, resolutions, codecs, etc. that are available in that Period.


Furthermore, a Period can separate the content, e.g., for ad insertion or changing the camera angle in a live football game. For example, if an ad should only be available in high definition while the main content is available from standard definition to high definition, you would simply introduce a dedicated Period for the ad which contains only the ad content in high definition.
Before and after this Period, other Periods contain the actual content (e.g., the movie) in multiple bitrates and resolutions from standard to high definition.

What are AdaptationSets?

Typically, media components such as video, audio or subtitles/captions, etc. are arranged in AdaptationSets. Each Period can contain one or more AdaptationSets that enable the grouping of different multimedia components that logically belong together.
For example, components with the same codec, language, resolution, audio channel format (e.g., 5.1, stereo), etc. could be within the same AdaptationSet. This mechanism allows the client to eliminate a range of multimedia components that do not fulfill its requirements. A Period can also contain a Subset that enables the restriction of combinations of AdaptationSets and expresses the intention of the creator of the MPD. For example, allowing high definition content only with 5.1 audio channel format.

The following diagram shows an example of the MPD model:

[Figure: MPD model]

An AdaptationSet consists of a set of Representations containing interchangeable versions of the respective content, such as different resolutions, bitrates, etc. Although a single Representation would be enough to provide a playable stream, multiple Representations give the client the possibility to adapt the media stream to its current network conditions and bandwidth requirements, enabling smooth playback.


Of course, there are further characteristics beyond bandwidth that describe the different Representations and enable adaptation. Representations may differ in the codec used, the decoding complexity and therefore the necessary CPU resources, or the rendering technology, to name a few examples. Representations are chopped into Segments to enable switching between individual Representations during playback. Those Segments are described by a URL and, in certain cases, by an additional byte range if the segments are stored in a bigger, continuous file.


The Segments in a Representation usually have the same length in terms of time and are arranged according to the media presentation timeline, which represents the timeline for synchronization and enables smooth switching between Representations during playback. For live streaming scenarios, Segments can also have an availability time, signalled as wall-clock time, from which they are accessible. In contrast to other systems, MPEG-DASH does not restrict the segment length or give advice on the optimal length; this can be chosen depending on the given scenario. Longer Segments allow more efficient compression, as the Group of Pictures (GOP) can be longer, and less network overhead, as each Segment is requested through HTTP and each request introduces a certain amount of HTTP overhead. In contrast, shorter Segments are used for live scenarios as well as for highly variable bandwidth conditions like mobile networks, as they enable faster and more flexible switching between individual bitrates.

Subsegments

Segments may also be subdivided into smaller Subsegments, which represent a set of smaller access units within the given Segment. In this case, a Segment index is available in the Segment, describing the presentation time range and byte position of the Subsegments; the client may download it in advance to generate the appropriate Subsegment requests using HTTP 1.1 byte range requests. During playback, arbitrary switching between Representations is not possible at every point in the stream, and certain constraints have to be considered: Segments are not allowed to overlap, and dependencies between Segments are also not allowed. To enable switching between Representations, MPEG-DASH introduced Stream Access Points (SAP) at which switching is possible. For example, each Segment typically begins with an IDR frame (in H.264/AVC), so that the client can switch Representations at every Segment boundary.

4: Segment Referencing Schemes

Segments are typically referenced through URLs as defined in RFC 3986, using HTTP or HTTPS, possibly restricted by a byte range. The byte range can be signalled through the range attribute and must be compliant with RFC 2616. Segments are part of a Representation, while elements like BaseURL, SegmentBase, SegmentList, and SegmentTemplate can add additional information, such as location, availability, and further properties. Specifically, a Representation should contain only one of the following options:

  • one or more SegmentList elements
  • one SegmentTemplate
  • one or more BaseURL elements, at most one SegmentBase element and no SegmentTemplate or SegmentList element.

SegmentBase

SegmentBase is the simplest way of referencing segments in the MPEG-DASH standard. It is used when only one media segment is present per Representation, which is then referenced through a URL in the BaseURL element. If a Representation should contain more segments, either SegmentList or SegmentTemplate must be used.


For example, a Representation using SegmentBase could look like this:




<Representation mimeType="video/mp4"
                   frameRate="24"
                   bandwidth="1558322"
                   codecs="avc1.4d401f" width="1277" height="544">
  <BaseURL>http://cdn.bitmovin.net/bbb/video-1500k.mp4</BaseURL>
  <SegmentBase indexRange="0-834"/>
</Representation>


The Representation example above references a single segment through the BaseURL, which is the 1500 kbps video quality of the corresponding content. The location of the segment index is described by the indexRange attribute of SegmentBase. This means that the information about Random Access Points (RAP) and other initialization information is contained in the byte range 0-834 at the beginning of the file.
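
A client can fetch just this index ahead of the media data with an HTTP byte range request. Here is a minimal C# sketch, using the BaseURL and indexRange values from the example above:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class IndexRangeSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        var request = new HttpRequestMessage(HttpMethod.Get,
            "http://cdn.bitmovin.net/bbb/video-1500k.mp4");
        // Request only the segment index, i.e., the byte range given by indexRange="0-834".
        request.Headers.Range = new RangeHeaderValue(0, 834);

        using var response = await http.SendAsync(request);
        byte[] index = await response.Content.ReadAsByteArrayAsync();
        Console.WriteLine($"Status: {response.StatusCode}, index size: {index.Length} bytes");
    }
}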

SegmentList

The SegmentList contains a list of SegmentURL elements, which should be played back by the client in the order in which they occur in the MPD. A SegmentURL element contains a URL to a segment and possibly a byte range. Additionally, an index segment can occur at the beginning of the SegmentList.

Here is an example of Representation using SegmentList:

<Representation mimeType="video/mp4"
                   frameRate="24"
                   bandwidth="1558322"
                   codecs="avc1.4d401f" width="1277" height="544">
  <SegmentList duration="10">
    <Initialization sourceURL="http://cdn.bitmovin.net/bbb/video-1500/init.mp4"/>
    <SegmentURL media="http://cdn.bitmovin.net/bbb/video-1500/segment-0.m4s"/>
    <SegmentURL media="http://cdn.bitmovin.net/bbb/video-1500/segment-1.m4s"/>
    <SegmentURL media="http://cdn.bitmovin.net/bbb/video-1500/segment-2.m4s"/>
    <SegmentURL media="http://cdn.bitmovin.net/bbb/video-1500/segment-3.m4s"/>
    <SegmentURL media="http://cdn.bitmovin.net/bbb/video-1500/segment-4.m4s"/>
  </SegmentList>
</Representation>

SegmentTemplate

The SegmentTemplate element provides a mechanism to construct a list of segments from a given template: specific identifiers in the template are substituted with dynamic values to create the list of segments. This has several advantages. SegmentList-based MPDs can become very large because each segment needs to be referenced individually, whereas a SegmentTemplate can describe how to build such a list in just a few lines.

Here is a number-based SegmentTemplate:

<Representation mimeType="video/mp4"
                   frameRate="24"
                   bandwidth="1558322"
                   codecs="avc1.4d401f" width="1277" height="544">
  <SegmentTemplate media="http://cdn.bitmovin.net/bbb/video-1500/segment-$Number$.m4s"
                      initialization="http://cdn.bitmovin.net/bbb/video-1500/init.mp4"
                      startNumber="0"
                      timescale="24"
                      duration="48"/>
</Representation>


The example above shows the number-based SegmentTemplate mechanism. Instead of multiple individual segment references through SegmentURL, as shown in the SegmentList example, a SegmentTemplate describes the same use case in just a few lines, which makes the MPD more compact. This is especially useful for longer movies with multiple Representations, where an MPD with a SegmentList could grow to several megabytes. That would heavily increase the startup latency of a stream, as the client has to fetch the MPD before it can start the actual streaming process. The sketch below shows how such a template expands into individual segment URLs.
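
As an illustration of the substitution, this small C# sketch expands the $Number$ identifier for the first few segments, using the attribute values from the example above:

using System;

class NumberTemplateSketch
{
    static void Main()
    {
        // Values from the number-based SegmentTemplate example above.
        string media = "http://cdn.bitmovin.net/bbb/video-1500/segment-$Number$.m4s";
        int startNumber = 0;

        // duration="48" at timescale="24" means each segment covers 2 seconds of media.
        for (int i = startNumber; i < startNumber + 4; i++)
            Console.WriteLine(media.Replace("$Number$", i.ToString()));
    }
}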

Time-Based SegmentTemplate

The SegmentTemplate element can also contain a $Time$ identifier, which is substituted with the segment start time derived from the t and d attributes of the SegmentTimeline. The SegmentTimeline element provides an alternative to the duration attribute, with additional features such as:

  • specifying arbitrary segment durations
  • specifying exact segment durations
  • specifying discontinuities in the media timeline

The SegmentTimeline also uses run-length compression, which is especially efficient for a sequence of segments with the same duration. When SegmentTimeline is used with SegmentTemplate, the following conditions must apply:

  • at least one sidx box shall be present
  • all values of the SegmentTimeline shall describe accurate timing, equal to the information in the sidx box

Here's an example of an MPD excerpt with a SegmentTemplate based on a SegmentTimeline:

<Representation mimeType="video/mp4"
                   frameRate="24"
                   bandwidth="1558322"
                   codecs="avc1.4d401f" width="1277" height="544">
  <SegmentTemplate media="http://cdn.bitmovin.net/bbb/video-1500/segment-$Time$.m4s"
                      initialization="http://cdn.bitmovin.net/bbb/video-1500/init.mp4"
                      timescale="24">
    <SegmentTimeline>
      <S t="0" d="48" r="5"/>
    </SegmentTimeline>
  </SegmentTemplate>
</Representation>


The first resulting segment requests of the client would be as follows (a sketch that expands the full timeline follows the list):

  • http://cdn.bitmovin.net/bbb/video-1500/init.mp4
  • http://cdn.bitmovin.net/bbb/video-1500/segment-0.m4s
  • http://cdn.bitmovin.net/bbb/video-1500/segment-48.m4s
  • http://cdn.bitmovin.net/bbb/video-1500/segment-96.m4s
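
To make the $Time$ substitution concrete, this C# sketch expands the single <S t="0" d="48" r="5"/> entry from the example into all six segment start times and the resulting URLs:

using System;

class TimelineTemplateSketch
{
    static void Main()
    {
        // Values from the <S t="0" d="48" r="5"/> entry above:
        // start time t, duration d, and r additional repetitions (six segments in total).
        long t = 0, d = 48;
        int r = 5;
        string media = "http://cdn.bitmovin.net/bbb/video-1500/segment-$Time$.m4s";

        for (int i = 0; i <= r; i++)
        {
            long time = t + i * d; // 0, 48, 96, ... at timescale 24, i.e., every 2 seconds
            Console.WriteLine(media.Replace("$Time$", time.ToString()));
        }
    }
}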

5: Conclusion and Further Reading

MPEG-DASH is a very broad standard, and this is just a brief overview of some of its essential features and mechanisms.
We continue to write informative posts about the MPEG-DASH standard. In the meantime, you can try out MPEG-DASH on your own and encode content to MPEG-DASH through a Cloud-based Encoding Service.

Did you know?

Bitmovin has a range of video streaming services that can help you deliver content to your customers effectively.
Its variety of features allows you to create content tailored to your specific audience, without the stress of setting everything up yourself. Built-in analytics also help you make technical decisions to deliver the optimal user experience.
Why not try Bitmovin for free and see what it can do for you?
We hope you found this guide useful! If you did, please don’t be afraid to share it on your social networks!

Low Latency Streaming: What is it and How can it be solved?


Latency is a major challenge for the online video industry. This article takes us through what latency is, why it’s important for streaming and how CMAF low latency streaming can help to solve the problems.

Live stream “latency” is the time delay between the transmission of actual live content from the source to when it is received and displayed by the playback device. Or to put it another way, the difference between the moment when the actual event is captured on camera or the live feed comes out of a playout server, and the time when the end user actually sees the content on their device’s screen.
Typical broadcast linear stream delay ranges from 3 to 5 seconds, whereas online streaming has historically been anywhere from 30 seconds to over 60 seconds, depending on the viewing device and the video workflow used.
The challenge for the online streaming industry is to reduce this latency to a range closer to linear broadcast signal latency (3-5 sec) or even lower, depending on the application's needs. Therefore, many video providers have taken steps to optimize their live streaming workflows by rolling out new streaming standards like the Common Media Application Format (CMAF) and making changes to encoding, CDN delivery, and playback technologies to close the latency gap and provide a near real-time streaming experience for end users. This reduced latency for online linear video streaming is commonly referred to as “Low Latency”.

[Figure: Streaming Latency Continuum]

Linear stream/signal latency represents a continuum, as indicated in the diagram above. The diagram illustrates the historic reality of online streaming protocols such as HLS and DASH exhibiting higher latency, and non-adaptive-bitrate protocols like RTP/RTSP and WebRTC exhibiting much lower, sub-second latency. The discussion here is based on the adaptive bitrate protocols HLS and MPEG-DASH.

Why is this important for me?

The main goal of Low Latency streaming is to keep playback as close as possible to real-time broadcasts so users can engage and interact with content as it's unfolding. Typical applications include sports, news, betting, and gaming. Another class of latency-sensitive applications includes feedback data as part of the interactive experience; an example is the ClassPass virtual fitness class, as announced by Bitmovin.
Other interactive applications include game shows and social engagement. In these use-cases, synchronizing latency across multiple devices becomes valuable for viewers to have a similar chance to answer questions, or provide other interactions.

What is CMAF?

Common Media Application Format (CMAF) was introduced in 2016 and was co-authored by Apple and Microsoft to create a standardized transport container for streaming VoD and linear media using the MPEG-DASH or HLS protocols.
The main goals were to:
1) Reduce overhead and encoding/delivery costs through standardized encryption methods
2) Simplify the complexities associated with video streaming workflows and integrations (e.g., DRM, advertising, closed captioning, caching)
3) Support a single format that can be used to stream to any online streaming device.
When we originally posted our thoughts on CMAF, adoption was still in its infancy. But in recent months we have seen increased adoption of CMAF across the video workflow chain and by device manufacturers. As end-user expectations of streaming linear content with latency equivalent to traditional broadcast have continued to rise, and rights to stream content in real time have become more and more commonplace, CMAF has stepped in as a viable solution.

What is CMAF Low Latency?

When live streaming, the media (video/audio) is sent in segments that are each a few seconds (2-6 sec) long. This inherently adds a few seconds of delay from transmission to playback, as the segments have to be encoded, delivered, downloaded, buffered, and then rendered by the player client, all of which is bounded at a minimum by the segment duration.

CMAF now comes with a low latency mode where each segment can be split up into smaller units, called “chunks”, where each chunk can be 500 milliseconds or shorter depending on encoder configuration. With low latency CMAF, or chunked CMAF, the player can request incomplete segments and render all available chunks instead of waiting for the full segment to become available, thereby cutting latency down significantly.
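
In practice, chunked delivery relies on HTTP chunked transfer encoding, so a player can consume a segment progressively while the encoder is still producing it. Here is a minimal C# sketch of such a progressive read; the segment URL is hypothetical, and a real player would append each chunk to its decoder pipeline rather than just counting bytes.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class ChunkedReadSketch
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // ResponseHeadersRead returns as soon as the headers arrive, so the body
        // can be consumed progressively instead of waiting for the full segment.
        using var response = await http.GetAsync(
            "http://example.com/live/segment-42.m4s",
            HttpCompletionOption.ResponseHeadersRead);

        using var stream = await response.Content.ReadAsStreamAsync();
        var buffer = new byte[16 * 1024];
        int read, total = 0;
        while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            total += read;
            // In a real player, each received chunk would be handed to the
            // decoder (e.g., appended to an MSE SourceBuffer) as soon as it arrives.
            Console.WriteLine($"Received {read} bytes ({total} bytes total)");
        }
    }
}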

[Figure: CMAF chunks for low latency]

As shown in the diagram above, a “chunk” is the smallest referenceable media unit, by definition containing a “moof” and an “mdat” atom. The mdat of the first chunk holds the IDR (Instantaneous Decoder Refresh) frame that is required to begin every “segment”. A “segment” is a collection of one or more “fragments”, and a “fragment” is a collection of one or more chunks. The “moof” box, as shown in the diagram, is required by the player for decoding and rendering individual chunks.
At the transmit end of the chain, encoders can output each chunk for delivery immediately after encoding it, and the player can reference and decode each one separately.

What are we doing to solve the latency problem?

The Bitmovin Player has supported CMAF playback for a while now. Recently, we also added support for CMAF low latency playback for HTML5 (web) and native apps (mobile) platforms. The Bitmovin Player can be configured to turn on low latency mode which then enables the player to allow chunk-based decoding and rendering without having to wait for the full segment to be downloaded.
The Bitmovin Player optimizes start-up logic, determines buffer sizes, and adjusts playback rate to achieve near to real live streaming latency. From our testing, this can go as low as 1.8 seconds while maintaining stream stability and good video quality.
CMAF low latency is compatible with the rest of the features that the Bitmovin Player already supports today (e.g., ads, DRM, analytics, closed captioning).

[Figure: Standard vs chunked segmented streams]

The diagram above contrasts the player's buffering and decoding behavior in the standard segment (standard latency) mode with the chunked segment mode that corresponds to low latency streaming.
It shows that with non-chunked segments, a segment size of 4xC (where C is the duration of the lowest-granularity unit, the chunk) and three-segment buffering, a player latency of 14xC is typically achieved.
In contrast, chunked CMAF segments achieve a latency of 2xC as opposed to 14xC, a 7x improvement. For example, with C = 500 ms this corresponds to roughly 1 second of player latency instead of 7 seconds.

Are there any trade-offs?

In short, yes. There are some considerations, and some tradeoffs when trying to achieve low latency while still providing a high-quality viewing experience.
Buffer Size: Ideally, we want to render frames as soon as the player receives them, which means maintaining a very small buffer. But this also introduces instability in the viewing experience, especially when the player encounters unexpected interruptions (like dropped frames or frame bursts) due to network or encoder issues. Without enough locally stored frames, the player stalls or freezes until the buffer refills with new frames; the player then has to re-sync its presentation timing, which leads to perceived distortions in the playback experience. Therefore, it's recommended to maintain at least a 1-second buffer, which allows the player to provide a smoother playback experience that can withstand some network disruptions.
DRM is another factor that can introduce additional delay in start-up time: the license delivery turnaround time will block content playback even when low latency is turned on. In this case, the player adjusts to the latest live frame upon successful license delivery, and the latency then remains consistent with the configured low latency value.

How can I monitor these tradeoffs?

For all of the above reasons, balancing a robust, scalable online streaming platform with minimal re-buffering and stream interruptions against the time-sensitive behavior of low latency CMAF streaming can be challenging. The solution is a holistic view of the streaming experience, provided by Bitmovin Analytics.
Bitmovin Analytics provides insights into session quality so customers can monitor the performance of low latency streaming sessions and make real-time decisions to adjust player and encoding configurations to improve the experience. Bitmovin offers all existing video quality metrics (e.g. Startup time, Buffer Rate) and a few additional metrics to specifically monitor low latency streaming at a content level, such as:

  • Target Latency
  • Observed Latency
  • Playback Rate
  • Dropped Frames
  • Bandwidth Used

Besides the player, what else causes latency?

Chunked CMAF streams and low latency-enabled players are key elements in reducing latency in online streaming. However, there are other components in the video delivery chain that introduce latency at each step that need to be considered for further optimization:

  • Encoder: The encoder needs to be able to ingest live streams as quickly as possible with the encoding configuration optimized to produce the right size of chunks and segments that can then be uploaded to the Origin Server for delivery.
  • First Mile Upload: The upload time depends on the connection type at the upload facility (wired, wireless) and affects overall latency.
  • CDN: The CDN technologies need to allow for chunk-based transfers and to adopt the right caching strategies to propagate chunks across the different delivery nodes in a time-sensitive fashion.
  • Last Mile: The end user's network conditions also influence overall latency, i.e., whether the user is on a wired, WiFi, or cellular connection. It also depends on how close the user is to the CDN edge.
  • Playback: As discussed earlier, the player needs to optimize start behavior and balance buffering and playback rate to enable quick download and rendering to always be as close as possible to live time.

These steps are shown below in the end-to-end video flow diagram.

[Figure: Chunked encoding flow]

With chunked segments, from our testing, we've seen end-to-end latency as low as 1.8 seconds. However, customers need to consider their entire workflow setup to ensure latency is optimized along the full chain, achieving the lowest latency possible with their specific workflow and network.

In conclusion …

As viewers migrate from a large-screen, appointment-based TV experience to a time-shifted, place-shifted experience with multi-device online streaming, content producers and rights holders have responded by making more premium content available online, along with brand-new classes of online media experiences involving interactivity and an emphasis on low latency delivery and playback.
The Bitmovin low latency solution shown here consists of the Bitmovin Player and Bitmovin Analytics products working together: balancing the needs of low latency live streaming on multiple devices while providing the level of insight needed to proactively determine the viewers' quality of experience, and to take action if undesired consequences appear as a result of low latency streaming.


Video Infrastructure as a Service – The Bitmovin API

[Figure: The Bitmovin API connects to all aspects of the streaming workflow]

The new Bitmovin API delivers a comprehensive video infrastructure solution by integrating all aspects of the adaptive streaming workflow into one easy-to-use interface: Encoding, Player, Analytics, Storage and CDN.

Video Infrastructure via an API

Product development at Bitmovin is driven by our customers' business needs. By working closely with our users, we have built a deep understanding of how video infrastructure needs fit into a wide variety of different business models. This level of understanding helps us to prioritize features on our product roadmap and keep ourselves a step ahead of the game. Over the last few years, this feedback process has allowed us to create a vision for a complete video infrastructure, and with the release of the Bitmovin API this vision has become a reality.
[Figure: API workflow of the Bitmovin video infrastructure]
The typical workflow for media companies to prepare video usually has five major components: Encoding, Storage, CDN, Player and Analytics, and the Bitmovin API now provides all of these components through one API. A video source file cannot simply be moved online, as it usually has a very high bitrate and is probably encoded with a compression standard that is not supported on all major devices. The internet, with its “best effort” nature, could transport such files, but it could not guarantee that your users will not see frequent buffering, long startup delays and even playback failures with a video that is not prepared for internet delivery.

This is where the Bitmovin API hooks into your workflow: you can use our encoding API to prepare your files for internet delivery and store the encoded video on our storage. By doing so, your files are immediately available through a CDN, and you can get the CDN paths of your files from our API. By then connecting our player with the CDN path of the video, you have created a Netflix-like streaming system. To see what your users are experiencing, you can connect our analytics to the player and monitor closely what's happening with your streams.

It's important to note that while we provide all of these components through one API and they are nicely integrated, you can also use each component individually, e.g., just the encoding or just the player, as everything is available through the API.

Flexibility

Flexibility was a very high priority during the design of the new API. We have seen through experience that our customers have a wide variety of different use cases, each requiring unique configuration. With the old REST API this was often difficult to customize, because a lot of things were too rigidly defined. This prevented us from leveraging the full potential of our encoder, which is a very flexible, multi-cloud enabled service. The new API resolves this. On the one hand it has increased the complexity of the REST interface, but by building an abstraction layer into our API clients to provide simple workflow examples, we have maintained a high level of usability while giving our solution architects tools to easily tailor the system for individual customers through these API clients.

Analytics

For quite some time we have had our own analytics in place to monitor our website and to identify potential errors. This system has helped us to catch errors early, and in most cases before our customers. As many of you probably already know, our player is integrated with a lot of analytics solutions thanks to its very flexible player interface. We have integrated our own analytics through the same interface, which means it is not bound to a specific player and can be used with any player.
The analytics is real-time and, in line with our philosophy, available through the API. This means you can easily integrate it into your backend and create metrics and graphs as you see fit. We will provide examples and a reporting dashboard as an open source component, where you will simply need to insert your API key to get started.
You can analyze the adaptation behavior of your video player and set up experiments to compare different adaptation logics. We also use this to debug sessions that had frequent buffering or other problems, and it really helps to get to the root of the actual problem.

Storage and CDN

We have also integrated many CDNs (Akamai, Level3, Fastly, Highwinds, CloudFront, Limelight, CloudFlare, etc.) and storage services (AWS S3, GCS, Azure Blob, FTP, SFTP, Aspera, etc.) over the last few years, but we have seen that many customers were struggling with the integration phase. Based on customer requests, we have tightened up these integrations so you can use these storage options directly through our API. Of course, you still have the option to use your own storage, but for users looking to bootstrap quickly it's now much easier.

Design & Architecture

The Bitmovin API is based on microservices with high availability in multiple clouds, e.g., AWS, Google, etc. As a company, we have gained considerable knowledge in this area, as many services in the old encoding API are also based on microservices. The difference here is that this time we have built the entire system as a microservice architecture with a single API gateway that is hosted in a high availability configuration.
Each microservice is encapsulated in a Docker container and we maintain 4 separate branches of each service; internal, canary, production and enterprise. We can route every request based on certain characteristics into one of these 4 branches. A typical roll-out of a feature looks as follows: We test it on our internal branch, and if all system tests run through we deploy a new version into our canary branch. A certain percentage of our production traffic will then be routed into this canary branch (this could be configured on a per service basis). If everything works well, after one week, we create a release version of this service and deploy it into production. This version will be explicitly tested against our enterprise customer use cases and if all tests run through, we deploy it as a canary for our enterprise branch which means we route a certain percentage of our enterprise requests onto this branch. If everything works well, after a week, we deploy the service into production for the enterprise branch.
[Figure: Canary release branches at Bitmovin]
A typical deployment workflow of a feature is as follows:

  1. A developer pushes code to a microservice git repository hosted on GitHub.
  2. Internally, we work with git flow and trigger builds when somebody opens a pull request.
  3. This pull request will be reviewed by at least another developer and needs to pass all of our unit and integration tests before we create a first SNAPSHOT for our system tests.
  4. The builds will be automatically triggered and executed on CodeShip where all unit and integration tests will be performed.
  5. If the feature passes all tests, an image will be created and pushed to DockerHub.

Our management layer currently consists of about 1000 containers (this includes all 4 branches), all hosted in a High Availability (HA) configuration, which means that each service has at least two containers on different hosts and in different availability zones or regions. For the orchestration of these containers we use Rancher, which provides us with an overlay network and service discovery for all of these containers. Once the image is available on DockerHub, the developer can test the feature on our internal branch. This means he/she will create a new service and connect it to the internal branch in Rancher. Once online, we can execute our system tests on our internal branch to see if everything is working as expected.
Encoder upgrades are now also more transparent for our customers. For each encoding you can specify the encoder version that you want to use, or use one of our tags like STABLE or BETA that always contain the latest stable version or the latest beta version. This allows you to decide when you want to upgrade to a new encoder version and provides you full control and transparency into our development.

The API Specification

The REST API specification describes all resources of our new API and should only be used directly if you are developing new API clients. These clients are advanced wrappers on top of the REST API itself and have been made to help you integrate the service into your apps. Everything that can be done using the REST API can be done using those clients, and using them is the recommended way to get the best from our service. They are all open source and available on GitHub. We are currently in the process of writing API clients for the new API, and some are already available.

During the next few months we will add more API clients prioritized by customer requests and we will constantly update these clients when we add features to our API. On our list are currently Go, Java, C#, Node, Ruby, JavaScript, CMD Client and Scala.
Every response of our API is contained in an envelope. This means that each response has a predictable set of attributes with which you can expect to interact. The envelope format is described in the API specification in the section Response Format, and all error codes of the API are described in the section Error Codes. We will provide further details for these errors on dedicated support pages, but you should also get resolution suggestions directly from the API together with the error code.
An example of a successful response from the API looks as follows:

{
  "requestId": "6d84e126-d10c-4e52-bbfb-bd4c92bc8333",
  "status": "SUCCESS",
  "data": {
    "result": {
      "name": "Production-ID-678",
      "description": "Project ID: 567",
      "id": "cb90b80c-8867-4e3b-8479-174aa2843f62",
      "bucketName": "video-bucket",
      "cloudRegion": "EU_WEST_1",
      "createdAt": "2016-06-25T20:09:23.69Z",
      "modifiedAt": "2016-06-25T20:09:23.69Z"
    },
    "messages": [
      {
        "date": "2016-06-25T20:09:23.69Z",
        "id": "cb90b80c-8867-4e3b-8479-174aa2843f62",
        "type": "INFO",
        "text": "Input created successfully"
      }
    ]
  }
}

Each response contains a unique requestId that identifies your request. It can be used to trace the path of your request and for debugging in error cases. Internally, this is a correlation ID that is forwarded by every microservice and is also present in our logs, which helps us to identify problems quickly and accurately. The status field can only have two values, SUCCESS and ERROR. The data field then contains the actual response from the service.
In this case, the user created an input on AWS S3. The result field contains the service-specific result, along with additional messages. Messages are used in both successful and error responses to communicate details about the workflow; each Message object contains the information of a single message. The type of a message can be INFO, ERROR, WARNING, DEBUG or TRACE. Typically you will see INFO messages and, occasionally, some WARNING and ERROR messages. DEBUG and TRACE will only be shown if enabled for your account, or on beta features.
Error responses have the same base structure as successful responses, but the status indicates that the response is an error. In that case the requestId is especially important and can be handed over directly to our support team, who can then provide additional details.

{
  "requestId": "6d84e126-d10c-4e52-bbfb-bd4c92bc8333",
  "status": "ERROR",
  "data": {
    "code": 1000,
    "message": "One or more required fields are not valid or not present",
    "developerMessage": "Exception while parsing input fields",
    "links": [
      {
        "href": "https://bitmovin.com/docs",
        "title": "Tutorial: DASH and HLS Adaptive Streaming with AWS S3 and CloudFront"
      }
    ],
    "details": [
      {
        "date": "2016-06-25T20:09:23.69Z",
        "id": "cb90b80c-8867-4e3b-8479-174aa2843f62",
        "type": "ERROR",
        "text": "Exception while parsing field 'accessKey': field is empty",
        "field": "accessKey",
        "links": [
          {
            "href": "https://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html",
            "title": "Managing Access Keys for your AWS Account"
          }
        ]
      }
    ]
  }
}

The data field of the error response contains a specific error code that you can look up to get further information. Beside that, we include a message that can be used in your app or presented to a user to indicate the error. The developerMessage targets the developer of the application and is something you will typically log on your side; it provides more details and should help to fix the error. Additionally, we include links that help you to resolve the error. In this example, the access key for the AWS S3 input was missing, and the link targets a tutorial showing how you typically set this up. The service can also include messages with further details; in this case, a parsing exception occurred on the accessKey field. The API responds with that error and also includes a link explaining how to get your access and secret keys from AWS.
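
Given this envelope structure, client code can handle responses generically. Below is a minimal C# sketch using the Json.NET library; the envelope classes are simplified assumptions modelled on the JSON examples above, not the actual classes of our API clients.

using System;
using Newtonsoft.Json;

// Simplified envelope types modeled on the JSON examples above (assumptions).
class ApiEnvelope<T>
{
    [JsonProperty("requestId")] public string RequestId { get; set; }
    [JsonProperty("status")]    public string Status { get; set; }
    [JsonProperty("data")]      public T Data { get; set; }
}

class ErrorData
{
    [JsonProperty("code")]             public int Code { get; set; }
    [JsonProperty("message")]          public string Message { get; set; }
    [JsonProperty("developerMessage")] public string DeveloperMessage { get; set; }
}

class EnvelopeSketch
{
    static void Main()
    {
        // A trimmed-down error response body, as in the example above.
        string json = "{\"requestId\":\"6d84e126\",\"status\":\"ERROR\"," +
                      "\"data\":{\"code\":1000,\"message\":\"One or more required fields are not valid or not present\"}}";
        var envelope = JsonConvert.DeserializeObject<ApiEnvelope<ErrorData>>(json);

        if (envelope.Status == "ERROR")
            // Log the requestId: it is the correlation ID support uses for tracing.
            Console.WriteLine($"Request {envelope.RequestId} failed " +
                              $"with code {envelope.Data.Code}: {envelope.Data.Message}");
    }
}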

Feature Overview

The Bitmovin API enables a lot of new encoding use cases for our customers, and we want to highlight some here:

  • Hollywood-/Netflix-grade DRM: Widevine, PlayReady, Marlin and PrimeTime DRM with MPEG-CENC
  • Fairplay DRM with HLS
  • DASH Clear Key encryption
  • HLS Sample AES and AES-128 encryption
  • Advanced Encoding Options for H264/AVC
  • Advanced Encoding Options for H265/HEVC
  • 4K and 60FPS Livestreaming
  • Livestreaming with DRM
  • Livestreaming with direct output to S3 and GCS
  • Multi-Cloud: Configure cloud and region for every encoding (AWS, Google), for both, Live and OnDemand
  • Add filters to VoD and live encoding, e.g., watermark
  • Multiple outputs per encoding
  • Direct MP4, MPEG2-TS, etc. output
  • Multi tenant support
  • Closed Captions/Subtitles: Add multiple sidecar files to encoding, e.g., WebVTT, SRT, TTML (SMPTE-TT, EBU-TT or DFXP), etc.
  • Support for 608/708 closed captions
  • Extract and embed closed captions and subtitles
  • Add custom metadata to resources
  • Multiple different input files for one encoding (multi-audio or multi-view use cases)
  • Allow separate audio and video input files for encodings
  • Direct pass through for AWS S3 and GCS without storing files
  • Optimized turnaround times for short clips
  • Fragmented MP4 with HLS (MPEG-CENC)
  • Create thumbnails and sprites directly with the encoding

Completely new products like Analytics, Storage and CDN are now also part of the API. Our Analytics has its roots in our development process, where it is used internally for health monitoring of our releases and for error reporting. You never know if everything is still working as expected after a Chrome or Firefox update, so we built the original Analytics system to monitor our own website, which helps us to identify problems early on, even before our customers see them. We provide a lot of statistics for developers, which help you to debug specific events and work out what is happening, for example when a video is not playing as it should on a specific Chrome version, or when you are seeing frequent rebuffering in the US with a specific ISP.
The same applies to startup delay, errors that occur, or quality that may be worse under certain circumstances. You can identify problems earlier by comparing your current performance in terms of buffering, startup delay, quality, etc. to the day before or to the last week or month. These metrics can also be compared to our user base, so you can see where you stand relative to the industry average, e.g., whether your buffering average is higher than the average buffering across our whole user base.

What will happen next?

We will continue to extend this API with new features and use cases, and we will also implement more API clients in different languages. We want to provide the complete video infrastructure as a service for our customers, making it as easy as possible for you to set up flexible video workflows. Besides that, our development roadmap is also highly influenced by our customers, so feel free to reach out to us if a critical feature for your use case is missing – we will make it possible 😉


Bitmovin C# API Client for .NET – The New Kid on the Block


In November 2015, Microsoft open-sourced the .NET framework to make it available cross-platform on Windows, Linux and Mac OS, which perhaps makes this a very interesting API client for your next project.

We already offer a great range of API clients for our encoding service for adaptive streaming video, including node.js, ruby, javascript, php, python and java, and we are constantly adding new ones based on customer requests. As many of our customers use the .NET framework in their backend, this API client was the next logical step. The fact that Microsoft open-sourced the .NET framework last November (2015) to make it available cross-platform makes this client useful to a much broader range of developers. C# video streaming just got easier for everyone!

Bitmovin C# API Client for .NET

The Bitmovin C# API Client for .NET can be used on Windows with the .NET framework, on Linux with the Mono framework, and on OS X with the Mono framework.
You can easily install the API client through NuGet. NuGet is an extension for Visual Studio which allows you to search, install, uninstall and update external packages in your projects and solutions. It is pre-installed on the more recent versions of Visual Studio, but you can also add NuGet directly by downloading it from nuget.org.
Afterwards you can install the API client with the Package Manager Console:

Install-Package bitcodin-dotnet -Version 1.0.0

Detailed information on how to find, install, remove, and update NuGet packages using the Manage NuGet Packages dialog box can be found at docs.nuget.org.

Get Started

Before you can start, you need to get your Bitmovin API key from the web portal.
Afterwards it is very simple to get up and running. Simply instantiate the Bitmovin API as shown in the following:

using com.bitmovin.bitcodin.Api;
public static class BitcodinApiTest
{
  public static void Run()
  {
    const string apiKey = "YOUR_API_KEY";
    var bitApi = new BitcodinApi(apiKey);
  }
}

You can also find a lot of examples that show how you can use our encoding service in the example folder packaged with the client. It’s always best to start with an example as a basis for your implementation so you can start with a working model and just need to modify what is specific to your use case.

Encoding Example

The following example shows how you can encode a video from your AWS S3 bucket into MPEG-DASH and HLS. The first step is to create an input in our encoding service. An input is a video that you want to encode to MPEG-DASH and HLS; in this example the input video comes from AWS S3. You just have to enter your AccessKey, SecretKey, Bucket name, Region and ObjectKey (the path to the video file in the bucket) in the S3InputConfig shown below. Besides that, you need to set the correct permissions on your bucket so that we are allowed to access your input video file. The following permissions need to be set on the input bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::YOUR_BUCKET_NAME"
            ]
        }
    ]
}

If everything is configured correctly, the API client will be able to create your input and will write the following to the console: “Created Input: your-video.mp4”.

public static class CreateS3Job
{
    public static void Run()
    {
      /* Create BitcodinApi */
      const string apiKey = "YOUR_API_KEY";
      var bitApi = new BitcodinApi(apiKey);
      /* Create S3 Input */
      var s3InputConfig = new S3InputConfig
      {
        AccessKey = "ACCESSKEY",
        SecretKey = "SECRETKEY",
        Bucket = "BUCKET",
        Region = "REGION",
        ObjectKey = "path/to/file.ext"
      };
      Input input;
      try
      {
        input = bitApi.CreateInput(s3InputConfig);
      }
      catch (BitcodinApiException e)
      {
        Console.WriteLine("Could not create input: " + e);
        return;
      }
      Console.WriteLine("Created Input: " + input.Filename);

The next step is to create your encoding profile. An encoding profile describes the multiple resolutions and bitrates that you want to create from your source file for adaptive streaming. In this example we create 3 resolutions and bitrates from the input video, i.e., 1920×1080@4.8Mbps, 1280×720@2.4Mbps and 854×480@1.2Mbps. We are using the H.264 video codec with the MAIN profile and our PREMIUM encoder, which is the default encoder. If a video file is not working with the PREMIUM encoder, you should switch to the STANDARD encoder.

      /* Create EncodingProfile */
      var videoConfig1 = new VideoStreamConfig
      {
        Bitrate = 4800000,
        Width = 1920,
        Height = 1080,
        Profile = Profile.MAIN,
        Preset = Preset.PREMIUM
      };
      var videoConfig2 = new VideoStreamConfig
      {
        Bitrate = 2400000,
        Width = 1280,
        Height = 720,
        Profile = Profile.MAIN,
        Preset = Preset.PREMIUM
      };
      var videoConfig3 = new VideoStreamConfig
      {
        Bitrate = 1200000,
        Width = 854,
        Height = 480,
        Profile = Profile.MAIN,
        Preset = Preset.PREMIUM
      };
      var encodingProfileConfig = new EncodingProfileConfig { Name = "DotNetTestProfile" };
      encodingProfileConfig.VideoStreamConfigs.Add(videoConfig1);
      encodingProfileConfig.VideoStreamConfigs.Add(videoConfig2);
      encodingProfileConfig.VideoStreamConfigs.Add(videoConfig3);

We also add one audio quality to the output encoding, with a bitrate of 128 kbps. It is also possible to add multiple audio qualities with different bitrates for adaptive streaming, but this is typically not needed, as audio at 128 kbps already sounds quite good. Lower bitrates make sense for some use cases, but usually one quality is enough: the video bitrates are much higher anyway (a 128 kbps audio track next to a 4.8 Mbps video track is under 3% of the total), so the audio bitrate does not make much of a difference in the whole system.

      var audioStreamConfig = new AudioStreamConfig
      {
        DefaultStreamId = 0,
        Bitrate = 128000
      };
      encodingProfileConfig.AudioStreamConfigs.Add(audioStreamConfig);
      EncodingProfile encodingProfile;
      try
      {
        encodingProfile = bitApi.CreateEncodingProfile(encodingProfileConfig);
      }
      catch (BitcodinApiException e)
      {
        Console.WriteLine("Could not create encoding profile: " + e);
        return;
      }

The last step is to submit this encoding job to the system and wait until the encoding is finished. Afterwards, you can transfer your encoded assets back to your AWS S3 storage. This can be done through our API or through the web interface, but make sure that you set the right permissions on your bucket so that we are allowed to write files into it.

      /* Create Job */
      var jobConfig = new JobConfig
      {
        EncodingProfileId = encodingProfile.EncodingProfileId,
        InputId = input.InputId
      };
      jobConfig.ManifestTypes.Add(ManifestType.MPEG_DASH_MPD);
      jobConfig.ManifestTypes.Add(ManifestType.HLS_M3U8);
      Job job;
      try
      {
        job = bitApi.CreateJob(jobConfig);
      }
      catch (BitcodinApiException e)
      {
        Console.WriteLine("Could not create job: " + e);
        return;
      }
      JobDetails jobDetails;
      do
      {
        try
        {
          jobDetails = bitApi.GetJobDetails(job.JobId);
          Console.WriteLine("Status: " + jobDetails.JobStatus +
                            " - Enqueued Duration: " + jobDetails.EnqueueDuration + "s" +
                            " - Realtime Factor: " + jobDetails.RealtimeFactor +
                            " - Encoded Duration: " + jobDetails.EncodedDuration + "s" +
                            " - Output: " + jobDetails.BytesWritten / (double)1024 / 1024 + "MB");
        }
        catch (BitcodinApiException)
        {
          Console.WriteLine("Could not get any job details");
          return;
        }
        if (jobDetails.JobStatus == JobStatus.ERROR)
        {
          Console.WriteLine("Error during transcoding");
          return;
        }
        Thread.Sleep(2000);
      } while (jobDetails.JobStatus != JobStatus.FINISHED);
      Console.WriteLine("Job with ID " + job.JobId + " finished successfully!");
    }
}

After everything is finished, you can follow our tutorial on how to set up adaptive streaming with AWS S3 and CloudFront. It shows you how to set up the CDN and deploy our player so that you can efficiently stream your assets to your users in the best possible quality, creating a reliable C# video streaming workflow.

What's Next?

We are constantly adding new API clients and improving/extending the ones that are available. If you are missing an important language, please reach out to us so that we can put it on our list. Besides that, we are working hard on our unified Bitmovin API that incorporates the encoding and our player, as well as a lot of new features and functionalities, so stay tuned.

The post Bitmovin C# API Client for .NET – The New Kid on the Block appeared first on Bitmovin.

Multi-Cloud Video Encoding
https://bitmovin.com/blog/multi-cloud-video-encoding-service/ – Tue, 12 Jul 2016

Bitmovin was built from the ground up with Multi-Cloud in mind. It can run on virtually any cloud!

From the first line of code, the Bitmovin Cloud Encoding Service was designed to be a true Multi-Cloud service, which means it can easily move from one cloud to another, providing you with a very high level of control and flexibility! This helps you avoid vendor lock-in, offers you more flexibility and makes your application resilient to cloud outages and maintenance. (Download the Encoding Datasheet)

What is Multi-Cloud?

Multi-Cloud sounds like another buzzword, and on some levels it is, but let's take a closer look. Multi-Cloud is simply a deployment strategy that enables your services and applications to run concurrently on different cloud providers such as AWS, Google or Azure. This is currently a big trend in the IT industry: in a recent survey from Dimensional Research of more than 650 IT decision makers, the vast majority (77%) responded that they intended to implement Multi-Cloud architectures.
There are various reasons for the popularity of Multi-Cloud, but one of the most obvious and important ones is that it prevents vendor lock-in. The ability to easily switch between clouds gives you a much better starting point when it comes to negotiating terms with your cloud provider, and avoids situations where you might get locked in with a single provider. Besides that, it allows your IT infrastructure to be more flexible, scalable and resilient to cloud outages and maintenance.

3 Reasons Why Multi-Cloud Makes Sense

1. Advanced Cloud Capabilities

Nobody is perfect and it's very unlikely that any one provider will ever be the best across all aspects of cloud infrastructure. Each cloud provider has advantages and disadvantages, and they offer different services and features. For example, Google is particularly strong in Big Data: services like Google BigQuery work really well without specific setup, compared to AWS Redshift where you need to carefully configure resources like CPU and RAM. On the other hand, AWS offers many more instance types in EC2 for specific workloads compared to Google Compute.

If your architecture is designed for Multi-Cloud usage you could use the specific features, services, locations and advantages of the different cloud providers

Taking a closer look at the datacenter locations of the different cloud providers also shows the differences, as the overview below illustrates. While Google has fewer locations, it still has compelling services, but you will probably also need AWS and Azure if you want to deploy services and applications near to your customers. Even if datacenters from different cloud providers are in similar geographic locations, your users will experience different performance, as all datacenters are connected differently. As a global corporation you will need the performance and resources as near as possible to your customers, and you will want to route different customers to different clouds and datacenters based on the performance that your users experience.


Cloud server providers

If your architecture is designed for Multi-Cloud usage you could use the specific features, services, locations and advantages of the different cloud providers. You could build your applications and services with the tools and services that fit best in the regions that are close to your customers and leverage unique cloud specific services when needed.

2. Redundancy and Disaster Recovery

Outages are a real problem, and even AWS, Google and Azure have them. For example, we have just seen an outage of the North Virginia region of AWS on June 9th 2016, and one major outage in September 2015 took several sites down, including some big names such as AirBnB, Netflix, Tinder, Reddit and IMDb. Of course this doesn't happen every day (hopefully), but when it happens it's usually a disaster for your IT department and a major problem for your business overall.


Server downtime

With a Multi-Cloud infrastructure this sort of downtime can be avoided by simply switching your load to another cloud without any impact on your users.

By switching load away from a failing region or service in a particular cloud, you can seamlessly avoid any downtime. The result is obviously a much better experience for your users and you can be sure that they certainly don’t care if your application or service is running on AWS, Google or Azure. They only care that your service is available and can be used when they need it.

3. Vendor Lock-In / Autonomy

A Multi-Cloud architecture will reduce your dependence on a single cloud provider and gives you more flexibility. Not being locked in with one cloud provider will help you to negotiate more favorable terms for service-level agreements, discounts and pricing. If AWS increases its pricing for a particular service that you are using heavily, you will have the option to switch to Google, or to negotiate on pricing with much more negotiation power.
Besides that, it can always happen that cloud providers remove or disable services and/or change workflows that you are relying on; in this case, too, you will have the option to pick the next best offer at another cloud.
Multi-cloud video encoding infrastructure

Bitmovin Multi-Cloud Video Encoding Service

Bitmovin was built from the ground up with Multi-Cloud in mind. Our services can run on virtually any cloud, making Bitmovin somewhat unique in being a true Multi-Cloud encoding service. Nevertheless, the majority of our services run on AWS and Google, simply because our customers use these clouds heavily. Having said that, we also have deployments on Azure and could easily deploy our services on any other cloud provider that offers Linux instances (which is pretty much everyone).

On June 9th, as AWS experienced a major outage, Bitmovin customers were seamlessly shifted across to the Google Cloud, avoiding any downtime.

Currently our enterprise customers can choose where their services should run, and we also fail over to other clouds if needed, without any service interruption. On June 9th 2016 this is exactly what happened when the AWS North Virginia region had a major outage: we seamlessly shifted the load of our customers to Google Cloud during that time, and back to the AWS North Virginia region once the outage had been resolved by AWS.
Needless to say, we also tightly integrate different cloud providers such as AWS, Google and Azure as inputs (where your video files come from) and outputs (where we transfer the encoded output to) for our Multi-Cloud Encoding Service. You can use AWS S3 with or without CloudFront in the same way as Google Cloud Storage with or without Fastly, and you can easily switch between these storages if different projects or applications have different requirements.
In our new unified Bitmovin API, which incorporates our encoding, player and analytics, our customers will be able to choose the cloud and region for every encoding through the API, which gives you even more flexibility. This means you can build your own failover cases based on your needs and design specific disaster recovery plans.

The post Multi-Cloud Video Encoding appeared first on Bitmovin.

What is DRM and How Does it Work?
https://bitmovin.com/blog/drm-meaning-explained/ – Fri, 29 Apr 2016

Digital Rights Management (DRM) systems provide you with the ability to control how people can consume your content. Usually content owners and producers, like all the major Hollywood studios and TV stations, require content distributors to use specific DRM systems to protect each piece of content. Depending on the copyright requirements, Hollywood-grade DRM protection is not always needed; sometimes it's enough to provide basic protection through token-based secure authentication or simple AES encryption of the video, without sophisticated license exchange and policy management.

“Digital Rights Management (DRM) systems provide you the ability to control how people can consume your content”

How Does it Work?

A DRM setup needs specific encoding, packaging, playback and a license server. In the following sections we will describe each of these components in more detail.
How does DRM work?
Bitmovin can provide the encoding, packaging and player services as an out-of-the-box solution.
License servers are offered by companies such as Irdeto, EZDRM, Expressplay and Axinom, who provide Multi-DRM license server setups. It's also possible to build your own license server and negotiate terms directly with Google (Widevine), Microsoft (PlayReady), Adobe (PrimeTime) or Apple (Fairplay), but this usually takes longer.

Encoding & Packaging

From an encoding and packaging point of view, it does not make much difference whether the video is 'just' AES encrypted or Hollywood-grade DRM encrypted because, for the encryption itself, AES is used in both cases. The major difference is that for Hollywood-grade DRMs, further metadata needs to be added in the packaging step. Hollywood-grade DRMs such as PlayReady, Widevine, PrimeTime and Fairplay don't differ on the encryption side; they differ in the configuration features they provide, such as offline playback, fine-grained policies (e.g., allow only SD playback, rights visibility for users, APIs, different payment modes such as subscription, purchase, rental, gifting, etc.) and the platforms they support (e.g., Chrome, Firefox, IE, Safari, Android, iOS, etc.).

Multi-DRM with MPEG-CENC

Typically, each device supports just one DRM. If you want to achieve maximum device reach, it's impossible to use just one DRM – you need to use multiple DRMs in parallel. The MPEG Common Encryption (MPEG-CENC) standard enables this in the most efficient way, as it allows keys from different DRMs to be associated with the same video. This means that your video can be encoded and encrypted once with the same key. Metadata for the different DRMs will be added in the packaging step. The details of the license acquisition, license mappings, etc. are left up to the individual DRM system. The player decides, based on platform support, which specific DRM will be used.
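To make this concrete, here is a hypothetical MPD excerpt (not taken from a real stream; the key ID is a placeholder) showing how a CENC-encrypted adaptation set can signal several DRMs side by side. The Widevine and PlayReady UUIDs below are the commonly registered system IDs; the DRM-specific license metadata (e.g., pssh payloads) is omitted:

<AdaptationSet mimeType="video/mp4">
  <!-- Generic CENC signaling with the default key ID used for encryption -->
  <ContentProtection schemeIdUri="urn:mpeg:dash:mp4protection:2011" value="cenc"
                     cenc:default_KID="34e5db32-8625-47cd-ba06-68fca0655a72"/>
  <!-- Widevine system ID -->
  <ContentProtection schemeIdUri="urn:uuid:edef8ba9-79d6-4ace-a3c8-27dcd51d21ed"/>
  <!-- PlayReady system ID -->
  <ContentProtection schemeIdUri="urn:uuid:9a04f079-9840-4286-ab92-e65be0885f95"/>
</AdaptationSet>

The player walks through these ContentProtection elements and picks the first DRM it can support natively on the current platform.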

“If you want to achieve maximum device reach, it’s impossible to use just one DRM, you need to use multiple DRMs in parallel”

Traditional Multi-DRM setups need to encrypt and package the content for each DRM differently. This increases the storage footprint of the content, as each video needs to be encrypted and packaged with every DRM system and stored separately. Each video also needs to be encoded into multiple resolutions and bitrates to serve different devices, and then each encoding needs to be encrypted and packaged with all the different DRMs. This would not only increase the storage footprint tremendously, it would also increase the management effort, because somebody needs to keep track of all these different versions. Besides that, it reduces the efficiency of your CDN, as so many different versions of the same content are distributed.

Playback

On the player side it's possible to utilize the HTML5 Encrypted Media Extensions (EME) to enable DRM playback without plugins. If the DRM is not supported through the EME, you could fall back to Flash and Adobe Access, if supported by your player vendor. If the content is MPEG-CENC Multi-DRM encrypted, the player can automatically choose the DRM that is natively supported on the given platform and play back the content in HTML5 without plugins. The authentication and the license acquisition will be handled by the player through the EME, with the metadata that is provided with the content.

“On the player side it’s possible to utilize the HTML5 Encrypted Media Extensions (EME) to enable DRM playback without plugins”

Licensing Server

The licensing server is the management backend of your DRM setup. It allows you to create, modify and revoke licenses for your content and users. Licensing servers and DRMs differ in their features, such as offline playback, fine-grained policies, rights visibility for users, APIs, different payment modes (subscription, purchase, rental and gifting), etc. License servers are provided by several companies such as Irdeto, EZDRM, Expressplay, Axinom, etc. It's also possible to create your own licensing backend if you have a contract directly with Google (Widevine), Microsoft (PlayReady), Adobe (PrimeTime) or Apple (Fairplay) and you implement the specification. As long as your licensing server follows the specifications, it can be integrated with the other parts of the DRM chain, e.g., encoding, packaging and playback.

Hollywood & UltraViolet

When implementing a DRM strategy you should check that the DRM is accepted by the content owner. This means that if you distribute Hollywood content, you need to implement a DRM that is accepted by the Hollywood studios. Even if you don't deliver Hollywood content, it's good to check what is accepted by Hollywood – you never know, you may well deliver such content in the future. Replacing an already deployed DRM solution is hard, and Hollywood has already done the due diligence on the DRMs for you, so it's worth checking these recommended DRMs.

“When implementing a DRM strategy you should check that the DRM is accepted by the content owner”

The Digital Entertainment Content Ecosystem (DECE) is a consortium of 85 companies (e.g., studios, manufacturers, etc.) which created the UltraViolet standard, ensuring that after you purchase content you are able to watch it on a broad range of devices. DRM is a major part of UltraViolet and therefore six DRM technologies have been approved:

  • Widevine
  • PlayReady
  • PrimeTime
  • Marlin
  • OMA
  • DivX DRM

Apple Fairplay is not part of this list, as Apple is not a member of the DECE and Fairplay has only just entered the market.

Basic Encryption

A Hollywood-grade DRM is not always needed; sometimes it's enough to just add another layer of security through AES encryption. Apple HTTP Live Streaming (HLS) and MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) both support this use case.

HLS AES Encryption

Apple HLS supports two encryption methods:

  • AES-128
  • SAMPLE-AES

AES-128 encrypts the whole segment with the Advanced Encryption Standard (AES) using a 128-bit key, Cipher Block Chaining (CBC) and PKCS7 padding. The CBC is restarted with each segment, using the Initialization Vector (IV) provided.

“A Hollywood grade DRM is not always needed, sometimes it’s enough to just add another layer of security through AES encryption”

SAMPLE-AES encrypts each individual media sample (e.g., video, audio, etc.) by itself with AES. The specific encryption and packaging depends on the media format, e.g., H.264, AAC, etc. SAMPLE-AES allows fine-grained encryption modes, e.g., encrypting only I-frames, encrypting only 1 out of 10 samples, etc. This can decrease the complexity of the decryption process, which has several advantages: fewer CPU cycles are needed, mobile devices consume less power, higher resolutions can be decrypted effectively, and so on.
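In the playlist, both methods are signaled through the EXT-X-KEY tag; the player fetches the key from the given URI and decrypts the segments. A minimal hypothetical playlist excerpt (the key URI and IV are placeholders) could look like this:

#EXTM3U
#EXT-X-VERSION:5
#EXT-X-TARGETDURATION:10
#EXT-X-KEY:METHOD=AES-128,URI="https://your.server/keys/key1.bin",IV=0x0123456789ABCDEF0123456789ABCDEF
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts

For SAMPLE-AES, only the METHOD attribute changes (METHOD=SAMPLE-AES); the segments then carry individually encrypted samples instead of being encrypted as a whole.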

DASH Clear Key Encryption

Clear Key encryption is an interface supported by the EME and can be used to deliver MPEG-DASH content with Clear Key protection. The interface provides the basic functionality that the user can supply a key that will be used for the decryption of the segments. MPEG-DASH signals the key in the Media Presentation Description (MPD), which is the manifest of MPEG-DASH. All the relevant information that is needed for decryption is included in the MPD.
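As a sketch, Clear Key can be signaled in the MPD alongside the generic CENC descriptor, using the Clear Key system UUID registered with DASH-IF (the default_KID below is a placeholder, and the exact value attribute may vary by implementation):

<!-- Generic CENC signaling -->
<ContentProtection schemeIdUri="urn:mpeg:dash:mp4protection:2011" value="cenc"
                   cenc:default_KID="10000000-1000-1000-1000-100000000001"/>
<!-- Clear Key system UUID -->
<ContentProtection schemeIdUri="urn:uuid:e2719d58-a985-b3c9-781a-b030af78d30e" value="ClearKey1.0"/>

The browser's Clear Key implementation (org.w3.clearkey in EME terms) then receives the key from the application and decrypts the segments without a commercial DRM license server.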

DRM Systems

If DRM is a requirement for your project you should take a look at the following major DRM systems. Microsoft, Google, Adobe and Apple provide high profile DRM systems with various features. In the end you will probably end up with a Multi-DRM setup where you utilize several or all of these DRMs in parallel to reach all the major devices.

“In the end you will probably end up with a Multi-DRM setup where you utilize several or all of these DRMs in parallel to reach all the major devices”


The post What is DRM and How Does it Work? appeared first on Bitmovin.

Adaptive Bitrate Streaming – Bitmovin, Fastly and Google Cloud Storage
https://bitmovin.com/blog/adaptive-bitrate-streaming-with-bitmovin-fastly-and-google/ – Mon, 07 Mar 2016

In combination with Fastly and Google Cloud Storage we are now able to deliver an end to end experience, from encoding to streaming to playback, that can easily compete with YouTube and Netflix

Bitmovin enables high quality video streaming on the web with the smartest video encoding service and HTML5 adaptive player on the market. We utilize cutting edge standards such as MPEG-DASH and HLS in combination with our adaptive streaming algorithms and encoding solutions that are based on years of research in the adaptive streaming area. In combination with Fastly and Google, we are able to deliver an end to end experience from encoding to streaming to playback that can easily compete with YouTube or Netflix.

Your Workflow with Bitmovin, Fastly and Google

The workflow with Bitmovin, Fastly and Google is as follows. Your input video (the video that you want to make available on the web) is added to the Bitmovin encoding service through HTTP, FTP, Google Cloud Storage, Amazon S3, or Azure. This can be done using the web portal or our encoding API. Your video will then be encoded and prepared for web playback so that it loads fast, does not buffer and can be played back in the highest possible quality, depending on the bandwidth of your users.
The next step is to transfer your encoded video to Google Cloud Storage, which connects directly to Fastly through the Google CDN Interconnect. The CDN Interconnect enables selected CDN providers such as Fastly to connect directly to Google's edge network. Customers benefit from reduced costs, as CDN Interconnect traffic is much cheaper than regular egress traffic, and from increased performance.
Google Cloud Storage acts as the origin and Fastly delivers the videos to your users. The components of this chain (Bitmovin, Google, Fastly) work together tightly, and their performance guarantees fast loading times, no buffering and the highest quality for your users. If one component of such a chain is not performing well, it will spoil the whole user experience!
Google Cloud Storage and Fastly working with Bitmovin - Tutorial

Setup Bitmovin Video Encoding

Basically you just need to create an account, or if you already have one, just log in. We offer a free plan with 2.5GB of free encoding – that's great for testing and integrating our service into your app – no credit card required, no matter how long it takes.
After that you can follow our general getting started guide. After encoding you need to create a Google Cloud Storage Output as described in the following sections.

Create Google Cloud Storage Output

The output you create points to your Google Cloud Storage bucket, which will be used as the origin for Fastly. You can select which bucket you want to use, as well as the folder (prefix), and whether you want to make the content public or not. Here you can see an example:
Bitmovin Fastly and Google

Transfer Encoding to Google Cloud Storage Bucket

Now you need to go back to your encoding and select the transfer tab at the top. Then select your already created Cloud Storage output and just hit the Start Transfer button. Your encoded video will now be transferred to your bucket, including all qualities (representations/renditions) as well as metadata, e.g., MPDs and M3U8s.
Bitmovin Fastly and Google

API Example

The same steps can be achieved through our API or with one of our API clients with a few lines of code, e.g. PHP:

PHP Example Code
/* CONFIGURATION */
Bitcodin::setApiToken('insertYourApiKey');
$inputConfig = new HttpInputConfig();
$inputConfig->url = "https://eu-storage.bitcodin.com/inputs/Sample-Input-Video.mkv";
$input = Input::create($inputConfig);
/* CREATE OUTPUT CONFIG */
$outputConfig = new GcsOutputConfig();
$outputConfig->name = "TestGcsOutput";
$outputConfig->accessKey = "yourGcsAccessKey";
$outputConfig->secretKey = "yourGcsSecretKey";
$outputConfig->bucket = "yourBucketName";
$outputConfig->prefix = "path/to/your/outputDirectory";
$outputConfig->makePublic = false;
$output = Output::create($outputConfig);
$encodingProfileConfig = new EncodingProfileConfig();
$encodingProfileConfig->name = 'MyApiTestEncodingProfile';
/* CREATE VIDEO STREAM CONFIGS */
$videoStreamConfig1 = new VideoStreamConfig();
$videoStreamConfig1->bitrate = 4800000;
$videoStreamConfig1->height = 1080;
$videoStreamConfig1->width = 1920;
$encodingProfileConfig->videoStreamConfigs[] = $videoStreamConfig1;
$videoStreamConfig2 = new VideoStreamConfig();
$videoStreamConfig2->bitrate = 2400000;
$videoStreamConfig2->height = 720;
$videoStreamConfig2->width = 1280;
$encodingProfileConfig->videoStreamConfigs[] = $videoStreamConfig2;
$videoStreamConfig3 = new VideoStreamConfig();
$videoStreamConfig3->bitrate = 1200000;
$videoStreamConfig3->height = 480;
$videoStreamConfig3->width = 854;
$encodingProfileConfig->videoStreamConfigs[] = $videoStreamConfig3;
/* CREATE AUDIO STREAM CONFIGS */
$audioStreamConfig = new AudioStreamConfig();
$audioStreamConfig->bitrate = 128000;
$encodingProfileConfig->audioStreamConfigs[] = $audioStreamConfig;
/* CREATE ENCODING PROFILE */
$encodingProfile = EncodingProfile::create($encodingProfileConfig);
$jobConfig = new JobConfig();
$jobConfig->encodingProfile = $encodingProfile;
$jobConfig->input = $input;
$jobConfig->output = $output;
$jobConfig->manifestTypes[] = ManifestTypes::M3U8;
$jobConfig->manifestTypes[] = ManifestTypes::MPD;
/* CREATE JOB */
$job = Job::create($jobConfig);
/* WAIT TIL JOB IS FINISHED */
do {
    $job->update();
    sleep(1);
} while ($job->status != Job::STATUS_FINISHED && $job->status != Job::STATUS_ERROR);

Enable CORS and crossdomain.xml on the Cloud Storage

To stream MPEG-DASH content through HTML5 you need to enable Cross-Origin Resource Sharing (CORS) on your bucket. Just follow the Google documentation. An example CORS configuration could look like this:

[
  {
    "origin": ["*"],
    "responseHeader": ["Content-Type", "Range"],
    "method": ["GET", "HEAD"],
    "maxAgeSeconds": 3600
  }
]
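If you prefer the command line over the web console, such a configuration can also be applied with the gsutil tool (a minimal sketch, assuming the JSON above is saved as cors.json and your bucket is named your-bucket):

gsutil cors set cors.json gs://your-bucket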

Setup Fastly with Google Cloud Storage

Fastly provides a setup guide on how to connect your Fastly account with Google Cloud Storage. Basically, Google Cloud Storage will be used as an origin, and it is also possible to configure it with private content. Besides that, it probably makes sense to set up Fastly origin shielding, which protects your application servers from continuous requests for content.

Setup the Bitmovin Adaptive Streaming Player

For this setup it's best to host the Bitmovin player on Google Cloud Storage so that it is delivered through Fastly alongside the media files. You can follow our self-hosted player tutorial, which shows how you can host the player on your own server, on cloud storage such as Google Cloud Storage, or directly on the CDN.
After you have uploaded all files to Google Cloud Storage and configured the player correctly you should see your video playing through Fastly with the Bitmovin player.

Summary

Delivering videos on the internet in a way that they load fast, do not buffer and play in the highest possible quality is a complex, resource-intensive and time-consuming task. But with Bitmovin, Fastly and Google you can set up a Netflix-like streaming experience in just a few steps.
You have no fixed infrastructure costs for encoders, origin servers and bandwidth, as every component scales with your volume. We continuously improve our encoding service and player, which means you do not need to buy hardware year after year to stay on top. Streaming services that use cutting-edge streaming standards such as MPEG-DASH and HLS need specialized knowledge to configure and use these standards effectively; otherwise you will not receive the benefits that are promised (low startup delay, no buffering, high quality).
The Bitmovin encoding service can be seamlessly integrated into your workflows through our API clients and REST API for encoding. The same applies to our HTML5 adaptive player, which offers a comprehensive JavaScript API that can be used to fully customize the player, as well as skinning to fit your needs. If things are unclear you can take a look at our tutorials, and if you don't see what you need, please feel free to contact our support for direct help.

The post Adaptive Bitrate Streaming – Bitmovin, Fastly and Google Cloud Storage appeared first on Bitmovin.

MPEG-DASH HEVC Encoding
https://bitmovin.com/blog/mpeg-dash-hevc-encoding/ – Tue, 15 Dec 2015

High Efficiency Video Coding (HEVC) or H.265 is a compression standard that doubles the compression efficiency while maintaining similar or same video quality compared to its predecessor H.264/AVC

High Efficiency Video Coding (HEVC) or H.265 is a compression standard that was jointly developed by ISO/IEC MPEG SC29/WG11 and the ITU-T Video Coding Experts Group (VCEG). The standard was officially ratified by MPEG on the 13th of April 2013. As with every new video codec, the goal of HEVC was to double the compression efficiency while maintaining similar or the same video quality compared to its predecessor H.264/AVC. HEVC has been designed to take advantage of very high resolutions such as 4K and 8K, which usually contain very wide areas of similar blocks. Therefore, one of the first things that was changed was the block size on which the codec operates. H.264/AVC used blocks of at most 16 by 16 pixels; these were replaced by Coding Tree Units (CTUs) that can reach a maximum size of 64 by 64 pixels. This allows big homogeneous areas to be exploited, as very big blocks can be used, which enables higher efficiency. HEVC also only allows Context Adaptive Binary Arithmetic Coding (CABAC) as its entropy coder, as it is the most efficient one, and decoding performance considerations are less important now that every smartphone already has two or more cores and built-in decoding chipsets.
MPEG DASH HEVC Encoding quality comparison
There are several other small improvements that lead to the encoding efficiency gain, and the performance of HEVC implementations will definitely increase over the next few years. Additionally, MPEG added support for HEVC in several container formats such as the MPEG-2 Transport Stream (MPEG-2 TS) and the ISO Base Media File Format (ISOBMFF) that is used as the basis for MP4. As MPEG-DASH is container and codec agnostic, HEVC is supported anyway, but with the support in the MP4 container format it's even easier to integrate HEVC seamlessly into your workflow.

MPEG DASH HEVC Encoding with Bitmovin’s API

MPEG DASH HEVC encoding with Bitmovin is very simple and is supported through our REST API and our API clients such as PHP and Python. Basically there is just one parameter in the encoding profile that needs to be changed: the codec attribute needs to be set to hevc. This attribute is optional, which means that if you don't specify it, we will use h264 as the codec. Currently MPEG DASH HEVC is supported within all plans and for all of our users.

MPEG DASH HEVC encoding with the Bitmovin API
Python API Client MPEG DASH HEVC

A simple Python example of an MPEG DASH HEVC encoding can be found at our python github example.

  1. Follow the instructions on github to setup bitcodin-python.
  2. Make sure to set the correct API key:
    bitcodin.api_key = 'INSERT YOUR API KEY'
  3. Create an MPEG DASH HEVC encoding profile
    video_configs = list()
    video_configs.append(bitcodin.VideoStreamConfig(
        default_stream_id=0,
        bitrate=1024000,
        profile='Main',
        preset='standard',
        height=768,
        width=1024,
        codec='hevc'
    ))
    audio_configs = [bitcodin.AudioStreamConfig(default_stream_id=0,
                                                bitrate=192000)]
    encoding_profile_obj = bitcodin.EncodingProfile('HEVC Encoding Profile',
                                                     video_configs,
                                                     audio_configs)
    encoding_profile = bitcodin.create_encoding_profile(encoding_profile_obj)
    
  4. Create and start MPEG DASH HEVC encoding
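    # Note: input_result comes from an input created earlier with the Python
    # client (see the full github example); the input creation is not shown
    # in this excerpt.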
    manifests = ['mpd']
    job = bitcodin.Job(
        input_id=input_result.input_id,
        encoding_profile_id=encoding_profile.encoding_profile_id,
        manifest_types=manifests
    )
    job_result = bitcodin.create_job(job)
    

Playback with the Bitmovin Player

The Bitmovin Adaptive Streaming Player utilizes the browser's built-in HTML5 Media Source Extensions (MSE) to play back HEVC natively through the browser's decoding engine. If the underlying browser supports HEVC, the Bitmovin Player supports it too. Currently this works on Microsoft Edge and in native Android apps based on ExoPlayer, on devices that support HEVC. With MPEG-DASH you can use H.264/AVC in parallel with H.265/HEVC, and only devices that are capable of playing HEVC will use the HEVC Representations.

Conclusion

HEVC is probably the most efficient codec on the market and it's worth trying out. The integration with MPEG-DASH is seamless, and the Bitmovin API allows you to set up an HEVC encoding workflow in just a few minutes. If you want to try our MPEG-DASH HEVC encoding and playback with your content, you can use our free plan with 2.5GB encoding output per month – great for testing and playing around with our HEVC encoder.
See an MPEG-DASH example using HEVC: scroll down to the Telecom ParisTech, GPAC: UHD HEVC DASH Dataset.

The post MPEG-DASH HEVC Encoding appeared first on Bitmovin.

Video Stream Cropping for MPEG-DASH and HLS
https://bitmovin.com/blog/video-stream-cropping-mpeg-dash-hls/ – Tue, 20 Oct 2015

Having advanced features such as cropping (changing the size of the video) available with MPEG-DASH and HLS, through a unified API with API clients for different languages can be very handy

Streaming has become more complex in the last few years, with modern technologies such as MPEG Dynamic Adaptive Streaming over HTTP (MPEG-DASH) and Apple HTTP Live Streaming (HLS). But it's worth it! Improving the quality of the user experience is paramount, as shown in one of our recent blog posts about video quality and user problems. Having advanced features such as cropping (changing the size of the video) available with MPEG-DASH and HLS, through a unified API with API clients for different languages, can be very handy. Our team implemented this feature and it can be used in a very simple but effective way through our encoding profiles.

Cropping

The cropping configuration is part of the encoding profile and is documented in our developer section. It contains four parameters: top, bottom, right and left. These parameters specify how many pixels of the input video will be cropped on each side. You can use this feature to cut off black bars, to implement zoom effects, or to simply select a specific part of the video that is important.

"croppingConfig": {
    "top": 100,
    "bottom": 100,
    "right": 5,
    "left": 5
}
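Through our Python API client the same configuration can be attached to an encoding profile. The following is a minimal sketch, assuming the client exposes a CroppingConfig class that mirrors the JSON above (the class and parameter names are illustrative – see the zoom-effect example in the client repository for the exact API):

import bitcodin

bitcodin.api_key = 'INSERT YOUR API KEY'

# Cut 100 pixels off the top and bottom and 5 pixels off each side
# (hypothetical class mirroring the croppingConfig JSON above)
cropping_config = bitcodin.CroppingConfig(top=100, bottom=100, left=5, right=5)

video_configs = [bitcodin.VideoStreamConfig(
    default_stream_id=0,
    bitrate=1200000,
    width=854,
    height=480
)]
audio_configs = [bitcodin.AudioStreamConfig(default_stream_id=0, bitrate=128000)]

encoding_profile_obj = bitcodin.EncodingProfile(
    'Cropping Profile',
    video_configs,
    audio_configs,
    cropping_config=cropping_config
)
encoding_profile = bitcodin.create_encoding_profile(encoding_profile_obj)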

Zoom Effect with Cropping

Our Python API client contains an example that implements a zoom effect on the Sintel movie. On the left side you can see the original movie, while on the right side you can see a cropped version of the video, which looks like a zoomed-in version.

Crop your Video Streams with Bitmovin for Free

If you want to crop your MPEG-DASH and HLS content, you can use the Bitmovin encoding service, which offers a free plan with 2.5GB encoding output per month. That's great for testing and playing around with our cropping feature. If you are missing something in our cropping feature, just drop us a line – we are always interested in your feedback, which helps us to improve our service.

The post Video Stream Cropping for MPEG-DASH and HLS appeared first on Bitmovin.

Add Watermarks to Video Streams
https://bitmovin.com/blog/add-watermark-to-video-streams/ – Fri, 09 Oct 2015
Recently we added several features to the Bitmovin encoding service, including the ability to watermark your video streams. This enables our users to brand their streams in several ways using PNGs, JPGs or even GIFs, and to set the exact position of the image in the video stream. Watermarking is part of our encoding profiles, so you can define a watermarking profile once and use it for different videos. Furthermore, as the Bitmovin encoding service can output adaptive streams with multiple different resolutions, we scale your image based on the watermark configuration so that it works seamlessly with all resolutions defined in the profile. This means that our watermarks are also adaptive and can be applied to your MPEG-DASH and Apple HLS streams.

Bitmovin Adaptive Streaming HTML5 Player - How to add a watermark to video
The Bitmovin watermarking supports the following features:

  • PNG, JPG, GIF and BMP
  • set exact position in the video
  • automatic scaling with your encoding profile resolutions

If you are missing a feature in our watermarking that is needed for your business, we are happy to help – just drop us a line!

How to add Watermarks to Video: Watermarking Config

The watermarking configuration is part of the encoding profile, which also contains the video and audio stream configurations. Applying a watermark configuration to the encoding profile will overlay all video streams with the specified watermark image at the specified position. The position of the image in pixels can be set via the 'top', 'bottom', 'left' and 'right' parameters of the watermark configuration. Only one horizontal and one vertical distance parameter is allowed. The following example uses the top and left parameters to define the position of the image in the video:

"watermarkConfig": {
    "top": 10,
    "left": 100,
    "image": "https://your.server/watermark.png"
}

You can find more information about the encoding profile in general and encoding profiles with watermarks in our developer section.

API Client Examples

Our API clients also contain examples for watermarking, such as the Python API client – just take a look at the create_job_with_watermark.py example. There, an encoding profile is created with 4 different video resolutions and bitrates, as well as an audio representation with 192Kbps. Additionally, a watermark configuration is added to the encoding profile, shown in the excerpt below:

watermark_config = bitcodin.WatermarkConfig(
    image_url='https://bitdash-a.akamaihd.net/webpages/bitcodin/images/bitcodin_transparent_50_1600.png',
    top=191,
    right=159
)
encoding_profile_obj = bitcodin.EncodingProfile(
    name='API Test Profile',
    video_stream_configs=video_configs,
    audio_stream_configs=audio_configs,
    watermark_config=watermark_config
)

The watermark configuration points to a PNG image and sets the position through the top and right attributes. The PNG is transparent and will be inserted in the middle of the video stream. You can easily try out this example by inserting your API key at the top and executing it on the command line with python create_job_with_watermark.py.

Add Watermarks to Video Streams with Bitmovin for Free

If you want to watermark your MPEG-DASH and HLS content, you can use our Bitmovin encoding service with a free plan that includes 2.5GB encoding output per month. That's great for testing and playing around with our watermarking feature!
For more information on how to add watermarks to your videos, see our support section.

The post Add Watermarks to Video Streams appeared first on Bitmovin.
