Providing a Premium Audio Experience in HLS with the Bitmovin Encoder
https://bitmovin.com/blog/premium-hls-audio/ – Mon, 01 Jul 2024

Introduction

Many streaming providers are looking for ways to offer a more premium, high-quality experience to their users. One often overlooked component in streaming quality is audio – more specifically, which audio bitrates, channel layouts, and even audio languages are available, and how these options can be delivered to viewers on a range of devices. While there are many ways of improving video streaming quality and experience, such as Per-Title Encoding, Multi-Bitrate Video, High Dynamic Range (HDR), and high resolutions, there are also some great ways of enhancing a user’s experience with premium HLS audio. Some of the most important considerations for audio streaming are:

  • Adaptive Streaming: serving multiple audio bitrates for various streaming conditions
  • Reduced Bandwidth & Device Compatibility: multi-codec audio for better compression at reduced bitrates
  • Improved User Experience: 5.1 (or greater) surround sound or even lossless audio
  • Accessibility and Localization: such as multi-language or descriptive audio

You can learn even more about how audio encoding affects the streaming experience in this blog.

In Bitmovin’s 2023-24 Video Developer Report, we saw that immersive audio ranked in the top 15 areas for innovation, while audio transcription was the #1 ranked use-case for AI and ML. Furthermore, though AAC remains the most widely used audio codec – mostly due to its wide device support – we see that Dolby Digital/+ and Dolby Atmos are the #2 and #3 ranked audio codecs that streaming companies are either currently supporting or planning to support in the near future.

Audio codec usage – source: Bitmovin Video Developer Report

With HLS and its multivariant approach, this is all possible; but understanding just how to construct and organize your HLS multivariant playlist can be tricky at first. In this tutorial we will look at some best practices in HLS for serving alternate audio renditions, with an example at the end of this article showcasing how simply this can be done using the Bitmovin Encoder.

Basic audio stream packaging

The most basic way to package audio for HLS is to mux the audio track together with each video track. This works for very simple configurations where you only need to output a single AAC stereo audio track at a single bitrate. While the benefit of this approach is simplicity, it has many limitations: it cannot support multi-channel surround sound, advanced codecs, or multi-language audio. Keeping audio and video muxed together is also inefficient for storage and delivery, as each video variant duplicates the same audio. Demuxing audio and video, by contrast, allows the use of other containers like fragmented MP4 or CMAF, which are more performant for client devices since they don’t require demuxing or transmuxing segments in real time.

A multivariant playlist output for this would look something like:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-STREAM-INF:BANDWIDTH=4255267,AVERAGE-BANDWIDTH=4255267,CODECS="avc1.4d4032,mp4a.40.2",RESOLUTION=2560x1440
manifest_1.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=3062896,AVERAGE-BANDWIDTH=3062896,CODECS="avc1.4d4028,mp4a.40.2",RESOLUTION=1920x1080
manifest_2.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=1591232,AVERAGE-BANDWIDTH=1591232,CODECS="avc1.4d4028,mp4a.40.2",RESOLUTION=1600x900
manifest_3.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=1365632,AVERAGE-BANDWIDTH=1365632,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=1280x720
manifest_4.m3u8

#EXT-X-STREAM-INF:BANDWIDTH=862995,AVERAGE-BANDWIDTH=862995,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=960x540
manifest_5.m3u8

Audio/Video demuxing

A better approach is to demux the audio and video tracks – luckily, HLS makes this simple through EXT-X-MEDIA playlists, which are the standard way of declaring alternate content renditions for audio, subtitles, closed captions, or video (mostly used for alternate viewing angles, such as in live sports). By using EXT-X-MEDIA to decouple audio from video, we can add many great audio features, such as alternate/dubbed language tracks, surround sound tracks, multiple audio qualities, and multi-codec audio.

By supplying audio tracks with EXT-X-MEDIA tags, we can explicitly add each audio track that we want to output and group them together – then we can correlate each Video Variant (EXT-X-STREAM-INF) with one of the grouped Audio Media playlists.

Using the previous example of a single AAC Stereo Audio track, a demuxed audio/video output would look like:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-INDEPENDENT-SEGMENTS

#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AAC_Stereo",LANGUAGE="en",NAME="English - Stereo",AUTOSELECT=YES,DEFAULT=YES,URI="audio_aac.m3u8"

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4032,mp4a.40.2",RESOLUTION=2560x1440,AUDIO="AAC_Stereo"
manifest_1.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4028,mp4a.40.2",RESOLUTION=1920x1080,AUDIO="AAC_Stereo"
manifest_2.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4028,mp4a.40.2",RESOLUTION=1600x900,AUDIO="AAC_Stereo"
manifest_3.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=1280x720,AUDIO="AAC_Stereo"
manifest_4.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=960x540,AUDIO="AAC_Stereo"
manifest_5.m3u8

Here, you can see we first declare a single Audio Media (EXT-X-MEDIA) playlist for our audio track and give it a GROUP-ID attribute value of “AAC_Stereo“. Then each Video Variant’s EXT-X-STREAM-INF tag uses the AUDIO attribute to associate its video track with the Audio Media group “AAC_Stereo“.

Multiple audio bitrates

But now let’s imagine we want to better optimize our adaptive streaming by delivering our AAC stereo audio in multiple bitrates, such as high (196kbps) and low (64kbps), so that the higher resolution Video Variants can take advantage of higher quality audio given the increased bandwidth when streaming those variants. We can accomplish this by encoding our audio into both low and high bitrate outputs, grouping them separately, and then deciding which Video Variant gets which audio bitrate/quality by default – for example, our 720p and below variants get the lower quality audio, while our full HD and above variants get the higher quality audio. These are only defaults, though, because most modern Players that stream HLS allow independently picking which audio quality to play based on adaptive bitrate streaming conditions.

An example utilizing low and high AAC stereo tracks would look like:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-INDEPENDENT-SEGMENTS

#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac-stereo-64",LANGUAGE="en",NAME="English - Stereo",AUTOSELECT=YES,DEFAULT=YES,URI="audio_aac_64k.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac-stereo-196",LANGUAGE="en",NAME="English - Stereo",AUTOSELECT=YES,DEFAULT=NO,URI="audio_aac_196k.m3u8"

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4032,mp4a.40.2",RESOLUTION=2560x1440,AUDIO="aac-stereo-196"
manifest_1.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4028,mp4a.40.2",RESOLUTION=1920x1080,AUDIO="aac-stereo-196"
manifest_2.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4028,mp4a.40.2",RESOLUTION=1600x900,AUDIO="aac-stereo-196"
manifest_3.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=1280x720,AUDIO="aac-stereo-64"
manifest_4.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d401f,mp4a.40.2",RESOLUTION=960x540,AUDIO="aac-stereo-64"
manifest_5.m3u8

In this example, we now have two audio tracks, one for each bitrate, and therefore two Audio Media (EXT-X-MEDIA) playlists defined, each with a unique GROUP-ID attribute but the same NAME attribute. This is a good way of declaring that the audio tracks are the same language, channel configuration, and codec, but at different qualities. Each Video Variant (EXT-X-STREAM-INF) that is 720p or less sets its AUDIO group to the low bitrate audio track (GROUP-ID="aac-stereo-64"), while the variants above 720p get the higher bitrate AUDIO group (GROUP-ID="aac-stereo-196") by default (but again, most Players can manage the audio tracks independently for optimal adaptive streaming).
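This default assignment can be expressed as a small helper. A minimal sketch in Python, assuming the 720p threshold and the GROUP-ID values from the example manifest above (the function name is illustrative):

```python
def default_audio_group(height: int) -> str:
    """Pick the default audio GROUP-ID for a video variant, using the
    720p threshold described above (group IDs from the example manifest)."""
    return "aac-stereo-196" if height > 720 else "aac-stereo-64"
```

A packaging script could call this while emitting each EXT-X-STREAM-INF line, so the grouping rule lives in one place.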

This is at least an improvement on the previous single-bitrate audio packaging – but there are still plenty of enhancements we can make!

More efficient AAC

The previous examples all rely on Low Complexity AAC (AAC-LC) because this basic audio codec is supported by every playback device. It is necessary to always have at least one AAC-LC track to support older devices. However, most devices these days support more efficient versions of AAC, such as High Efficiency AAC (AAC-HE), which comes in two main versions: v2, used for bitrates up to 48kbps, and v1, used for bitrates up to 96kbps.
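As a rough sketch (in Python, using the bitrate thresholds just mentioned and the standard RFC 6381 codec strings that appear in this article’s manifest examples), choosing an AAC profile for a target bitrate might look like:

```python
def pick_aac_profile(bitrate_kbps: int) -> tuple[str, str]:
    """Pick an AAC profile and its RFC 6381 CODECS string for a target
    bitrate, using the rule-of-thumb thresholds described above."""
    if bitrate_kbps <= 48:
        return ("AAC-HE v2", "mp4a.40.29")
    if bitrate_kbps <= 96:
        return ("AAC-HE v1", "mp4a.40.5")
    return ("AAC-LC", "mp4a.40.2")
```

The returned codec string is what goes into the audio portion of each variant’s CODECS attribute.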

So let’s adapt our previous example to not rely on two (or more) different AAC-LC audio tracks, and instead output one AAC-HE v1, one AAC-HE v2, and one AAC-LC rendition. The tricky part here is that we want to place each of the above in a different GROUP-ID so that the Player client can decide which to use based on the codecs it supports – but we also want each Video Variant to be able to use any of those audio tracks. To accomplish this, all we need to do is duplicate each Video Variant for each of the 3 unique Audio Media GROUP-IDs.

A note on grouping audio renditions

The Apple authoring spec recommends creating one audio group for each pair of codec and channel count.

We now have 3 different versions of the AAC codec, so we will have 3 different audio groups.

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-INDEPENDENT-SEGMENTS

#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac_lc-stereo-128k",LANGUAGE="en",NAME="English - Stereo",AUTOSELECT=YES,DEFAULT=YES,URI="audio_aaclc_128k.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac_he1-stereo-64k",LANGUAGE="en",NAME="English - Stereo",AUTOSELECT=YES,DEFAULT=NO,URI="audio_aache1_64k.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac_he2-stereo-32k",LANGUAGE="en",NAME="English - Stereo",AUTOSELECT=YES,DEFAULT=NO,URI="audio_aache2_32k.m3u8"

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4032,mp4a.40.2",RESOLUTION=2560x1440,AUDIO="aac_lc-stereo-128k"
manifest_1.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4032,mp4a.40.5",RESOLUTION=2560x1440,AUDIO="aac_he1-stereo-64k"
manifest_1.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4032,mp4a.40.29",RESOLUTION=2560x1440,AUDIO="aac_he2-stereo-32k"
manifest_1.m3u8

## Repeat above approach for each additional Video Variant

In this example, you can see that we replicated the 1440p variant 3 times – one for each Audio Media GROUP-ID – which would then be repeated for each additional Video Variant. This allows the client Player to decide, for a given Video Variant, which audio track group to use based upon codec support and streaming conditions. Also note how each Video Variant’s CODECS attribute is updated to carry the corresponding audio codec identifier.
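Hand-editing these playlists is error-prone, so a quick sanity check helps. This Python sketch (function name and regexes are illustrative, not from any library) verifies that every AUDIO reference on an EXT-X-STREAM-INF line points at a declared EXT-X-MEDIA group:

```python
import re

def check_audio_groups(playlist: str) -> list[str]:
    """Return AUDIO group references on EXT-X-STREAM-INF tags that have
    no matching EXT-X-MEDIA GROUP-ID declaration in the playlist."""
    declared = set(re.findall(r'#EXT-X-MEDIA:[^\n]*?GROUP-ID="([^"]+)"', playlist))
    referenced = re.findall(r'#EXT-X-STREAM-INF:[^\n]*?AUDIO="([^"]+)"', playlist)
    return [group for group in referenced if group not in declared]
```

An empty result means every variant’s AUDIO attribute resolves to a declared audio group; anything returned is a dangling reference that would break audio selection on the client.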

Surround sound audio

Now, let’s say we also want to support 5.1 surround sound for those clients that can benefit from it. For this, we have to decide which surround sound codec we want to support. Let’s use Dolby Digital AC-3 for this example. Since we are now relying on a more advanced audio codec for the optimal surround experience, it is also important to consider devices that may have 5.1 or greater speaker setups but can NOT decode Dolby Digital. For those, we will include a secondary 5.1 track using the basic AAC-LC codec. So we will create 2 new Audio Media playlists with unique GROUP-ID and NAME attributes.

A note on downmixing from 5.1 audio sources

In this example, we will assume the source has a Dolby Digital surround audio track. From that single audio source, we will create our AC-3 surround track, implicitly convert it to our AAC surround track, and automatically downmix the source 5.1 to our various AAC 2.0 stereo outputs using the Bitmovin Encoder, as shown in the sample code at the bottom of this article. Alternatively, you can do all sorts of mixing and channel-swapping, as well as work with distinct audio input files – for example, a separate file for each channel. You can learn more about that here.

Don’t forget about grouping audio renditions

As previously mentioned, the Apple authoring spec recommends creating one audio group for each pair of codec and channel count.

We now have 5 unique combinations of codec and channel count, so we will have 5 different audio groups.

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-INDEPENDENT-SEGMENTS

#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac_lc-stereo-128k",LANGUAGE="en",NAME="English - Stereo",AUTOSELECT=YES,DEFAULT=YES,URI="audio_aac_128k.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac_he1-stereo-64k",LANGUAGE="en",NAME="English - Stereo",AUTOSELECT=YES,DEFAULT=NO,URI="audio_aache1_64k.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac_he2-stereo-32k",LANGUAGE="en",NAME="English - Stereo",AUTOSELECT=YES,DEFAULT=NO,URI="audio_aache2_32k.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="aac_lc-5_1-320k",LANGUAGE="en",NAME="English - 5.1",AUTOSELECT=YES,DEFAULT=NO,URI="audio_aac_lc_5_1_320k.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="dolby",LANGUAGE="en",NAME="English - Dolby",CHANNELS="6",URI="audio_dolbydigital.m3u8"

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4032,mp4a.40.2",RESOLUTION=2560x1440,AUDIO="aac_lc-stereo-128k"
manifest_1.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4032,mp4a.40.5",RESOLUTION=2560x1440,AUDIO="aac_he1-stereo-64k"
manifest_1.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4032,mp4a.40.29",RESOLUTION=2560x1440,AUDIO="aac_he2-stereo-32k"
manifest_1.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4032,mp4a.40.2",RESOLUTION=2560x1440,AUDIO="aac_lc-5_1-320k"
manifest_1.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4d4032,ac-3",RESOLUTION=2560x1440,AUDIO="dolby"
manifest_1.m3u8


## Repeat above approach for each additional Video Variant

Here you can see that the 1440p variant is now replicated a total of 5 times, once for each Audio Media GROUP-ID, which allows the client Player to select the most appropriate audio and video track combination.

Again, note how each duplicated Video Variant has an updated CODECS attribute to represent the audio codec associated with it. One major reason we duplicate each Video Variant for each Audio Media GROUP-ID is that most devices cannot handle switching between audio codecs during playback; so as the Player’s adaptive bitrate logic switches between Video Variants, it will pick the variant that uses the same audio codec it has been playing. Additionally, in HLS we cannot simply list the Video Variant once and add all of the various audio codecs to its CODECS attribute, because the HLS specification requires the client device to support ALL of the codecs listed on a given Video Variant (EXT-X-STREAM-INF) to avoid possible playback failures. So instead, we separate out the Video Variants per codec + channel count combination.
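The duplication described above is mechanical, so it can be scripted. A Python sketch (the function name and tuple shapes are hypothetical, not from any SDK) that emits one EXT-X-STREAM-INF per audio group for each video variant:

```python
def expand_variants(video_variants, audio_groups):
    """For each (video_codec, resolution, uri) variant, emit one
    EXT-X-STREAM-INF per (group_id, audio_codec) audio group, pairing the
    video codec with that group's audio codec as described above."""
    lines = []
    for video_codec, resolution, uri in video_variants:
        for group_id, audio_codec in audio_groups:
            lines.append(
                f'#EXT-X-STREAM-INF:CODECS="{video_codec},{audio_codec}",'
                f'RESOLUTION={resolution},AUDIO="{group_id}"'
            )
            lines.append(uri)
    return lines
```

With 5 video variants and 5 audio groups this yields 25 variant entries – exactly the kind of repetition a manifest generator handles for you.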

Multi-language audio

This is all great, but what if we want to support additional dubbed audio languages or even Descriptive Audio tracks? Luckily, that is rather simple to do. We can create additional Audio Media playlists for each language and add them to the existing GROUP-IDs – which are already logically grouped by codec and channel pairing per the Apple authoring spec – depending on which codecs and formats we want to support.

#EXTM3U
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-VERSION:6
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AAC-HE-V1-Stereo",NAME="English-Stereo",LANGUAGE="en",DEFAULT=NO,URI="audio_aache1_stereo.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AAC-HE-V1-Stereo",NAME="Spanish-Stereo",LANGUAGE="es",DEFAULT=NO,URI="audio_aache1_stereo_es.m3u8"

#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AAC-HE-V2-Stereo",NAME="English-Stereo",LANGUAGE="en",DEFAULT=NO,URI="audio_aache2_stereo.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AAC-HE-V2-Stereo",NAME="Spanish-Stereo",LANGUAGE="es",DEFAULT=NO,URI="audio_aache2_stereo_es.m3u8"

#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AAC-LC-5.1",NAME="English-5.1",LANGUAGE="en",DEFAULT=NO,URI="audio_aaclc-5_1.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AAC-LC-5.1",NAME="Spanish-5.1",LANGUAGE="es",DEFAULT=NO,URI="audio_aaclc-5_1_es.m3u8"

#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AAC-LC-Stereo",NAME="English-Stereo",LANGUAGE="en",DEFAULT=NO,URI="audio_aaclc_stereo.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AAC-LC-Stereo",NAME="Spanish-Stereo",LANGUAGE="es",DEFAULT=NO,URI="audio_aaclc_stereo_es.m3u8"

#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AC-3-5.1",NAME="English-Dolby",LANGUAGE="en",CHANNELS="6",DEFAULT=NO,URI="dolby-ac3-5_1.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="AC-3-5.1",NAME="Spanish-Dolby",LANGUAGE="es",CHANNELS="6",DEFAULT=NO,URI="dolby-ac3-5_1_es.m3u8"

#EXT-X-STREAM-INF:...,CODECS="avc1.4D401F,ac-3",RESOLUTION=1280x720,AUDIO="AC-3-5.1"
video_720_3000000.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4D401F,mp4a.40.29",RESOLUTION=1280x720,AUDIO="AAC-HE-V2-Stereo"
video_720_3000000.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4D401F,mp4a.40.2",RESOLUTION=1280x720,AUDIO="AAC-LC-Stereo"
video_720_3000000.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4D401F,mp4a.40.2",RESOLUTION=1280x720,AUDIO="AAC-LC-5.1"
video_720_3000000.m3u8

#EXT-X-STREAM-INF:...,CODECS="avc1.4D401F,mp4a.40.5",RESOLUTION=1280x720,AUDIO="AAC-HE-V1-Stereo"
video_720_3000000.m3u8

How does this differ from DASH?

In DASH, demuxed audio and video tracks are grouped into separate AdaptationSets for a given Period. This means a Video AdaptationSet is not directly linked to one specific audio track; rather, the client Player independently picks a video Representation from the video AdaptationSet and an audio Representation from the audio AdaptationSet. So with DASH, we don’t have to re-state video tracks for each group of audio tracks, as they are managed independently of each other.
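For comparison, a trimmed, illustrative DASH Period (ids, codecs, and bandwidth values are placeholders) with independent video and audio AdaptationSets might look like:

```xml
<Period>
  <!-- Video AdaptationSet: the Player picks a Representation independently -->
  <AdaptationSet contentType="video" mimeType="video/mp4">
    <Representation id="video_1080p" codecs="avc1.4d4028" bandwidth="3000000" width="1920" height="1080"/>
    <Representation id="video_720p" codecs="avc1.4d401f" bandwidth="1500000" width="1280" height="720"/>
  </AdaptationSet>
  <!-- One audio AdaptationSet per codec/language; no per-variant duplication needed -->
  <AdaptationSet contentType="audio" mimeType="audio/mp4" lang="en">
    <Representation id="audio_aac_128k" codecs="mp4a.40.2" bandwidth="128000" audioSamplingRate="48000"/>
  </AdaptationSet>
</Period>
```

Note how the audio Representations are declared once, rather than being restated for every video variant as in HLS.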

Additional notes

The video codecs you choose to support may also determine which audio codecs and container formats you use. For example, if you encode video to VP9, you may want to consider the Vorbis or Opus audio codecs.

In this example, we used AC-3 for Dolby Digital 5.1, but you may consider using Enhanced AC-3 (more commonly referred to as E-AC-3) for additional channel support (such as 7.1 or more) or spatial audio support like Dolby Atmos. Other premium surround sound codec options are DTS-HD and DTS:X.

Premium HLS audio example with the Bitmovin Encoder & Manifest Generator

The GitHub sample linked below is a pseudo-code example using the Bitmovin JavaScript/TypeScript SDK that demonstrates outputting multi-bitrate, multi-codec, multi-channel, and multi-language audio tracks. This can greatly enhance the user’s experience, as it allows streaming the best quality and most appropriate audio for each device’s codec support and speaker channel configuration.

With the Bitmovin Encoder, we can use one master audio file/stream per language (Dolby Digital surround in this example) and easily downmix it to 2.0 stereo or implicitly convert it to AAC 5.1. Then, once each desired audio track is created, we use the Bitmovin Manifest Generator to create our HLS multivariant playlists.

Encoding Example For HLS With Multiple Audio Layers

Open-Source vs. Commercial Players: Understanding the True Cost of Ownership
https://bitmovin.com/blog/open-source-vs-commercial-player-cost-analysis/ – Tue, 02 Apr 2024

When choosing between a commercial player and building out an open-source option, there are many factors to consider. Cost is always the major driving factor, but it doesn’t always reflect the resources required for development. In this blog, we will discuss some recent findings from the Bitmovin engineering and integrations teams on what it takes to build a playback solution fit for modern streaming services. We will be focusing on the engineering time required for building such a solution in-house, using open-source and native technologies, versus leveraging a commercial player where many of the necessary items are pre-built. It’s important to understand the short and long-term effects of the decision as it can have a very high and lasting impact on development teams, either keeping them snowed under with necessary maintenance or freeing up time to create unique value for the service. 

To get right into it, let’s take a look at the pros and cons of both choices.


Pros & cons of choosing an open-source player:

Pros: 

  • It’s free; open-source players have no license fees
  • Offers flexibility to create your own plugins and integrations
  • The in-house team is in full control over the player development

Cons:

  • Requires large initial investment (developer time and costs) to get up and running to the standards of modern streaming services
  • Requires in-house team to have or gain video streaming expertise
  • Restricted by the limitations inherent in the open-source player’s architecture 
  • Only have community support available to help the in-house team should any issues arise

Pros & cons of choosing a commercial player:

Pros: 

  • Speeds up the deployment time while providing robust performance and feature-set
  • Player maintenance and development of new, market-trending features are included as part of the package
  • Leverage a team of technical video player experts rather than hiring in-house
  • Frees up time to focus on core business value
  • Access to existing pre-integrations with a wide Partner ecosystem

Cons:

  • It’s not free; commercial players have license fees
  • Usually has less flexibility for creating integrations for the player
  • Restricted by the limitations inherent in the commercial player’s architecture

As can be seen from the breakdown above, there are strong reasons to go with either option. So, ultimately, the decision comes down to a trade-off between cost and flexibility.

Spoiler: At the end of this blog, we’ll discuss how Bitmovin is addressing this decision by offering customers the performance of the Bitmovin Player with the flexibility of open-source, but more on that later.

TCO: Comparing open-source to commercial players in 3 scenarios

When it comes to cost, it is important to understand the break-even point: At what point does licensing a commercial player become more cost-effective in terms of development resources and delivery time than building an in-house system based on open-source? 

Below, we will discuss 3 streaming service scenarios of increasing complexity: browser-based, multi-platform, and enterprise. For each, we will explore how Bitmovin’s Player integrations compare to building in-house solutions by focusing on ‘Time to Market’ and ‘Annual Maintenance’.

Person-Months definition:

As an industry standard, we will use Person-Months to reference how much time it would take a single engineer working full time to complete a project, i.e., 6 PMs = 1 person working full time on a project for 6 months. Note that we will not assume that tasks could be done in parallel by multiple engineers, as in many cases this is not possible; therefore, 6 PMs does not necessarily mean the same work could be achieved in 3 months by 2 engineers, and even less likely in 1 month by 6 engineers.

Scenario 1: Browser-Based Streaming Service

Estimated time showing the difference between what is saved in scenario 1 by deploying the Bitmovin Player compared to open-source players – 60% to 90%

Factor 1 – Time to Market – Feature Aspects:

In this scenario, we focused on browser-based streaming only, which requires the fewest devices and features to satisfy viewers of the 3 scenarios. Here the playback experience includes no additional integrations like ad insertion or content protection, though there may be other feature requirements needed to satisfy the audience and maintain a high-quality user experience, such as:

  • Accessibility standards
  • Closed caption handling (formats, rendering, styling, metadata) 
  • Player user interface modifications 
  • Graphic overlays
  • Additional metadata parsing

Just for these features alone, when developing on an open-source player, our team estimates this feature work to take up to 1 person-month. 

Factor 2 – Time to Market – Adding QA and Event tracking:

Depending on the size of the audience, services will want to consider both:

  • Player test suite for release automation 
  • Simple analytics/statistics to monitor usage and engagement metrics 

For both of these items, our team estimates that it will take about 4 person-months to get a basic test suite and an event-tracking analytics service up and running.

Altogether, we estimate it will take up to 4 person-months just to get up to speed with the experience viewers expect and a service level internal engineers can work with. This is in contrast to the estimated time of 1 person-month to complete an integration of Bitmovin’s Playback solution, which frees up 3 person-months.

Factor 3 – Annual Maintenance:

Whether it’s adapting to changes or edge cases in different browsers, improving the test suite, or fixing simple bugs as they’re reported, we estimate this collectively to cost an engineering team between 2-4 person-months each year – in comparison to the less than 1 person-month a year required for reporting issues to Bitmovin and updating to the latest Player versions once the Bitmovin engineers have resolved them.

Total time saved by utilizing Bitmovin’s Playback solutions

  • Time to Market: 3 person-months
  • Annual Maintenance: 1-3 person-months per year

Scenario 2: Multi-platform Streaming Service

Estimated time showing the difference between what is saved in scenario 2 by deploying the Bitmovin Player compared to open-source players – 60% to 80%

Factor 1 – Time to Market – Additional devices:

In addition to the work required to create the browser-based experience, multi-platform streaming services will also need to reach their viewers on mobile devices. Video playback on Android and iOS is a different beast to web browsers; for one thing, there isn’t a choice of multiple open-source players – in-house solutions will need to be based on the native players:

  • ExoPlayer on Android 
  • AVPlayer on iOS

While having lots of low-level video logic handled by the native player has its advantages, engineers are still required to develop non-base features and customizations on top, all while being restricted by what’s possible in the Android and iOS environments. 

Factor 2 – Time to Market – Monetization:

With a multi-platform streaming service having a large enough user base, these services will likely be thinking about other ways to monetize, such as advertising. Without having the pipeline to encode the ads themselves, client-side ad insertion (CSAI) is the first option for streaming services, with Google’s Interactive Media Ads (IMA) SDK being the standard way to integrate CSAI into a streaming workflow (read more on our blog about CSAI and SSAI). The IMA SDK handles ad playback using a separate video element, leaving streaming services to focus on controlling the main content. However, while integrations of this SDK exist for open-source players, there are still features that aren’t available out of the box, such as ad scheduling, which will require extra development work.

Another consideration aside from monetization is protecting content from unauthorized viewing or distribution using technologies like DRM (Digital Rights Management). Additional integrations like DRM require configuration on the player, with use cases like multi-DRM being more complicated than others.

Our engineers anticipate up to another 6 person-months, with Mobile and CSAI integrations accounting for most of that time, bringing it to a total of 10 person-months to get to market with the expected viewer experience of a multi-platform streaming service. This is in contrast to the total of 3 person-months required to integrate Bitmovin’s dedicated SDKs for Web, iOS, and Android platforms into the various websites and applications, as well as make use of the pre-integrated IMA SDK for Ad insertion or use the dedicated Bitmovin Advertising Module for more control over the ad playback experience. 

Factor 3 – Annual Maintenance:

On a recurring basis, mobile platforms require lots of maintenance to ensure security and compatibility requirements are met, otherwise other library dependencies may fail, or worse, the app may not be accepted into the Google Play or Apple App Store. Our engineering team estimates they spend over 600 hours annually on just baseline maintenance for both iOS and Android together. In addition, any updates to the IMA SDK or Google’s ad workflow in general will need to be accounted for. Including the maintenance required for the browser-based streaming service, we estimate an in-house solution to collectively cost an engineering team 4-6 person-months per year.

This is in contrast to the less than 1 person-month a year required for just reporting issues to Bitmovin and updating to the latest Player versions once the Bitmovin engineers have resolved them.

Total time saved by utilizing Bitmovin’s Playback solutions

  • Time to Market: 7 person-months
  • Annual Maintenance: 3-5 person-months per year

Scenario 3: Enterprise Streaming Service

Estimated time showing the difference between what is saved in scenario 3 by deploying the Bitmovin Player compared to open-source players – 70% to 80%

Factor 1 – Time to Market – Additional devices 

Building on the work required for a multi-platform streaming service, enterprise streaming service use cases are far more complex and are also where the most engineering time can be saved. Viewers using the largest streaming services on the market will expect to be able to stream on every device, particularly focusing on the Living Room, i.e., SmartTV and other connected TV devices such as game consoles and set-top boxes (STBs). A standard set of such devices would include:

  • LG WebOS TV
  • Samsung Tizen TV
  • Hisense TV
  • Vizio TV
  • Playstation 4 & 5
  • Xbox
  • Comcast STB 
  • Sky STB  
  • Roku 

Of these 10 devices, all but Roku make use of browser-based playback, meaning an HTML5 player could be used. However, unlike desktop browsers, these connected TVs and devices can have varying hardware and software specifications that inevitably cause issues with the playback experience. An example of this is Hisense and Vizio TVs, which have a lot of fragmentation across their model ranges (even those released in the same year), meaning different issues can appear on two seemingly similar models. Additionally, different devices can have different browser implementations, such as of the Media Source Extensions (MSE) API; differences in the MSE implementation could mean services need multiple DRM technologies or separate delivery pipelines for specific devices. Also, as mentioned, Roku is not browser-based and requires its own native implementation.

All in all, our team estimates about 5-14 person-months to add support for and integrate with a set of the aforementioned devices. Note that these estimates do not include adding this set of devices to any test automation system for physical device testing.

Factor 2 – Time to Market – Additional Features 

With enterprise-level device coverage comes the expectation of enterprise-level features, which typically only become possible once a streaming service has built out the necessary backend and delivery pipeline at this scale. These use cases may include implementing Server-Side Ad Insertion (SSAI), which can cause many complications on Smart TV devices in particular, such as switching between DRM-protected and clear (no DRM) content and managing the buffer, simply due to non-standardized native API implementations and/or the absence of proper logging. Our team goes into more detail about SSAI and its challenges in this blog.

Another example of an enterprise use case is low latency streaming, which again not only comes with its own implementation challenges but also a lot of work to understand platform-specific limitations. For example, lower-end devices, such as STBs or pre-2018 TV models with less processing power, could struggle to keep a stream at the live edge without stalls, not having a large enough buffer to stabilize playback. For this feature development and additional troubleshooting per platform, our team estimates somewhere between 6-12 person-months to achieve a functional enterprise player solution. 

Given the 10 person-months required for the multi-platform streaming service, the additional work to achieve an enterprise workflow with an in-house solution is at least another year. However, as not all devices or streaming workflows are crucial to every service in its first few years, we estimate an additional 14 person-months is sufficient to add a few connected TV devices and implement SSAI on those platforms.

All in all, we estimate a total of 24 person-months to get to market with an in-house enterprise playback solution that meets the expected viewer experience. This is in contrast to the 8 person-months required for integrating Bitmovin’s Player SDKs with dedicated modules for specific devices and intuitive APIs to abstract away the nuances of the native platform APIs. Not to mention removing the learning curve; all the browser-based devices would use Bitmovin’s Web SDK and, therefore, a single set of APIs, whereas an engineer for an in-house solution would need to have very in-depth knowledge of each platform’s unique properties and APIs. On top of all this, streaming services can leverage the expertise of Bitmovin’s solutions and integrations teams, who can offer expert advice on best practices/industry trends and even get hands-on with assisting streaming services to get to market as quickly and painlessly as possible.

Factor 3 – Annual Maintenance:

Like most consumer electronics nowadays, streaming devices see new ranges launched every year (or more often), meaning viewers are getting newer TVs, smartphones, and game consoles all the time. This creates a requirement for continuous testing on both new and existing platforms to ensure compatibility and an uninterrupted experience for all viewers across all devices. Device testing can be especially challenging with regional devices or devices with high fragmentation in hardware and software, where debugging and reproducibility can be difficult. Given the complexity of the enterprise workflow and how many moving parts there are in the delivery chain at any one time, this maintenance can be very time-consuming; our engineers estimate it at an additional 8+ person-months per year. This doesn’t include any additional feature maintenance that may come up: in the examples of low-latency and SSAI workflows, new protocols, metadata formats, and related technologies are being added to industry standards all the time, which enterprise streaming services may want or need to implement to remain competitive.

On top of the multi-platform streaming service annual maintenance, our team estimates up to an additional 12 person-months per year for enterprise services. 

This contrasts with the estimated 1 person-month per year in annual maintenance when using the Bitmovin Player, where all the device stability testing is done as part of the service, along with handling all the nuances of updated protocols and standards in the industry. That time is mainly spent reporting issues to Bitmovin and updating to the latest Player versions once the Bitmovin engineers have resolved them. Services can further save time by getting weekly stream health reports from Bitmovin’s Stream Lab, where customers can submit streams to be tested on physical devices like Smart TVs, game consoles, browsers, and STBs.

Total time saved by utilizing Bitmovin’s Playback solutions:

  • Time to Market: 16+ person-months
  • Annual Maintenance: 11+ person-months a year  

Summing up the scenarios

In conclusion, these numbers are a valuable tool for any streaming service weighing its options. They are only a guide constructed from our collective experience at Bitmovin, so they won’t apply strictly to every use case, but they can at least spark the right questions about the ROI of ‘build-or-buy’. Taking the enterprise use case as an example, the question becomes: does getting to market 16+ months sooner, and so spending 16+ more months generating ad and subscription revenue, outweigh the cost of a commercial player license?
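As a back-of-the-envelope illustration of that build-or-buy question, the trade-off can be framed as simple arithmetic. All of the cost and revenue figures below are invented for illustration; only the 16 person-months figure comes from this article, and none of this reflects actual Bitmovin pricing:

```python
# Hypothetical build-vs-buy comparison for the enterprise scenario.
# All monetary inputs are illustrative assumptions, not real figures.

def build_vs_buy(
    person_months_saved: float,       # engineering time saved by buying
    monthly_eng_cost: float,          # fully loaded cost per engineer-month
    months_earlier_to_market: float,  # how much sooner revenue starts
    monthly_revenue: float,           # expected revenue once live
    annual_license_cost: float,       # commercial player license
) -> float:
    """Return the net first-year benefit of buying over building
    (a positive result favors buying)."""
    engineering_savings = person_months_saved * monthly_eng_cost
    earlier_revenue = months_earlier_to_market * monthly_revenue
    return engineering_savings + earlier_revenue - annual_license_cost

# Using the article's 16 person-months figure with made-up costs:
net = build_vs_buy(
    person_months_saved=16,
    monthly_eng_cost=15_000,
    months_earlier_to_market=16,
    monthly_revenue=50_000,
    annual_license_cost=100_000,
)
print(f"Net first-year benefit of buying: ${net:,.0f}")  # → $940,000
```

Plugging in your own salary, revenue, and license numbers is what turns the person-month estimates in this article into an actual ROI decision.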

If you have any questions about the contents of this article, please feel free to get in touch. 


A solution that fits the best of both worlds

In addition to the cost/resource-focused rationale behind the decision to use open-source vs. commercial, the other large factor we hear in the market is the desire for flexibility. Traditionally and by nature, the open-source model allows engineers full control and flexibility over their system, while commercial solutions tend to be more closed off. Until now.

At Bitmovin, embracing the benefits of open-source, we’re resolving the flexibility aspect of a commercial player with our next Web Player SDK, Player Web X. This new Web Player benefits from a completely new architecture, which most notably has an open-source plugin system. It gives engineers the performance of the Bitmovin Player with the flexibility of open-source. You can see more about Player Web X on our dedicated Player Web X page.

The post Open-Source vs. Commercial Players: Understanding the True Cost of Ownership appeared first on Bitmovin.

Video Platforms, Video Streaming APIs, and SDKs Explained https://bitmovin.com/blog/video-streaming-apis-sdks-ovps-explained/ Fri, 22 Mar 2024 14:07:25 +0000
Video is integral to digital experiences. Whether end-users are scrolling through social media, binging content on their connected TV, or sweating it out to an online fitness class, streaming now plays a central role in driving online engagement. 

But building video into products and services is tough. Businesses need robust streaming infrastructure to store, encode, manage, deliver, and analyze video content. Plus, most dev teams have expertise in their company’s core competency rather than back-end video technology. 

That’s where online video platforms (OVPs), video streaming application programming interfaces (APIs), and software development kits (SDKs) come into play.

Think of OVPs as all-in-one solutions. They offer a comprehensive suite of tools to manage your entire video workflow, from ingestion to analytics. These are ideal for businesses needing a user-friendly platform with minimal development effort. But if your requirements go beyond simply uploading and sharing video content, OVPs may be a poor fit.

APIs, on the other hand, provide granular control. They act as messengers, allowing you to integrate specific video functionalities like playback, encoding, or analytics into your existing applications. APIs are perfect for developers seeking the flexibility to develop advanced applications, without having to start from scratch.

Finally, SDKs are pre-built toolkits designed for integrating specific video features into mobile and web applications. They save development time by offering all the building blocks for a specialized language or task, like deploying a player on Roku, which requires its own native implementation.

SDKs are often used in conjunction with APIs and OVPs. For this reason, it’s not always a question of OVP vs. API vs. SDK, but rather which combination of technologies is best for your business.

Acronyms abound in the alphabet soup that is video streaming. But don’t worry. In this guide to OVPs, APIs, and SDKs, we define each term and explore which option is best depending on your use case. From there, we recommend the top products in each category for business leaders and software developers alike.

Technical requirements for deploying online video

Before getting into it, let’s nail down the capabilities and features needed to integrate video into your product and look at how OVPs, APIs, and SDKs support these requirements.

Encoding and transcoding 

Encoding and transcoding are often used interchangeably, but they refer to two distinct steps. Encoding involves converting RAW video into a compressed digital format directly after the video source is captured, while transcoding employs a digital-to-digital conversion process to prepare and optimize video content for distribution to end users.

Live Encoder Workflow

Most online video content has been both encoded and transcoded before it reaches viewers. These processes are what make it possible to deliver bulky video over the internet and ensure smooth playback across a variety of devices. 

Transcoding is a critical capability that’s supported by all major OVPs and APIs. What differs, though, is how advanced and flexible different platforms’ transcoding features are. Most OVPs take a one-size-fits-all approach. This means the video bitrate, frame rate, and other technical parameters are predefined and all streams are prepared in the same way.

APIs, however, offer more control over transcoding configurations without having to access a dashboard. This allows developers to configure encoding settings and use a variety of protocols and codecs. The process of uploading videos is also automated with APIs, whereas OVPs generally require manual uploads through the interface. Finally, some encoding solutions offer per-title encoding/transcoding capabilities. 

With per-title encoding, the settings are customized to each video. We designed the Bitmovin Per-Title Encoding solution to automatically analyze the complexity of every file and create the ideal adaptive bitrate (ABR) ladder depending on the content. This ensures high-quality viewing experiences and efficient data usage by creating dynamic bitrate ladders on a case-by-case basis. The player can then select from multiple bitrates based on network and computing resources available. 
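To make the per-title idea concrete, here is a minimal sketch of the logic: scale a reference bitrate ladder up or down based on a content-complexity score. The scoring scale, multiplier range, and ladder values are invented for illustration and are not Bitmovin’s actual algorithm:

```python
# Illustrative per-title ladder selection: scale a reference bitrate
# ladder by a content-complexity score in [0, 1], where 0 might be a
# static slide deck and 1 high-motion sport. Not a real encoder algorithm.

REFERENCE_LADDER = [  # (height, bitrate_kbps) for "average" content
    (1080, 4500),
    (720, 2800),
    (480, 1400),
    (360, 800),
]

def per_title_ladder(complexity: float) -> list[tuple[int, int]]:
    """Return an ABR ladder with bitrates scaled to content complexity."""
    # Map complexity 0..1 to a 0.6x..1.4x bitrate multiplier.
    factor = 0.6 + 0.8 * max(0.0, min(1.0, complexity))
    return [(height, round(kbps * factor)) for height, kbps in REFERENCE_LADDER]

# A simple animation needs far less bitrate than a live match:
print(per_title_ladder(0.1))  # low complexity -> lower bitrates
print(per_title_ladder(0.9))  # high complexity -> higher bitrates
```

The point is that two ladders for the same resolutions can differ substantially in bitrate, which is where the bandwidth and storage savings of per-title encoding come from.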

Additionally, you can deploy Bitmovin’s VOD and Live encoders on your own infrastructure within any major cloud provider using Bitmovin’s Cloud Connect feature, which helps maintain the highest cost efficiency, or use Bitmovin’s infrastructure through its managed service.

➡ Read our Video Encoding Streaming Technology Guide to learn more.

Storage

Video accounts for the majority of the internet’s traffic. As such, it’s no surprise that CDN and storage bills make up the biggest operating expenses for OTT providers. The best way to minimize these costs is through technologies like per-title encoding, so you’ll want to consider how different components of your workflow impact one another when evaluating OVPs and APIs.

A Forrester study found that Bitmovin customers running their encoding in the cloud saw a 355% ROI over a three-year period.

Other factors to think about that impact storage costs include the anticipated volume, geographic distribution, and integration efforts. Many OVPs offer built-in storage solutions as part of their platform to simplify management. This provides a centralized storage system within the platform, but it’s difficult to tailor it to your specific storage requirements.

Streaming video APIs offer a more customizable approach to storage, including the ability to integrate with popular cloud storage providers like AWS, Microsoft Azure, and Google Cloud. This means developers can adapt the approach based on their scalability and geographic redundancy needs, and also optimize storage costs based on their existing workflows.

Distribution

Video delivery comes next, made possible by Content Delivery Networks (CDNs) from providers like AWS, Akamai, Microsoft Azure, and Google Cloud. These networks of interconnected servers ensure efficient video distribution across the world. 

Most OVPs have multiple CDNs built-in, whereas APIs often give users the flexibility to deliver streams on their own CDN. With Bitmovin, you can do either, ensuring both customization and easy workflow configuration. 

Playback

Video players are essential components of streaming platforms, giving viewers control over what they watch, which devices they watch it on, and when the content plays. Players also tie everything together, making player control critical to the workflow.

HTML5 players can be built from scratch using an open-source option or deployed and customized using a solution like the Bitmovin Player. The same goes for deploying native players for iOS and Android. Going with a pre-built option provides access to advanced features like adaptive bitrate playback, DRM support, monetization capabilities, and interactive playback controls. 

Software development kits (SDKs) also play a major role in streamlining support for a range of devices and mobile applications by providing platform-specific integration tools. This helps organizations scale their solution and ensure a high-quality viewing experience for their audience without requiring significant development time. 

OVPs always have integrated video players as part of their platform, but they may lack the flexibility and customization required for branding or integrating unique playback features.

➡ Read our Ultimate Guide to Video Players to learn more.

Analytics

Even the most straightforward streaming workflows have hiccups. As such, insight into video performance and quality of experience is a must. Organizations need the ability to pinpoint issues before they impact their audience, gain actionable insight into viewer behavior, and optimize resource utilization with visibility across the video streaming pipeline.

OVPs typically provide basic metrics like views, watch time, and completion rate. Some take this even further with heatmaps and click-through rates. For deeper insight, though, APIs are the way to go. 

With API access, you can gain insights into a wider range of data points, including:

  • Error tracking
  • Stream performance
  • Advertising metrics
  • Viewer demographics
  • And more.

With Bitmovin’s Analytics, organizations can actively track more than 200 data points in real time and see how their streams compare to industry benchmarks. They can view performance within the Bitmovin Dashboard or utilize the Analytics API to get more granular insights which can then be pushed to major data aggregator platforms, such as Grafana, Looker Studio, AWS S3, and others for a more holistic view.
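To illustrate the kind of aggregation an analytics API enables once raw data is exported, here is a small sketch computing two common quality-of-experience metrics from per-session records. The field names and sample values are invented for this example, not Bitmovin’s actual schema:

```python
# Hypothetical raw analytics records, one per playback session.
sessions = [
    {"watch_ms": 120_000, "buffering_ms": 1_200, "errors": 0},
    {"watch_ms": 45_000, "buffering_ms": 0, "errors": 1},
    {"watch_ms": 300_000, "buffering_ms": 4_500, "errors": 0},
]

def rebuffering_ratio(records: list[dict]) -> float:
    """Total time spent buffering divided by total watch time."""
    watched = sum(r["watch_ms"] for r in records)
    buffered = sum(r["buffering_ms"] for r in records)
    return buffered / watched

def error_rate(records: list[dict]) -> float:
    """Share of sessions that hit at least one playback error."""
    return sum(1 for r in records if r["errors"] > 0) / len(records)

print(f"Rebuffering ratio: {rebuffering_ratio(sessions):.2%}")
print(f"Error rate: {error_rate(sessions):.0%}")
```

In practice these aggregations would run over exported datasets in a warehouse or dashboard tool rather than in application code, but the shape of the computation is the same.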

Online Video Platforms (OVPs)

Now that we’ve explored the primary requirements of video streaming — encoding and transcoding, storage, distribution, playback, and analytics — let’s dive into online video platforms (OVPs) and the best options for businesses.  

What is an OVP?

Online video platforms, or OVPs, are the prefabricated homes of video streaming. They act as turnkey solutions for managing, distributing, and monetizing online video content — eliminating the need for technical expertise or third-party integrations.

With an OVP, you get it all. The content management system (CMS), HTML5 video player for web-based devices, native players for mobile experiences (sometimes), and monetization tools are built in. This is great for businesses that want an effortless solution, but customization can be limited. It’s difficult to tailor OVPs to unique business models or existing workflows. As such, OVPs are better suited for building your business’s online presence across a dedicated channel, simple video workflows that don’t need to be fine-tuned, and hosting small content libraries on your website. 

OVP benefits

OVPs act as an all-in-one streaming platform for businesses with limited developer expertise and straightforward requirements. The benefits include:

  • Turnkey solution: If you’re looking to host an online streaming event, embed content on your website, or use video for employee communications, OVPs are the quickest way to get started.
  • Low cost of entry: OVPs are affordable and sometimes even free. They are also a great way to test interest among your user base before investing developer resources into building out a more comprehensive solution.

OVP cons

Because OVPs are designed for simple streaming workflows, businesses are limited to the tools and capabilities built into these platforms. This means that if you’re trying to build something specific, like an esports platform or fitness app, you’d be better off with an API. 

  • Limited functionality: Advanced features and specific functionalities like low-latency streaming, VR & 360, and ad insertion are often missing from OVPs.
  • Lacking control: Because OVPs control every step of the streaming workflow — including the encoding technology, CDN, and player — businesses using OVPs don’t have the same control over their infrastructure. 
  • Missing insight: OVPs offer basic analytics capabilities. However, businesses requiring detailed insight into viewer behavior and stream performance would be better off with a streaming analytics API.

What to look for in an OVP

If an OVP makes sense for your business or video project, you’ll want to evaluate the following aspects of selecting a vendor:

  1. Ease of use: Convenience is the name of the game with OVPs, so you’ll want to pick something with an intuitive user interface (UI). The goal is to streamline tasks like uploading, managing, and distributing video content for non-technical users.
  2. Feature set: Essential features like uploading and sharing content, embedding video on your website, and the ability to stream live content should be table stakes when comparing providers. From there, in-depth analytics, the ability to customize the viewing experiences, and advanced encoding capabilities help set some OVPs apart.
  3. Scalability and flexibility: If you’re planning for growth, you’ll want to choose an OVP that can scale with your business. Some OVPs offer APIs to accommodate future expansion, eliminating the need to migrate to a more flexible solution. 
  4. Reliability and performance: Assess the uptime guarantees, server stability, and service level agreements of each option. Additionally, look for features like adaptive bitrate streaming, integration across multiple CDNs, and global delivery capabilities to ensure smooth playback for viewers worldwide.
  5. Security measures: Content protection is key for use cases like corporate communications or streaming premium content to subscribed viewers. Encryption, access controls, and digital rights management (DRM) all help to this end.
  6. Customer support: Assess the level of customer support provided by the OVP vendor, including tutorials, technical assistance, and dedicated account management. Look for a vendor that offers responsive support channels and comprehensive resources to help you maximize the capabilities of the platform.

Best OVPs for businesses

You’re likely familiar with leading brands in the OVP space. YouTube, Vimeo, and Bitmovin’s Streams are three popular examples. Here’s a look at how they compare. 

YouTube

YouTube OVP

As one of the most recognizable names in online video, YouTube needs no introduction. It allows users to upload, view, share, and comment on videos within its platform. Businesses can also use YouTube to embed videos on their websites, but YouTube branding and advertisements make this a less-than-ideal application. Monetization options are also restrictive, as is content security. 

Most businesses use YouTube as a way to build their online presence rather than support their video infrastructure. For that reason, it’s often categorized as a social media channel rather than as an OVP. 

Marketers and businesses shouldn’t sleep on YouTube. However, creating video-powered products and services requires taking a different approach.

Vimeo

Vimeo OVP

Vimeo offers tools for making, managing, and sharing videos. The platform lets businesses and creators host virtual events, webinars, and other marketing-focused experiences. It also provides the functionality to live stream to multiple social channels and websites simultaneously.

Vimeo offers limited monetization tools and often drives traffic to vimeo.com rather than your business’s website. That said, the platform delivers ad-free experiences and more end-user customization options than YouTube.

Marketers looking for a simple way to embed video on their website and live stream across multiple platforms may want to give Vimeo a try. But if you’re serious about building native video experiences that live entirely on your owned digital properties, you’ll want a more business-oriented OVP like Bitmovin’s Streams. 

Bitmovin’s Streams

Bitmovin Streams video streaming api

Bitmovin’s Streams helps simplify streaming for businesses, serving as a single platform for live and on-demand encoding, CDN delivery, video playback, analytics, security, and more. As an end-to-end solution that’s built for the cloud, it eliminates the complexity of building your streaming infrastructure in-house. 

Features supported by Streams include:

  • Flexible video uploading and encoding for on-demand content
  • Live streaming and transcoding
  • Drag-and-drop Player customization
  • Simple sharing and easy-to-use embedding
  • In-depth analytics
  • WordPress plugin for quick integration
  • Content protection with Signed URLs and domain restrictions

Streams also has a simple API for organizations looking for greater control, which brings us to the next section.


“Streams is one of our most important launches to date because it helps new media companies deliver high-quality streams to audiences simply and efficiently. New media companies typically have smaller developer teams that don’t have the time and capacity to get familiar with the complexities of video streaming. Therefore, there is a clear market need for a straightforward, low- or no-code solution like Streams that removes the complexity of video streaming to deliver content at speed and scale.

Demand for video streaming has grown at an incredible rate in recent years, all of which has been underpinned by extraordinary technological advancements. However, there now needs to be a greater focus on making innovations work in a simpler, more user-friendly way so video streaming can truly become ubiquitous, to enable everyone to build video products on the same level of quality and experience as the big names like Netflix.”

– Stefan Lederer (CEO, Bitmovin)

Video Streaming APIs

APIs, or application programming interfaces, are essential tools in every developer’s toolkit. They provide the flexibility to develop advanced apps while hiding the complexity behind the scenes. Here’s a look at the role they play in the world of audio and video streaming.

What is a video streaming API?

Video streaming APIs connect developers to streaming platforms like Bitmovin using code. Unlike traditional user interfaces (UI) found on video platforms like YouTube, APIs offer programmatic access to a wide range of features and capabilities, empowering developers to build customized streaming experiences tailored to their specific needs.

Without video APIs, businesses looking to deploy unique and innovative video applications would have to start from scratch. In this way, APIs speed things up significantly. Many developers elect to use video APIs to support a wide range of functionality for creating, customizing, and controlling video workflows. 

Some platforms that offer APIs can also be managed via a no-code UI. This is a great middle ground. While the UI might not provide the same level of control and customization, API access is available should the business need it. 

APIs impose almost no limits on the external services and functionality that you can integrate into your application while speeding up development through access to core services like encoding and playback.  

How do video APIs work?

Video APIs act as intermediaries, facilitating communication between the developer’s application and the underlying streaming infrastructure. In doing so, APIs hide the intricacies of online video distribution, letting developers focus on the products they’re building. 

Here’s what takes place behind the scenes when using a video streaming API.

  1. Establishing communication: Video APIs create channels for developers to interact with the underlying video streaming platform. These channels typically operate over HTTP or HTTPS protocols, allowing for secure data transmission.
  2. Authentication and authorization: Before accessing the functionalities offered by the streaming video API, developers need to authenticate themselves and obtain appropriate authorization. This is often achieved through the issuance of API keys or tokens, which verify the identity of the requesting user.
  3. Requesting services and data: From there, developers can use video APIs to request various services and data from the streaming platform. This may include tasks such as uploading video content, initiating encoding or transcoding processes, retrieving playback URLs, or fetching analytics metrics.
  4. Processing requests: The video API then processes these requests by interfacing with the backend infrastructure of the streaming platform. This involves executing the requested operations, such as encoding/transcoding video files into multiple formats, storing content in designated locations, or generating playback manifests.
  5. Handling responses: After processing requests, video APIs generate responses containing the results of the requested operations. These responses are returned to the developers in a standard data format like JSON or XML. 
  6. Monitoring and management: Video APIs often include functionalities for monitoring and managing video assets and workflows. This may involve querying the status of ongoing encoding jobs, adjusting playback settings dynamically, or accessing real-time analytics data to gain insights into viewer behavior.
  7. Ensuring reliability and performance: Video APIs prioritize reliability and performance to ensure smooth and uninterrupted video streaming experiences. Mechanisms for fault tolerance, load balancing, and adaptive bitrate delivery help handle varying levels of demand and end-user bandwidth and mitigate potential disruptions.
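The first three steps above, establishing a channel, authenticating, and requesting a service, can be sketched as follows. The endpoint URL, header name, and request payload are hypothetical stand-ins, not the actual Bitmovin API:

```python
import json
import urllib.request

API_KEY = "my-secret-key"  # issued by the platform (step 2)
BASE_URL = "https://api.example-streaming.com/v1"  # hypothetical endpoint

def build_encoding_request(source_url: str) -> urllib.request.Request:
    """Build an authenticated HTTPS request to start an encoding job."""
    payload = json.dumps({"input": source_url, "format": "hls"}).encode()
    return urllib.request.Request(
        f"{BASE_URL}/encodings",           # step 1: HTTPS channel
        data=payload,                      # step 3: requested service
        headers={
            "X-Api-Key": API_KEY,          # step 2: authentication
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_encoding_request("s3://my-bucket/master.mp4")
print(req.full_url, req.get_method())
# Sending the request (urllib.request.urlopen) would return the JSON
# response of step 5, e.g. a job id and its current status.
```

Real APIs layer official SDKs over this raw HTTP exchange, but every call ultimately reduces to an authenticated request and a structured response like this.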

Types of video streaming APIs

Streaming APIs are often broken out by the specific capabilities they support. As such, you may hear references to more nuanced services like a live video streaming API or video analytics API. The names are self-explanatory, but let’s touch on how they compare.

VOD encoding APIs

Video-on-demand (VOD) encoding APIs take source files and convert them into adaptive streaming formats like MPEG-DASH and HLS for adaptive bitrate delivery. They also create thumbnails, subtitles, and other metadata. But that’s not all. When using Bitmovin’s encoder, you benefit from per-title encoding capabilities, multi-codec streaming, and HDR support as well.

Capabilities to look for in a VOD encoding API:

  • Multi-codec and format support
  • Adaptive bitrate delivery (ABR)
  • Per-title (also called content-aware) encoding
  • Thumbnail and metadata generation
  • Cloud-based processing
  • Advanced features like DRM protection and ad insertion
  • Integrations with your existing cloud storage, CMS, or analytics platforms
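A VOD encoding request typically expresses the capabilities above as a declarative job specification. The schema below is invented for illustration and is not a real API contract:

```python
# Illustrative VOD encoding job specification exercising the
# capabilities listed above. The schema is made up for this example.
job_spec = {
    "input": "s3://my-bucket/source.mov",
    "codecs": ["h264", "h265", "av1"],     # multi-codec support
    "per_title": True,                     # content-aware ladder
    "outputs": ["hls", "dash"],            # ABR streaming formats
    "thumbnails": {"interval_s": 10},      # thumbnail generation
    "drm": {"system": "cenc"},             # content protection
}

def validate_job(spec: dict) -> list[str]:
    """Return a list of problems; an empty list means the spec looks complete."""
    problems = []
    if not spec.get("input"):
        problems.append("missing input")
    if not spec.get("codecs"):
        problems.append("at least one codec required")
    if not set(spec.get("outputs", [])) & {"hls", "dash"}:
        problems.append("need an adaptive streaming output")
    return problems

print(validate_job(job_spec))  # → []
```

Validating job specifications client-side like this catches configuration mistakes before they cost an (often billable) encoding run.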

Live streaming APIs

Broadcasting live video online is no simple task. Unlike video-on-demand (VOD) encoding APIs, which focus on processing pre-recorded content, live streaming APIs facilitate the real-time transmission of video content to viewers as it happens. These workflows often use a contribution encoder like OBS or Videon EdgeCaster, as well as a live streaming API like Bitmovin’s.

Whether broadcasting live events, webinars, or gaming streams, these APIs empower developers to deliver high-quality live video content reliably and efficiently. To keep lag at a minimum, you’ll want to find a live streaming API with support for low-latency protocols like RTMP or SRT.

Capabilities to look for in a live streaming API:

  • Low-latency protocols like RTMP, SRT, and WebRTC
  • Support for primary and backup inputs with automatic failovers
  • Integration with popular contribution encoders like OBS, Wirecast, etc.
  • Integration with interactivity tools like chat and polling
  • Security and DRM
  • Live recording and archiving
  • Analytics and insights

“Bitmovin’s flexible and customizable technology has enabled us to solve one of our unique broadcasting challenges: to seamlessly generate a vast amount of parallel live video feeds and present them to the user in the highest quality, wherever they are in the world. 

Znipe.TV’s unique technology of broadcasting time-synchronized video stream of multiple angles sets new demand on a transcoder service, which Bitmovin delivers with their fantastic technical roadmap. To achieve the unique Znipe.TV viewing experience, we chose Bitmovin’s encoding to handle the video transcoding so that we can focus on what we do best, providing world-class entertainment for fans globally, live and on demand.”

– Erik Åkerfeldt (CEO & Co-founder, Znipe.TV)

Playback APIs

Playback APIs, also called client-side video APIs, allow developers to interact with a video player’s core functionality. This includes creating video player instances, controlling playback, or loading new sources. A video player API can also be used to monitor the state of a video player and receive notifications when certain playback events occur.

While some video player APIs differ across platforms, we designed the Bitmovin Player APIs to provide developers with a unified development experience across Web/HTML5, Android, iOS, and Roku.
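
To make the idea of "monitoring player state and reacting to playback events" concrete, here is a minimal, player-agnostic sketch. The event names follow the standard HTMLMediaElement events; the `player` object and its `on(event, handler)` method are hypothetical stand-ins for whichever player API you use, so the wiring below uses a stub.

```javascript
// Sketch only: a tiny monitor that records playback events from any player
// exposing an on(event, handler) subscription method (hypothetical here).
function createPlaybackMonitor() {
  const counts = {};
  return {
    record(eventName) {
      counts[eventName] = (counts[eventName] || 0) + 1;
    },
    attach(player, events) {
      // Subscribe to each event and count its occurrences.
      for (const name of events) {
        player.on(name, () => this.record(name));
      }
    },
    count(eventName) {
      return counts[eventName] || 0;
    },
  };
}

// Stub player that simply stores the registered handlers.
const handlers = {};
const stubPlayer = { on: (name, fn) => { handlers[name] = fn; } };

const monitor = createPlaybackMonitor();
monitor.attach(stubPlayer, ['play', 'pause', 'error']);
handlers.play();
handlers.pause();
handlers.play();
console.log(monitor.count('play')); // 2
```

The same pattern scales from simple play/pause tracking to feeding a full analytics pipeline, which is exactly what the analytics collectors described below do under the hood.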

Capabilities to look for in a player API:

  • Cross-platform compatibility and SDKs for different devices
  • Customizable video player UI
  • Advanced playback features like subtitles and playback speed control
  • Adaptive bitrate support
  • Offline playback support
  • Integration with analytics platforms
  • Security features
  • Testing solutions to ensure quality playback

Analytics APIs

Video analytics APIs provide extensive customization over data architecture and how it’s presented. The Bitmovin Analytics API, for instance, allows developers to export raw datasets to cloud storage buckets and further enrich their insight with information collected by other providers. 

With analytics APIs, developers gain access to real-time monitoring and reporting capabilities. Whether the goal is to detect playback errors, identify trends, or monitor audience engagement during live events, these APIs enable timely decision-making and proactive intervention should any hiccups occur.

Capabilities to look for in an analytics API:

  • Data capture across an extensive range of data points
  • Real-time insights and reporting
  • Data customization and export
  • Integration with existing analytics platforms

Considerations when comparing video APIs

We’ve covered the capabilities needed for specific parts of the workflow, but what about general considerations that apply to all streaming APIs? Here’s a look at key considerations.

Flexible deployment

Development teams shouldn't be tied to specific hardware or cloud services. As such, you'll want to look for video encoding and playback APIs that are decoupled from any underlying technology. Finding video infrastructure solutions that can be deployed anywhere prevents vendor lock-in and boosts agility.

If you’re already running applications in the cloud, then finding products that can run on your existing resources often makes sense. You’ll also gain more control over costs and commitments by finding a video streaming solution that’s available on your existing cloud providers’ marketplaces.

Bitmovin’s solutions are available on AWS Marketplace, Azure Marketplace, and Google Cloud Marketplace. We also offer the flexibility to utilize your pool of resources on Google Cloud or use our solution on Akamai Cloud.

Comprehensive functionality

Video streaming workflows have a lot of moving parts. So we’d suggest finding a video API that offers coverage across every step, from encoding to playback to analytics. It’s also imperative to assess your specific needs — such as low-latency streaming, ad insertion, and advanced UI styling — before landing on a vendor. 

➡ Check out our extensive library of interactive demos and API examples for a peek at the functionality our platform supports.

Customization

One of the main benefits of going with a video API over an OVP is the extensive customization it will allow. This is especially important at customer touchpoints like the video player. The ability to adjust the appearance and add interactive elements to the player UI will help deliver the differentiated experience you’re aiming to build. 

That said, when speed-to-market is a priority, you don’t want to start from scratch. Finding a player API that can be tweaked without having to build the entire interface is a great middle ground.

Integration effort

The ease of integrating video capabilities into your workflow will impact your development timeline and the associated costs. Teams looking to get their services to market ASAP may be better off with a turnkey OVP than a video API. But, if the flexibility of a video API is non-negotiable, then you’ll want to find a solution with extensive developer tools. Launching cross-platform video experiences is already complex, which is why builder-centric resources are worth their weight in gold. 

Any vendor worth your investment should offer:

  • Documentation
  • Support for popular programming languages
  • Development guides
  • Code samples
  • Community forums
  • 24/7 technical support and SLAs
  • Automated testing solutions
  • Dedicated software developer kits (SDKs)

OVPs vs. streaming APIs

To wrap up the last two sections, here’s a table summarizing the key differences between OVPs and APIs:

  • What it is: An OVP is an all-in-one video solution with predefined workflows for organizations lacking technical expertise; an API is a developer-centric building block for unique video streaming platforms.
  • Technical name: Online video platform vs. application programming interface.
  • Ease of use: OVPs offer a user-friendly interface with drag-and-drop functionality; APIs require development expertise.
  • Control and flexibility: OVPs have limited customization options and predefined workflows; APIs give granular control over every aspect of video delivery.
  • Features offered: OVPs include built-in features like encoding, storage, CDN, players, and analytics; APIs often focus on specific functionalities, though some video platforms offer API coverage across the workflow.
  • Integration: OVPs have limited integration options with external tools; APIs integrate flexibly with various services and workflows.
  • Development effort: OVPs require no coding; APIs require developers to build custom integrations.
  • Learning curve: OVPs are quick and easy to learn with an intuitive UI; APIs have a steeper learning curve due to technical requirements.
  • Cost effectiveness: OVPs are cost-effective for basic needs; APIs can yield cost savings through integration with your existing tech stack.
  • Best for: OVPs suit businesses with basic video needs and limited developer resources; APIs suit developers and businesses seeking advanced customization and unique features.

Video streaming SDKs

An SDK is a set of software-building resources tailored to a specific platform (like Roku) or scripting language (like Python). 

What is an SDK?

An SDK, or software development kit, is a set of developer-centric tools designed for a specific hardware, operating system, or language. These pre-packaged kits are made up of libraries, APIs, documentation, and code samples — essentially everything required to make a developer’s life easier.

What is a streaming SDK?

Video streaming SDKs help businesses accomplish specific tasks, such as mobile encoding or video playback on gaming consoles. While SDKs aren’t required to support this functionality, they abstract away much of the complexity and provide developers with specialized tools catered to a need.

Common video SDK Examples

Popular streaming SDKs are tailored to address the unique requirements of different platforms and use cases. Here are some of the SDKs video engineers rely on.

Encoding SDKs

Say you’re looking to build a user-generated content (UGC) mobile app that enables users to stream live video within your platform. For this, you’d need a mobile encoding SDK to convert the raw video files into a compressed streaming format for transport over the internet. 

Technologies like Streamaxia OpenSDK and the Larix Broadcaster SDK support these capabilities by encoding live content into contribution protocols like RTMP, SRT, and RTSP.

Alternatively, imagine you’re building a fitness broadcast platform like Classpass that lets users stream on-demand workout videos. Integrating the Bitmovin API directly into your platform would ensure high-quality playback for viewers and cost-effective storage for your organization. However, implementing this into your existing technology stack could require writing and maintaining code specific to the API’s structure and functionalities. 

With encoding SDKs catering to specific languages like Java, Ruby, and Python, developers can rely on pre-written methods for interacting with the Bitmovin API, significantly speeding up time to market.

Bitmovin offers dedicated SDKs for the following programming languages:

Video player SDKs

Consider all the devices that we stream content on today. End users demand the same experience across mobile screens, web browsers, and smart TVs. All of these environments have different requirements, though, which translates to countless hours of development time. 

Using a player SDK catered to each device alleviates this challenge. Video player SDKs make it easy to deploy your solution everywhere viewers are tuning in while ensuring flawless playback across screens. 

They do so by providing the app development tools required to embed video players into specific devices, making it simple for developers to create, control, and monitor the video player experience.

Bitmovin offers dedicated SDKs for the many devices out there, including:

Did you know…

Deploying Bitmovin’s Player on 2 or more SDKs enables customers to reach an additional 200,000 viewers monthly. Moreover, utilizing it on both iOS and Android platforms can save over 600 hours in player maintenance annually.

Learn more.

Streaming APIs vs. video SDKs

In most cases, developers use APIs and SDKs in tandem. So when considering video encoding, player, and analytics solutions, you’ll want to find developer-centric partners like Bitmovin that provide robust APIs and SDKs, extensive documentation, Github repositories, and community forums to speed up buildout.

Here’s a summary of how APIs and SDKs compare in terms of required expertise, development effort, and customization.

  • What it is: Streaming APIs are programming interfaces that interact with video streaming services, offering specific functionalities like encoding, transcoding, playback, and DRM; video SDKs are pre-built software development kits that provide ready-to-use components like players, recording tools, and live encoding capabilities for mobile and web applications.
  • Technical name: Application programming interface vs. software development kit.
  • Ease of use: Both require development expertise.
  • Control and flexibility: APIs offer a high level of control over specific functionalities; SDKs offer less control due to their focus on pre-built components.
  • Customization: APIs are highly customizable through API parameters and integrations; SDKs allow limited customization within their functionalities.
  • Integration: APIs integrate flexibly with various services and workflows; SDKs offer limited integration options within their functionalities.
  • Development effort: APIs often require more effort for developers to implement; SDKs reduce overall dev effort by providing abstractions and pre-built solutions.
  • Best for: APIs suit businesses with developer expertise seeking fine-grained control, advanced features, and unique integrations; SDKs suit businesses with moderate developer resources that require basic functionalities and faster integration.

Conclusion

If you’re looking to add video to your service or application, you’re going to need an OVP, API, SDK, or a combination of all three. 

Here at Bitmovin, we use YouTube as a marketing channel and the Streams UI as a tool to quickly go live and share on-demand content on our website. These OVPs are great for tasks like uploading and sharing video content.

Companies looking to build innovative video platforms need more features than an OVP can provide. There’s always the option to develop bespoke solutions in-house, but it can get expensive. Plus, time to market matters. And by selecting ready-to-use streaming solutions that integrate with your existing ecosystem, businesses can speed things up.  

APIs and SDKs provide the perfect middle ground of speed and customization. That’s why we offer developer-centric video infrastructure solutions backed by API coverage across the video workflow.  Our extensive library of streaming APIs, VOD and Live Encoders, Player SDKs, and real-time Analytics simplifies building and optimizing without constraints.

Whether you need an end-to-end video platform backed by a simple API or a combination of components (such as an HTML5 player, cloud-based encoding, live encoding, or video analytics solution), we provide the development solution required to power the future of online video. 

Find out how Bitmovin’s streaming products, APIs, and SDKs can give you a competitive edge. Start your trial today.

The post Video Platforms, Video Streaming APIs, and SDKs Explained appeared first on Bitmovin.

How to Create a FAST Channel in Minutes with Eyevinn Technology https://bitmovin.com/blog/eyevinn-fast-channels/ https://bitmovin.com/blog/eyevinn-fast-channels/#respond Fri, 09 Feb 2024 15:27:52 +0000 https://bitmovin.com/?p=276625 With the rise of open-source technologies and cloud platforms, launching your own FAST channel is more accessible than ever. The Open Source Cloud, with its array of tools and services, offers a comprehensive environment to deploy a FAST Channel Engine. This guest post from Eyevinn Technology guides you through the process of setting up a FAST channel using the FAST Channel Engine within the Open Source Cloud with content preparation using Bitmovin Streams.

The post How to Create a FAST Channel in Minutes with Eyevinn Technology appeared first on Bitmovin.


This guest post was contributed by Magnus Svensson, Media Solution Specialist at Eyevinn Technology.

Creating Free Ad-Supported Streaming TV (FAST) channels is becoming increasingly popular among content creators and broadcasters aiming to reach a wider audience without the need for a subscription model. 

With the rise of open-source technologies and cloud platforms, launching your own FAST channel is more accessible than ever. The Open Source Cloud, with its array of tools and services, offers a comprehensive environment to deploy a FAST Channel Engine. This article guides you through the process of setting up a FAST channel using the FAST Channel Engine within the Open Source Cloud with content preparation using Bitmovin Streams.

- Bitmovin

The base for the virtual channel is transcoded and packaged HLS VOD assets stored on an origin. The advantage of virtual channels is that you only prepare and encode the content once. In this example, we will use Bitmovin Streams to prepare the VOD assets.

Transcoding and packaging

By following these steps, you can prepare your videos for streaming, ensuring they are accessible and perform well across all devices and bandwidth conditions. 

Open a web browser and go to the Bitmovin Streams Dashboard. If you’re new, you’ll need to sign up. If you already have an account, just log in. Select the video files on your computer that you want to transcode. Simply drag the video file from where it’s located on your computer and drop it into the Bitmovin Streams interface.

Streams upload

After dropping the video, it will automatically start uploading to the Bitmovin server. Depending on the size of the video and your internet speed, this may take a few moments. In this case, the default or recommended settings are used.

Bitmovin Streams will now process the video, converting it into an HLS stream with multiple quality levels to ensure smooth playback across all devices and network conditions. You can watch the progress of the transcoding process on the platform. Completion time will vary based on the video size and chosen settings.

Once transcoding is complete, Bitmovin Streams provides you with a link to your HLS playlist (the .m3u8 file) and the associated video segments. This is what you’ll use to create the virtual (FAST) channel.

Create a channel

Open your web browser, go to https://eyevinn.osaas.io, and log in using your credentials. Once logged in, locate the “Subscriptions” item in the menu on the left-hand side of your screen and click on it. This will take you to the page where you can manage and explore available services.

On the Subscriptions page, look for the card labeled “FAST Channel Engine.” This represents the service you’ll use to create your FAST channel. Next to the service title, there’s a drop-down menu symbolized by three dots. Click on this menu to reveal more options and select “Create channel.”

Enter a meaningful name for your channel. This name will help you identify it among other channels you may create. In this example the type “Playlist” is used. This option indicates that your channel will play content sequentially from a playlist you provide.

Enter the URL to your playlist in the “URL” field. A playlist is essentially a URL pointing to a text file containing a list of .m3u8 URLs, each pointing to a streamable VOD asset. Make sure your playlist is correctly formatted and accessible online.

bitmovin_demo.txt

https://streams.bitmovin.com/cme4a5bammi5alv2ilk0/manifest.m3u8

https://streams.bitmovin.com/cmbf4nfgnfcht63mpov0/manifest.m3u8

https://streams.bitmovin.com/cme4d60piu7i292cnbmg/manifest.m3u8
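
Before creating the channel, it can help to sanity-check the playlist file. The sketch below is illustrative only: it verifies each non-empty line is an absolute URL ending in .m3u8, which is the shape the playlist above uses.

```javascript
// Sketch only: validate that every non-empty line of a playlist file
// is an absolute http(s) URL ending in .m3u8.
function validatePlaylist(text) {
  const lines = text.split('\n').map((l) => l.trim()).filter((l) => l.length > 0);
  const errors = [];
  lines.forEach((line, i) => {
    if (!/^https?:\/\/\S+\.m3u8$/.test(line)) {
      errors.push(`line ${i + 1}: not an .m3u8 URL: ${line}`);
    }
  });
  return { count: lines.length, errors };
}

const playlist = `
https://streams.bitmovin.com/cme4a5bammi5alv2ilk0/manifest.m3u8
https://streams.bitmovin.com/cmbf4nfgnfcht63mpov0/manifest.m3u8
https://streams.bitmovin.com/cme4d60piu7i292cnbmg/manifest.m3u8
`;

console.log(validatePlaylist(playlist)); // { count: 3, errors: [] }
```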

After entering all necessary information, press the “create” button. The platform will now process your request and start setting up your channel based on the playlist provided. This process may take a few moments. You can monitor the progress directly on the platform.

Once your channel is successfully created, find the channel’s drop-down menu (again, symbolized by three dots). Click on it and select “Copy URL” to copy the channel URL to your clipboard.

Open a new tab in your browser or launch a web player that supports .m3u8 streaming. Paste the copied URL into the player’s input field to start streaming your channel. This step is crucial for ensuring everything is working correctly and allows you to preview your channel’s content as your audience would.

eyevinn fast

Conclusion

Creating a FAST channel using the FAST Channel Engine in the Open Source Cloud is a powerful way to reach audiences with your content. By leveraging open-source technologies and cloud infrastructure, content creators can deploy scalable, high-performance streaming channels supported by ads. 

This approach enables content distribution, allowing creators to broadcast their content globally without the need for heavy infrastructure investments.

Magnus Svensson is a Media Solution Specialist and partner at Eyevinn Technology. Eyevinn Technology is a world-leading independent consultancy specializing in video technology, video development, and sustainable streaming, and the proud organizer of the yearly Nordic conference Streaming Tech Sweden.


HTML5 Video Tags: The Ultimate Guide [2024] https://bitmovin.com/blog/html5-video-tag-guide/ Fri, 22 Dec 2023 10:23:08 +0000 https://bitmovin.com/?p=162432 Streaming video over the web or on connected TV devices are two main ways viewers watch their favorite content. To engage those audiences and provide them with the best experience, development teams typically choose to deploy an HTML5-based video player (see our latest blog on top HTML5 video players) that can be tailored to specific...

The post HTML5 Video Tags: The Ultimate Guide [2024] appeared first on Bitmovin.

Streaming video over the web and on connected TV devices are the two main ways viewers watch their favorite content. To engage those audiences and provide them with the best experience, development teams typically choose to deploy an HTML5-based video player (see our latest blog on top HTML5 video players) that can be tailored to specific device/browser requirements. To customize this <video> element, HTML5 video attributes are implemented to enable users to play and pause videos, change the language, enable subtitles, and much more. Since HTML5 is the primary way to embed video content on web-based platforms and applications, whether you’re a seasoned developer or just getting started, we have detailed everything you need to know below on video tags in HTML5.

In our definitive 2024 HTML5 Video Tags guide, we will provide you with a thorough understanding of the HTML5 video tag, covering all aspects from the basics to more advanced functionalities. This includes what the HTML5 video element and video tag are, how they work, and how they can be customized through the most used HTML5 video attributes.

What is the HTML Video Element?

The HTML Video Element refers to the code that highlights the video tag, its attributes, and the embedded video content on web pages, representing the markup, functionality, and associated behaviors. The element supports multiple video formats and offers various attributes for customization and capabilities to ensure responsiveness and accessibility. It is commonly used interchangeably with video tags; however, it encompasses each part of the HTML video element created by the video tag in the Document Object Model (DOM).

What are media tags in HTML5?

In HTML5, “media tags” refer to the elements introduced to embed and control different types of media content, such as audio files and video files, directly within HTML documents. 

An example of how this looks for video and audio:

<video controls>
   <source src="movie.mp4" type="video/mp4">
   <source src="movie.ogg" type="video/ogg">
   Your browser does not support the video tag.
</video>
<audio controls>
   <source src="audio.mp3" type="audio/mpeg">
   <source src="audio.ogg" type="audio/ogg">
   Your browser does not support the audio element.
</audio>

What is the video tag in HTML5?

Part of the media tag, the HTML5 video tag, represented by the <video> tag in HTML5, is a feature that enables playing video files. It provides native support for video playback in modern web browsers without requiring external plugins or players. Utilizing this element makes incorporating video content into websites easy, enhancing user experience and web design flexibility.

How do you embed a video in HTML5?

To embed videos in HTML, you’ll need to use the <video> tag inside the body of the HTML document to showcase the HTML elements. Here’s an example of how the code might look:

<video width="1920" height="1080">
 <source src="movie.mp4" type="video/mp4">
 <source src="movie.ogg" type="video/ogg">
 Your browser does not support the video tag.
</video>

Here’s a breakdown of the different factors within that code snippet: 

  • Open the code snippet with <video>; if you know the width and height, you can set those attributes as well.
  • Specify where the source file for the video can be accessed by using the src attribute. Here, you can add another format for the same video using the <source> tag to specify where the video can be pulled from. This is useful for cross-browser compatibility, as supported formats aren’t the same across all browsers and devices.
  • Add the fallback text “Your browser does not support the video tag” so your viewer knows the issue is not on your end but with their browser’s or device’s compatibility with HTML5.
  • Finally, close the <video> tag by placing a / before “video” to finish the code snippet.

What are the problems with the HTML5 Video Tag?

Even though the <video> tag streamlines video streaming on browsers and web-based devices, it also has some limitations that can affect playback and the user and developer experience. These issues are:

  • Browser Compatibility Issues and Codec Support
    • Different teams and companies own each browser, meaning they all have different priorities they are focusing on when it comes to supporting specific formats and codecs. This may ultimately lead to playback issues if a browser doesn’t support the format you’re using.
  • Performance issues and no Adaptive streaming
    • High-resolution videos can be very heavy on a user’s bandwidth, and depending on the browser, the video tag doesn’t natively support adaptive bitrate streaming protocols like HLS and MPEG-DASH (unless it is Safari, which natively supports HLS), so it can’t adapt to a user’s network environment. However, by implementing Media Source Extensions (MSE) alongside the <video> tag, a video player can gain the capability to adapt the video’s streaming quality to network environments dynamically. Check out our other blog on Adaptive bitrate streaming (ABR). Also, if you want to ensure your stream functions well, see how it performs by testing it on our Player demo.
  • Content protection
    • Certain content needs protection that HTML5 just can’t provide since it isn’t natively supported. A video player with a DRM integrated or other content protection capabilities is necessary to protect it.
  • No unified user interface
    • Even though every major browser supports the <video> tag for HTML5, as each browser is different, so is the experience users will have when streaming video, which can lead to frustration and abandonment.

Additional features of video players like thumbnail seeking or multi-audio are usually not supported; when it comes to subtitle support, the format a browser supports may vary and might not be what you have for your content.
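
The adaptation step that MSE-based players add on top of the bare <video> tag can be reduced to one decision: given a ladder of renditions and a throughput estimate, which quality should be fetched next? The sketch below is illustrative only; the ladder values and the 80% headroom factor are assumptions, not a specific player's algorithm.

```javascript
// Sketch only: pick the highest rendition whose bitrate fits within a
// fraction (headroom) of the measured network throughput.
function selectRendition(ladder, measuredKbps, headroom = 0.8) {
  const budget = measuredKbps * headroom;
  const sorted = [...ladder].sort((a, b) => a.bitrateKbps - b.bitrateKbps);
  let choice = sorted[0]; // never drop below the lowest rung
  for (const rendition of sorted) {
    if (rendition.bitrateKbps <= budget) choice = rendition;
  }
  return choice;
}

// Illustrative bitrate ladder.
const ladder = [
  { height: 360, bitrateKbps: 800 },
  { height: 720, bitrateKbps: 2500 },
  { height: 1080, bitrateKbps: 5000 },
];

console.log(selectRendition(ladder, 4000).height); // 720
```

Real ABR players layer buffer occupancy, variance, and switch-smoothing on top of this, but the core idea is the same.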

Which browsers support video in HTML5?

As HTML is the standard for web-based video playback, every major modern desktop and mobile browser supports HTML5 and the usage of the video tag, including:

Desktop Browsers:

  1. Google Chrome 
  2. Mozilla Firefox
  3. Apple Safari
  4. Microsoft Edge
  5. Opera
  6. Samsung Internet
  7. Vivaldi
  8. Brave

Mobile Browsers:

  1. Google Chrome for Android/iOS
  2. Safari for iOS
  3. Firefox for Android/iOS: 
  4. Samsung Internet Browser
  5. Microsoft Edge for Android/iOS
  6. Opera Mobile/Opera Mini
  7. UC Browser

Browser-based TV, STB, and gaming Console apps:

  1. Samsung Tizen
  2. LG WebOS
  3. PlayStation 4 & 5
  4. Comcast X1 STBs
  5. Sky Q STBs
  6. Xbox One Series
  7. Vizio
  8. Panasonic
  9. Hisense

What is a video format for HTML5, and which are supported by browsers?

An HTML5 video format is the file type used within a <video> tag as an option when listing a source. Unlike previous HTML versions, where external plugins like Flash were needed for video streaming, HTML5 supports video playback natively within the browser. However, depending on the browser, not all video formats are supported, so it’s best to provide a video element with multiple formats for cross-browser compatibility, or users may experience errors when streaming. Gaps in support can come down to codec royalty fees or simply limited interest from browser vendors.

An example of how this would look within the code is:

<video width="1920" height="1080">
   <source src="movie.mp4" type="video/mp4">
   <source src="movie.webm" type="video/webm">
   <source src="movie.ogg" type="video/ogg">
   Your browser does not support the video tag.
</video>

The most commonly supported HTML5 video formats across different browsers are:

  • MP4
    • The most widely supported and compatible format across major browsers.
  • WebM
    • It has less browser support but is still commonly used as an option with the <video> tag for its compression efficiency.
  • Ogg (Theora)
    • It is supported by browsers like Firefox, Chrome, and Opera but is less universally supported than MP4.
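
Given a list of candidate formats, a page can ask the browser which one it can play via the standard `canPlayType` method and attach only that source. In a real page the check would be `document.createElement('video').canPlayType`; in this sketch the function is injected so the selection logic stands alone, and the stub browser is an assumption for illustration.

```javascript
// Sketch only: pick the first listed source the browser reports it can play.
// canPlayType returns '', 'maybe', or 'probably' per the HTML spec.
function pickSource(sources, canPlayType) {
  for (const { src, type } of sources) {
    if (canPlayType(type) !== '') return src;
  }
  return null; // nothing playable; the fallback text would be shown instead
}

const sources = [
  { src: 'movie.webm', type: 'video/webm' },
  { src: 'movie.mp4', type: 'video/mp4' },
  { src: 'movie.ogg', type: 'video/ogg' },
];

// Stub mimicking a browser that only plays MP4:
const mp4Only = (type) => (type === 'video/mp4' ? 'probably' : '');
console.log(pickSource(sources, mp4Only)); // movie.mp4
```

This mirrors what the browser itself does when it walks the <source> children of a <video> element in document order.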

What is an HTML5 video error, and how does it look in Chrome?

Viewers streaming in their preferred browser may experience an issue during playback. This could range from start-up errors like a black screen and loading icon continuously spinning to constant buffering and audio and video not being supported by the browser they are using. For Google Chrome, this is no different than any other major browser when encountering video playback errors. A way to understand what is happening with your video when being streamed is by using a video analytics solution. This will help give you insight into how it is performing, and an excellent example of this is Bitmovin’s Analytics.
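
When Chrome (or any other browser) fails to play a video, the element exposes a standard MediaError code on `videoElement.error.code`. Mapping those four codes to readable messages, for example before forwarding them to an analytics collector, can be sketched as follows; the wiring comment at the end is illustrative.

```javascript
// Sketch only: the four standard MediaError codes mapped to messages.
const MEDIA_ERROR_MESSAGES = {
  1: 'MEDIA_ERR_ABORTED: playback was aborted by the user or a script',
  2: 'MEDIA_ERR_NETWORK: a network error interrupted the download',
  3: 'MEDIA_ERR_DECODE: the file is corrupt or uses unsupported features',
  4: 'MEDIA_ERR_SRC_NOT_SUPPORTED: no provided source format can be played',
};

function describeMediaError(code) {
  return MEDIA_ERROR_MESSAGES[code] || `unknown media error code: ${code}`;
}

// In a page, this would typically be wired as:
//   video.addEventListener('error', () =>
//     report(describeMediaError(video.error.code)));
console.log(describeMediaError(4));
```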

Converting video to a specific HTML5-supported format

To ensure you provide a source for a video format supported by each browser, you can convert your file to one of the major formats listed above. The downside is that you will increase the storage needed to maintain these files, and they will still just be available in one specific quality, meaning there is a chance the playback will fail altogether if the viewer’s bandwidth is limited.

To highlight the bandwidth issue and the topic of converting a video file into a supported format, this is where adaptive streaming comes into play. As mentioned earlier, if you want to enable ABR capabilities, you need to use HLS and MPEG-DASH streaming protocols, and for this, you’ll need an encoder like FFmpeg or Bitmovin’s VOD Encoder. Also, check out our complete encoding guide to learn more about encoding. 

HTML5 Video Attributes

HTML5 offers many attributes that can be set within the <video> tag and are essential for defining how the video behaves and appears on a webpage. These attributes provide significant control over how the video is played back and how it’s viewed, enhancing the user experience. However, the look and behavior of some attributes may vary across different browsers and platforms. The most used HTML5 video attributes include:

  • Width and Height
    • Define the size of the video player that will be streaming the content.
    • Code example
      • <video src="movie.mp4" width="1920" height="1080"></video>
  • Controls
    • When included, it displays the default video controls (play, pause, volume control, etc.) to the user.
    • Code example
      • <video src="movie.mp4" controls></video>
  • SRC
    • This specifies the source URL of the video file. This is a common shorthand version of using the source tags you have seen earlier, but it only allows you to specify one file format.
    • Code example
      • <video src="movie.mp4"></video>
  • Autoplay:
    • The autoplay attribute makes the video play automatically upon loading. However, browsers often block unmuted autoplay, so it is commonly paired with the muted attribute.
    • Code Example
      • <video src="movie.mp4" autoplay></video>
  • Loop:
    • If enabled, the video will start over again once finished.
    • Code example
      • <video src="movie.mp4" loop></video>
  • Muted:
    • This will mute the audio by default.
    • Code example
      • <video src="movie.mp4" autoplay muted></video>
  • Preload (none, metadata, and auto)
    • This hints to the browser how much of the video to load when the page loads. It can have the values none (don’t preload), metadata (load only the video’s duration, dimensions, etc.), and auto (preload the entire video).
    • Code example
      • <video src="movie.mp4" preload="metadata"></video>
  • Poster:
    • This specifies an image to be shown while the video downloads or until the user hits the play button.
    • Code example
      • <video src="movie.mp4" poster="thumbnail.jpg"></video>
  • Playsinline
    • This is mainly for iOS devices and indicates that the video should play on the webpage itself (inline) rather than opening in fullscreen.
    • Code example
      • <video src="movie.mp4" playsinline></video>
  • AutoPictureinPicture
    • This will direct the video to enter picture-in-picture mode in supported scenarios automatically
    • Code Example
      • <video src="movie.mp4" controls autoPictureInPicture></video>

Test out different player configurations with our Player playground to see how they fit with your use case, and learn about how you can improve your user experience further with the Bitmovin Player.

To see how this would look all together, here is an example:

<video width="1920" height="1080"
      loop
      muted
      autoplay
      poster="placeholder-image.jpg" 
      playsinline 
      preload="auto"
      controls 
      autoPictureInPicture>
   <source src="video.mp4" type="video/mp4">
   <source src="video.webm" type="video/webm">
   <source src="video.ogv" type="video/ogg">
   Your browser does not support the video tag.
</video>

Video Accessibility with HTML5

When providing content to a broad audience, you must cater to multiple types of viewers. This is where video accessibility with HTML5 is vital, as the settings used can make the content accessible to all users, including those with hearing, sight, or other disabilities. Additionally, this is very important for streaming platforms in the EU: the European Accessibility Act, which takes effect on June 28, 2025, requires every platform that streams video to make it accessible to people with disabilities.

To implement this, you’ll need to insert the <track> tag that has multiple attributes, such as:

  • Src:
    • As listed above, this specifies the source of the track file used for subtitles, captions, or other accessibility needs. It is typically a WebVTT (.vtt) file that contains the text track data.
  • Kind:
    • This defines the type of text track utilized for this source. This can be subtitles, captions, and descriptions, which are vital for people who are blind or hard of hearing or don’t understand the spoken language well. Additionally, the track tag supports chapter breakdowns and metadata for non-accessibility use.
  • Srclang:
    • This sets the language of the track text data and should be a valid BCP 47 language tag.
  • Label:
    • This provides a readable title of the text track and enables the user to select between different items within the video player interface.
  • Default:
    • This enables the track by default unless the user’s preferences indicate otherwise.

An example of how this would look in the HTML document is:

<video controls>
   <source src="movie.mp4" type="video/mp4">
   <track src="subs_en.vtt" kind="subtitles" srclang="en" label="English" default>
   <track src="subs_fr.vtt" kind="subtitles" srclang="fr" label="French">
   <track src="captions_en.vtt" kind="captions" srclang="en" label="English Captions">
</video>

Here’s a breakdown of key elements for HTML5 video accessibility:

  • Captions and Subtitles:
    • This provides text alternatives for audio content, aiding users who are deaf or hard of hearing and is usually applied by listing WebVTT (.vtt) files.
  • Descriptions:
    • These describe important visual details in the video for users who are blind or have low vision. They can be included within the main subtitles but are often delivered as a separate track.
  • Transcripts:
    • This is a text version of the audio and visual content for users who need or want a text alternative. It includes the entire dialogue and describes the key visible elements.
  • Accessible Video Player:
    • When viewing content, the controls must also be accessible, so enabling the controls attribute on the <video> element is critical.
  • Control Video Playback:
    • Allow users to control the video playback, including pausing, stopping, and adjusting volume.
  • Language Identification:
    • This will help your browser and player provide accurate language listings for captions, subtitles, or audio descriptions.
  • Fallback Content:
    • Ensure your content plays back by providing alternative content for browsers not supporting HTML5 video.

An example of how this would look is:

<!DOCTYPE html>
<html lang="en">
<head>
    <title>Accessible HTML5 Video</title>
    <style>
        /* Optional: CSS for styling captions for better visibility */
        ::cue {
            background: rgba(0, 0, 0, 0.8);
            color: white;
            font-size: 1.2em;
        }
    </style>
</head>
<body>

<video id="myVideo" controls width="640" height="360" poster="thumbnail.jpg">
    <!-- Video sources -->
    <source src="example.mp4" type="video/mp4">
    <source src="example.webm" type="video/webm">

    <!-- Captions -->
    <track label="English captions" kind="captions" srclang="en" src="captions_en.vtt" default>
    
    <!-- Subtitles -->
    <track label="Spanish subtitles" kind="subtitles" srclang="es" src="subtitles_es.vtt">

    <!-- Descriptions (if separate text track) -->
    <track label="English Descriptions" kind="descriptions" srclang="en" src="description_en.vtt">
    
    <!-- Fallback content -->
    Your browser does not support HTML5 video. Here is a <a href="example.mp4">link to the video</a> instead.
</video>

</body>
</html>
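The .vtt files referenced in the <track> tags above follow the WebVTT format. As a minimal illustration (the cue text and timings here are invented), a captions file might look like:

```
WEBVTT

00:00:01.000 --> 00:00:04.000
Welcome to the video.

00:00:05.500 --> 00:00:08.000
[upbeat music playing]
```

Each cue is a start/end timestamp pair followed by the text to display; captions also transcribe relevant sound effects, as in the second cue.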

This is how captions would look on the video player:

An example of captions on a video player

Responsive video playback: optimizing for mobile and tablet viewing

Setting up the width and height attributes of the HTML5 video element is essential for completing your video’s dimensions. However, when it comes to ensuring your video content is responsive, meaning it fits the screen size when viewed across various devices, using static dimensions can be limiting. 

For example, a video set at a fixed width might look great on a desktop, but if viewed on mobile or another device, it may be cut off (see image below).


Image of a non-responsive video

This ultimately creates a poor viewing experience for the user. To tackle this, you’ll need to leverage CSS to enhance your <video> element. For instance, setting the height to “auto” allows the video to maintain its original aspect ratio on different screen sizes, ensuring a consistent and optimal viewing experience.

Example of a responsive video in HTML

Here’s how you can make your HTML5 video responsive:

  1. Use CSS for Fluid Dimensions
    1. Apply CSS styles to the <video> element, setting the width to 100% and height to auto. This allows the video to adapt to the width of its container while preserving its aspect ratio.
  2. Contain the Video in a Div
    1. Wrap your <video> element in a <div> and set the desired dimensions on the <div>. This method gives you more control, as the video will scale proportionally within the container.

By doing this, your video will be better suited for a range of devices, from large desktop monitors to smaller mobile screens, ensuring a user-friendly experience for all viewers.
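The two approaches above can be sketched as follows (the class name and the 960px cap are illustrative choices, not requirements):

```html
<style>
  /* 2. containing <div>: caps the player size and centers it */
  .video-container {
    max-width: 960px;
    margin: 0 auto;
  }
  /* 1. fluid dimensions: fill the container's width, keep the aspect ratio */
  .video-container video {
    width: 100%;
    height: auto;
  }
</style>

<div class="video-container">
  <video src="movie.mp4" controls></video>
</div>
```

On a narrow phone screen the video shrinks with its container; on a wide monitor it stops growing at 960px instead of stretching edge to edge.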

In Conclusion

The HTML5 Video Element and tag have played a pivotal role in helping platforms stream video to viewers. With its functionality and customization capabilities through the HTML5 video attributes, you can deliver a better experience to your audience. We’ve covered the best practices for embedding video and ensuring accessibility over the web.

Whether you are just starting or are an experienced developer, the insights provided here should empower you to create more engaging, accessible, and cross-browser-compatible video content. Keep these principles in mind as you continue to explore the dynamic world of web video and multimedia.


Find out how our solutions can help you include ads within your streams by signing up for a free trial today.

The post HTML5 Video Tags: The Ultimate Guide [2024] appeared first on Bitmovin.

]]>
Encoding VR and 360 Immersive Video for Meta Quest Headsets https://bitmovin.com/blog/best-encoding-settings-meta-vr-360-headsets/ https://bitmovin.com/blog/best-encoding-settings-meta-vr-360-headsets/#respond Tue, 14 Nov 2023 07:24:23 +0000 https://bitmovin.com/?p=258046 This article was originally published in April 2023. It was updated Nov 14, 2023 with information about Quest 3 AV1 support. Whether you’re calling it Virtual Reality (VR) or 360 video or Metaverse content, there are a lot of details that should be taken into consideration in order to guarantee a good immersive experience. Things...

The post Encoding VR and 360 Immersive Video for Meta Quest Headsets appeared first on Bitmovin.

]]>
This article was originally published in April 2023. It was updated Nov 14, 2023 with information about Quest 3 AV1 support.

Whether you’re calling it Virtual Reality (VR) or 360 video or Metaverse content, there are a lot of details that should be taken into consideration in order to guarantee a good immersive experience. Things like video resolution, bitrates and codec settings all need to be set in a way that creates a high quality of experience for the viewers, while being conscious of storage and delivery costs that can come with these huge files. Although all this stuff has been widely discussed for 2D displays, like mobile phones and TVs, VR streaming differs enormously from those traditional screens, using different display technology that drastically shortens the viewing distance from eye to screen. In addition to that, VR headset specs may differ from one device to another, so the same video may produce a different visual experience depending on the model or device. In this post we are going to share the things you need to consider, along with tips and best practices for how to encode great looking VR content, specifically for playback on Meta Quest (formerly known as Oculus) headsets.

Visual quality requirements of 3D-VR vs 2D videos

Unlike traditional 2D screens, where viewers are located at a considerable distance from the screen, VR viewers are looking at a smaller screen much closer to the eyes. This drastically changes the way a video should be encoded in order to guarantee good visual quality for an immersive 3D experience. For this same reason, the traditional 2D video quality metrics such as VMAF and PSNR are not usually useful to measure the visual perception for 3-D VR content, for instance:

VMAF for 3D-VR

VMAF considers 2D viewers located at a viewing distance in the order of magnitude of the screen size, for example:

  • 4K VMAF model – vmaf_4k_v0.6.1 – assumes the viewer is located at 1.5H from the screen, where H is the height of the TV screen.
  • HD VMAF model – vmaf_v0.6.1 – assumes a viewer located at 3H from the screen.

The previous models resulted in a pixel density of about 60 pixels per degree (ppd) and 75 ppd – for 4K and HD respectively. However, when talking about VR videos, the pixel density is highly magnified, for instance, for Meta Quest 2 headsets the specs mention a pixel density of 20 ppd. Therefore, the predefined VMAF models are not suitable.  Actually, if you do use VMAF to get the visual quality (VQ) for a VR video intended for headset playback, you’ll probably find it does not look good enough even though it has a high VMAF score – this is because of the “zoom in” that Quest does in comparison to the traditional screens. 
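As a back-of-the-envelope check (the formula and function below are our own illustration, not part of the VMAF models themselves), the pixel density a flat-screen viewer perceives can be estimated from the vertical resolution and the viewing distance expressed in screen heights:

```javascript
// Estimate perceived pixels per degree (ppd) for a flat 2D screen.
// verticalPixels:          the screen's vertical resolution
// distanceInScreenHeights: viewing distance as a multiple of the screen height H
function pixelsPerDegree(verticalPixels, distanceInScreenHeights) {
  // vertical field of view in degrees: 2 * atan(H / (2 * d))
  const fovDegrees =
    2 * Math.atan(1 / (2 * distanceInScreenHeights)) * (180 / Math.PI);
  return verticalPixels / fovDegrees;
}

// 4K viewer at 1.5H, matching the vmaf_4k model's assumption
console.log(pixelsPerDegree(2160, 1.5).toFixed(1)); // ≈ 58.6
```

The result is on the order of the ~60 ppd mentioned above. Compare that with the ~20 ppd of a Quest 2 headset: the same encode is effectively viewed far more “zoomed in”, which is why scores from the predefined models do not translate to VR.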

PSNR for 3D-VR

While it is not a strict rule, good VQ is generally expected on 2D videos when PSNR values are between 39 dB and 42 dB – for average to high complexity videos (see [1] [2]). However, this PSNR range is usually not enough to create a good immersive experience with Quest headsets. For instance, according to some empirical tests we did, we found that a PSNR of at least 48 dB is required for good VQ with Quest devices.

image source: Meta Quest Blog

The Best Encoding Settings for Meta Quest devices

A general overview of the Video Requirements can be found at the Meta Quest website. Additionally, the following encoding settings may be useful when building your encoding workflow: 

Resolution

The minimal resolution suggested by Meta is 3840 x 3840 px for stereoscopic content and 3840 x 1920 px for monoscopic content, which is much higher than earlier generations or mobile devices.  

H265 Video Codec Settings 

Video Codec – Meta Quest devices support the H264 (AVC) and H265 (HEVC) codecs. However, given that they require resolutions above 3840 px, we strongly recommend H265 due to its much higher encoding efficiency compared to H264.

GOP Length – In our tests we successfully achieved good VQ within the recommended bitrate range using a 2-second GOP length for 30 fps content. However, since the VR experience is not as latency sensitive for video on demand, we suggest using greater GOP lengths to improve encoding efficiency even further if needed.

Target bitrate and CRF – Meta suggests a target bitrate between 25-60 Mbps and as mentioned, we strongly suggest using the H265 codec to maintain high visual quality within that range. If the bitrate goes too far above the suggested maximum, customers may experience slow playback or stalling due to device performance issues. 

Having said all that, it is worth mentioning that setting a proper bitrate to meet the VQ expectations is really challenging, mainly because the bitrates necessary may change from one piece of content to another depending on their visual complexity. Because of that, we suggest using a CRF based encoding instead of a fixed bitrate. Specifically, we found that when talking about H265, a CRF between 17-18 would produce videos that are suitable for viewing on Quest headsets without excessively high bitrates. 

Building 360-VR encoding workflows with Bitmovin VOD Encoding

Bitmovin’s VOD Encoding provides a set of highly flexible APIs for creating workflows that fully meet Meta Quest encoding requirements. For instance:

  • If adaptive bitrate streaming is required at the output, Bitmovin Per-Title encoding can be used to automatically create the ABR ladder with the top rendition driven by the desired CRF target.
  • If progressive file output is required, a traditional CRF encoding can be used by capping the bitrates properly.
  • Additionally, Bitmovin filters can be used to create monoscopic content based on a stereoscopic input, for instance, cropping the original/stereoscopic video to convert it from a top-and-bottom or side-by-side array into a single one. Monoscopic outputs can be viewed on 2D displays, extending the reach of your 360 content beyond headsets.

Per-Title Encoding configuration for VR

The following per-title configuration may be used as a reference for encoding a VR content. Depending on the content complexity, the output may include from 4 to 7 renditions with the top rendition targeting a CRF value of 17.

perTitle: {
  h265Configuration: {
    minBitrate: 5000000,
    maxBitrate: 60000000,
    targetQualityCrf: 17,
    minBitrateStepSize: 1.5,
    maxBitrateStepSize: 2,
    codecMinBitrateFactor: 0.6,
    codecMaxBitrateFactor: 1.4,
    codecBufsizeFactor: 2,
    autoRepresentations: {
      adoptConfigurationThreshold: 0,
    },
  },
}

There are also full code samples here if you would like to dig deeper.

The same configuration can be used to encode any VR format such as top-and-bottom, side-by-side or monoscopic 360 content. The per-title algorithm will automatically propose a proper bitrate and resolution for each VR format based on the input details. Additionally, it is strongly recommended to use VOD_HIGH_QUALITY as an encoding preset and THREE_PASS as encoding mode. This will assure the Bitmovin Encoder delivers the best possible visual quality. 

In our tests using typical medium-high complexity content, we found that using a CRF of 17 produces good VQ for Meta Quest playback, with PSNR values above 48 dB and bitrates that are usually below the suggested maximum of 60 Mbps. 

Alternatively, traditional CRF encoding can be used instead of Per-title, for instance if only one rendition is desired at the output – with no ABR.

Creating monoscopic outputs from stereoscopic inputs

Usually, VR 360 cameras record content in stereoscopic format, either in top-and-bottom or side-by-side arrangements. However, depending on the customer use case, it may be necessary to convert the content from stereoscopic to monoscopic format. This can be easily solved with the Bitmovin VOD Encoding API by applying cropping filters that remove the required pixels or frame percentage from the stereoscopic content, turning it into monoscopic format, i.e., by removing the left/right or the bottom/top half of the input asset.

Top-Bottom Stereoscopic Format source: Blender Foundation

For instance, the following javascript snippet would remove the top side of a 3840 x 3840 stereoscopic content:

// ...

// Crop filter definition
let cropTopSideFilter = new CropFilter({
  name: "stereo-to-mono-filter-example",
  left: 0,
  right: 0,
  bottom: 0,
  top: 1920,
});

// Crop filter creation
cropTopSideFilter = await bitmovinApi.encoding.filters.crop.create(cropTopSideFilter);

// Stream filter definition
const cropTopSideStreamFilter = new StreamFilter({
  id: cropTopSideFilter.id,
  position: 0,
});

// StreamFilter creation
await bitmovinApi.encoding.encodings.streams.filters.create(<encoding.id>, <videoStream.id>, [cropTopSideStreamFilter]);

AV1 Codec Support on Meta Quest 3

In the recommended settings above, we strongly suggested using HEVC over H.264 because the newer generation codec offers greater compression efficiency that turns into bandwidth savings and a better quality of experience for users. Now with the Quest 3, you can take advantage of AV1, an even newer codec that outperforms HEVC. On average, our testing has shown that you can maintain equivalent quality while using around 30% lower bitrate with AV1. This will depend on the type of content you’re working with, so if you’re experimenting with AV1 for the Quest 3, choosing a bitrate that’s ~25% lower than your HEVC encoding would be a good place to start.  DEOVR shared a 2900p sample .mp4 file encoded with AV1, but you can also create your own with a Bitmovin trial account.
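Applying that suggested ~25% reduction to an existing HEVC ladder is simple arithmetic; a sketch (the function name is our own, and the achievable savings are content-dependent, so treat the result only as an experimentation starting point):

```javascript
// First-guess AV1 target bitrate derived from an existing HEVC bitrate,
// using the ~25% reduction suggested above as a default starting point.
function av1StartingBitrate(hevcBitrateBps, reduction = 0.25) {
  return Math.round(hevcBitrateBps * (1 - reduction));
}

console.log(av1StartingBitrate(40_000_000)); // 30000000 (40 Mbps HEVC -> 30 Mbps AV1)
```

From there, compare quality metrics or run viewing tests and adjust the reduction per title.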

Ready to start encoding your own 360 content for Meta Quest headsets? Sign up for a free trial and get going today! 

Related links:

Bitmovin Docs – Encoding Tutorials | Per-Title Configuration Options explained

Bitmovin Player 360 video demo

The post Encoding VR and 360 Immersive Video for Meta Quest Headsets appeared first on Bitmovin.

]]>
https://bitmovin.com/blog/best-encoding-settings-meta-vr-360-headsets/feed/ 0
Under the Hood of Server-Side Ad Insertion (SSAI) – The Challenges of Implementing Ad Monetization Technologies https://bitmovin.com/blog/ssai-server-side-ad-insertion/ https://bitmovin.com/blog/ssai-server-side-ad-insertion/#respond Thu, 10 Aug 2023 11:54:54 +0000 https://bitmovin.com/?p=265633 As the demand for video streaming escalates, so does the cost associated with maintaining and delivering high-quality content to viewers. Broadcasters, OTT Platforms, and any industry streaming video over the Internet are actively seeking robust monetization strategies to sustain and expand their businesses. While Subscription Video On Demand (SVOD) and Transactional Video On Demand (TVOD)...

The post Under the Hood of Server-Side Ad Insertion (SSAI) – The Challenges of Implementing Ad Monetization Technologies appeared first on Bitmovin.

]]>
As the demand for video streaming escalates, so does the cost associated with maintaining and delivering high-quality content to viewers. Broadcasters, OTT Platforms, and any industry streaming video over the Internet are actively seeking robust monetization strategies to sustain and expand their businesses. While Subscription Video On Demand (SVOD) and Transactional Video On Demand (TVOD) models meet the needs of certain companies, the Advertising Video On Demand (AVOD) model has emerged as the go-to choice for growth without burdening subscribers with high additional costs. AVOD allows content providers to engage a wider audience by offering free or lower-cost access to their video libraries while being subsidized by advertisements. Ad Insertion is then the key technical component to enable this, and two options are available to implement this model successfully: Server-Side Ad Insertion (SSAI) and Client-Side Ad Insertion (CSAI).

In this blog, we will focus on SSAI, how it works, the use cases and challenges for specific workflows and devices, and the different factors that come into play when implementing it.

What is Server-Side Ad Insertion (SSAI)?

To begin exploring the topic, let’s delve into what SSAI is and its benefits. The technology enables ads to be directly delivered within the streaming protocol, seamlessly stitching them into the video stream on the server side just before making it available to viewers. The advantages of this approach include:

  • Bypassing ad-blockers – This ensures ads reach viewers
  • Smoother integration – Transition from content to ad is more natural
  • Improved viewer experience – less buffering and interruptions during playback
  • Increased ad revenue – Advertisers are able to stream better-targeted ads to specific user bases
  • Flexibility – Ads can be adjusted in real-time, engaging viewers with more personalized ads that meet their interests

A diagram showing how SSAI works

How is it applied to HLS and DASH streams?

As adaptive bitrate (ABR) streaming is the standard for every platform streaming over the internet, SSAI needs to work in the context of two popular video streaming protocols – Dynamic Adaptive Streaming over HTTP (DASH) and HTTP Live Streaming (HLS). While both protocols use fragmented mp4 (ISO BMFF) or transport stream (TS) segments to deliver video content over HTTP to viewers, some protocol differences make the SSAI implementation different for each.

The Dash Implementation

In DASH, SSAI is usually implemented using Media Presentation Description (MPD) Periods. Periods divide the stream timeline into sections that can each carry different streaming properties (codecs, encryption, adaptation sets, etc.). When the stream manifest is requested, the ad server resolves in the background which ads should be delivered within the content and creates MPD Periods at the specific advertising time slots. These Periods contain the already-chunked ads with their metadata, stitched into the original MPD manifest before it is delivered to the viewer’s device.
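As a simplified illustration (the IDs, durations, codec strings and segment details are schematic, not the output of any particular SSAI service), an MPD with an ad stitched in as its own Period could look like:

```xml
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     profiles="urn:mpeg:dash:profile:isoff-on-demand:2011"
     mediaPresentationDuration="PT10M30S">
  <!-- Main content before the ad break -->
  <Period id="content-part1" duration="PT5M">
    <AdaptationSet mimeType="video/mp4" codecs="avc1.640028">
      <Representation id="video-3000" bandwidth="3000000" width="1920" height="1080"/>
    </AdaptationSet>
  </Period>
  <!-- Stitched ad: a separate Period, so it may use different codecs,
       adaptation sets, or (no) encryption compared to the main content -->
  <Period id="ad-break1" duration="PT30S">
    <AdaptationSet mimeType="video/mp4" codecs="avc1.64001f">
      <Representation id="ad-video-2000" bandwidth="2000000" width="1280" height="720"/>
    </AdaptationSet>
  </Period>
  <!-- Main content resumes -->
  <Period id="content-part2" duration="PT5M">
    <AdaptationSet mimeType="video/mp4" codecs="avc1.640028">
      <Representation id="video-3000b" bandwidth="3000000" width="1920" height="1080"/>
    </AdaptationSet>
  </Period>
</MPD>
```

The player treats each Period boundary as a point where stream properties may change, which is what makes the ad splice possible without re-encoding the whole asset.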

The HLS Implementation

In HLS, SSAI is often integrated using EXT-X-DISCONTINUITY tags which are present in media playlists. This concept is similar to Periods in DASH, but the main difference is that discontinuities across different playlists (like audio and video) do not have to be aligned. This approach may involve splitting the segmented video stream into smaller segments and splicing them at the appropriate point to insert the ad (the same is valid for DASH). Depending on development expertise and protocol preference, it may require more complex implementations, especially for live scenarios.
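A schematic HLS media playlist (segment names and durations are illustrative) showing an ad break wrapped in discontinuity tags:

```
#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:100

#EXTINF:6.0,
content_seg100.ts
#EXTINF:4.2,
content_seg101.ts

#EXT-X-DISCONTINUITY
#EXTINF:6.0,
ad_seg0.ts
#EXTINF:6.0,
ad_seg1.ts

#EXT-X-DISCONTINUITY
#EXTINF:6.0,
content_seg102.ts
```

Each EXT-X-DISCONTINUITY tag tells the player that the following segments may use different timestamps, encoding parameters, or encryption than the preceding ones.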


A diagram showing the setup of SSAI in DASH and HLS streams

Ads may also be injected into DASH and HLS manifests within a single Period or playlist, without discontinuities, but this approach presents another set of difficulties. The ad segments need to follow the same encoding for adaptation profiles and time synchronization as the main content, which can be challenging when ads need to be inserted in variable time slots or live streams. Also, if the content is encrypted or DRM (Digital Rights Management) protected, the ads must be encrypted using the same encryption schema, which might not be feasible at large scale. Another problem is client-side tracking, which requires specific metadata like EMSG boxes in the container format or Events and Timed Metadata inside the manifest.

Common SSAI use cases and their challenges

SSAI alongside DRM

SSAI is often required to work all along the DRM systems. DRM systems like Widevine, Fairplay, or PlayReady typically use common encryption algorithms to protect the content and require viewers to obtain a license or key to access it. This prevents piracy and other unauthorized access or distribution of the content.

As mentioned above, ads are often delivered separately from the video content and may not be encrypted, so they are placed into separate DASH Periods or HLS discontinuities to be correctly played back on target devices.


A diagram showing SSAI stitched within a DASH stream that is encrypted with DRM

Handling the transition from unprotected to encrypted content may also be challenging on the client side. For example, many steps must be done on MSE and EME API when integrating with javascript clients, like recreating SourceBuffers or waiting for license requests.

In addition to encryption, DRM can also control the distribution and access to ads. This can help prevent unauthorized sharing or reuse of ad content and ensure that the ads are only shown to authorized viewers. By working together, SSAI and DRM can help protect digital video content and ensure it is monetized effectively.

Utilizing SSAI in live streaming

In live streaming, SSAI is a powerful way to deliver targeted and personalized ads to viewers. When implemented, it can reduce latency during the transition between the content and the advertisement during playback. It can also enable dynamic ad scheduling, allowing the platform to change the ads due to inventory or campaign requirements. However, one of the main SSAI challenges is the need to handle segment timestamps carefully to ensure that the ads are inserted at the appropriate point in the video stream. This is because live video streams are constantly changing, and the timing of each segment can vary based on factors such as network latency and processing time.

In order to handle segment timestamps in live streaming, SSAI systems typically use so-called boundary detection. This involves analyzing the incoming video stream to detect critical points or boundaries between segments and then using this information to insert ads at the appropriate moment. In boundary detection, the SSAI system monitors the video stream for changes in the keyframe or scene, which usually indicate the start or end of a segment. The system then uses this information to determine the timing and duration of each segment and to insert ads accordingly. This process requires careful synchronization between the video stream and the ad server to ensure that the ads are inserted at the right time and do not interfere with the viewer’s experience.


A diagram that shows SSAI stitched within a live stream

Due to the considerable complexity of boundary detection, when SSAI is present in a live stream, the main content segments typically have large presentation timestamps (PTS), while the segmented ads placed in DASH Periods or HLS discontinuities for dedicated ad slots usually have PTS starting from 0. This may present issues to be handled on the player side, as making the timestamps continuous is challenging and might introduce buffering issues.
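On an MSE-based client, one common way to handle ad PTS restarting at 0 is to shift the ad onto the media timeline before appending its segments. A minimal sketch (the function and values are illustrative; a real player must also track offsets across every discontinuity):

```javascript
// Compute the value a player would assign to SourceBuffer.timestampOffset
// so that an ad whose PTS starts at adFirstPts (often 0) plays immediately
// after content that ends at contentEndTime (both in seconds).
function adTimestampOffset(contentEndTime, adFirstPts = 0) {
  return contentEndTime - adFirstPts;
}

// Content ends at 600.04s; the ad's first PTS is 0,
// so the ad must be shifted by 600.04s on the media timeline.
console.log(adTimestampOffset(600.04)); // 600.04
```

If the offset is even slightly wrong, buffered ranges fail to align and playback can stall, which is exactly the class of issue described above.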

One last complication to handle in HLS is the effect of splitting the last and first segments of the content stream where the ad is being placed. This often presents issues with the alignment of buffered ranges and may cause playback stalls if not handled correctly.

SSAI in streams viewed on Smart TVs

Delivering SSAI-enabled streams on Smart TVs has its benefits with specific challenges that must be faced, and there are multiple reasons for that. Most smart TV platforms are based on a browser-like environment running JavaScript. LG WebOS, Samsung Tizen, and other smart TV operating systems are exposing native player APIs that are, by their specifications, capable of playing adaptive streams in DASH or HLS. This sounds promising, but unfortunately, streams containing SSAI often do not work, and there is also an absence of proper logs to figure out what is wrong. These native players also do not give much control over how the stream is played back, like managing buffering and controlling ABR algorithms.

The benefit is that most of these platforms also support media source extensions (MSE) and encrypted media extensions (EME) API to manage video streaming using JavaScript, so players supporting SSAI in browsers should work. However, the issue with Smart TVs is the lack of standardization across different platforms. Each Smart TV manufacturer may implement different versions of protocols and specifications of APIs for handling streaming. This often brings various technical limitations to the platform environment, leading to issues supporting different devices. One issue is rollovers in segment timestamps which are common in many live streams, or possible changes in properties of the ad and content segments like encoding parameters, codec, encryption (DRM) transitions, etc. Another issue with SSAI on Smart TVs is the potential for buffering or latency. Smart TVs often have limited processing power and memory, making it difficult to handle ad transitions and provide stable buffering.
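Because capabilities vary so much between TV models, MSE-based players typically feature-detect before choosing a rendition. A sketch of that idea (in a browser you would pass MediaSource.isTypeSupported; the stub below merely stands in so the snippet is self-contained, and the codec strings are examples):

```javascript
// Return the first MIME/codec string the platform claims to support, or null.
function pickSupportedType(isTypeSupported, candidates) {
  return candidates.find((type) => isTypeSupported(type)) ?? null;
}

const candidates = [
  'video/mp4; codecs="hvc1.1.6.L123.B0"', // HEVC
  'video/mp4; codecs="avc1.640028"',      // H.264 fallback
];

// Stub: pretend this TV only accepts H.264
const stubIsTypeSupported = (type) => type.includes("avc1");
console.log(pickSupportedType(stubIsTypeSupported, candidates));
// -> video/mp4; codecs="avc1.640028"
```

The same pattern extends to EME: probe navigator.requestMediaKeySystemAccess for the DRM system before committing to a protected rendition.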

To address these issues, platforms looking to support these devices with SSAI-enabled streams need to put in a lot of effort and perform a lot of black-box testing to figure out the best integration for their streaming workflow. Working closely with Smart TV manufacturers is often required to ensure the ad insertion process is standardized and optimized for their platform. This is why, when deploying on different smart TVs and devices in general, it is crucial to consider the unique challenges of supporting them and the steps it takes to optimize the viewing experience for users.

SSAI workflow and third-party providers

Implementing SSAI is usually very complex, which is why many platforms outsource the technical integrations to third-party providers that are running their own SSAI services and are capable of delivering ads into a prepared manifest. There are various providers to choose from like:

  • Google Ads Manager DAI (Dynamic ad insertion)
  • MediaTailor (provided by Amazon Web Services)
  • Yospace
  • Broadpeak

The architecture of how SSAI providers integrate with the whole streaming delivery pipeline is a very interesting topic, and we plan to go into further depth on it in a future blog post. In the meantime, we recommend reading our SSAI and Adblocking blog that focuses more on this if you want to learn more.

Playing SSAI-enabled streams with the Bitmovin Player

The Bitmovin Player is designed to work seamlessly with server-side ad insertion solutions. It allows for the dynamic insertion of ads into video streams, ensuring a smooth playback experience for viewers. The Player supports ad stitching, seamlessly blending the ads with the video content during playback. This helps maintain a continuous and uninterrupted viewing experience.

The Bitmovin Player is also pre-integrated with Bitmovin’s Analytics and the Open Measurement Interface Definition (OMID) with its dedicated SDK. This helps give you actionable insights related to your audience and quality of experience metrics while tracking ad events. This data can help you adjust and set where your SSAI ads are utilized since you’ll know your most popular content, where/when viewers are watching it, and how your content and ads perform when streamed, ensuring a smooth viewing experience. Ultimately this will help you optimize your monetization strategy and maximize your business’ potential. To see SSAI in action, check out the Bitmovin player SSAI demo page.

In case there are any device limitations supporting SSAI streams, Bitmovin Player is also compatible with CSAI (Client-side ad insertion) solutions like VAST (Video Ad Serving Template) and VMAP (Video Multiple Ad Playlist) standards produced by IAB (Interactive Advertising Bureau). For more on Bitmovin’s CSAI support, we will be writing up a dedicated blog about it soon, but in the meantime, you can see how it works with our Player by trying our CSAI demo.

Stitching it up

From what we’ve listed above, Server-Side Ad Insertion (SSAI) provides several advantages, including bypassing ad blockers, seamless ad integration, enhanced viewer experience, increased ad revenue, and content security with DRM systems. However, implementing SSAI in streaming workflows for specific devices can be complex due to platform differences and technical limitations. Apart from platforms doing it themselves, Player options like Bitmovin’s can help streamline the process, enabling effective ad monetization and a smooth viewing experience, enhanced with OMID and Analytics integrations for effective workflow optimization. Overall, SSAI remains a potent tool for content providers to monetize their video streaming workflows and maximize revenue potential.

If you’d like to see how Bitmovin’s solution stack can benefit your streaming workflow – sign up for our 30-day free trial.

The post Under the Hood of Server-Side Ad Insertion (SSAI) – The Challenges of Implementing Ad Monetization Technologies appeared first on Bitmovin.

5 Ways React Native & Flutter Can Simplify Video Streaming Workflows https://bitmovin.com/blog/react-native-flutter-streaming-workflows/ https://bitmovin.com/blog/react-native-flutter-streaming-workflows/#respond Tue, 25 Jul 2023 08:01:06 +0000 https://bitmovin.com/?p=264456 Developing an application built to stream video can take a lot of work, especially for smaller development teams or those with minimal video technical expertise. Normally, to build on platforms such as iOS and Android, an experienced developer for each would be needed, but with frameworks such as React Native and Flutter, it’s gotten much...

The post 5 Ways React Native & Flutter Can Simplify Video Streaming Workflows appeared first on Bitmovin.

Developing an application built to stream video can take a lot of work, especially for smaller development teams or those with minimal video technical expertise. Normally, to build on platforms such as iOS and Android, an experienced developer for each would be needed, but with frameworks such as React Native and Flutter, it’s gotten much easier to support more platforms with less. There are many benefits to utilizing these frameworks within a streaming workflow, especially the fact that with React Native, you’re able to use the code you’ve deployed for your web platform across multiple device types. Additionally, with dedicated SDKs, developers can deploy faster and enable a consistent user experience across all devices.

In this blog post, we’ll explore the benefits and drawbacks of utilizing React Native or Flutter for video streaming workflows. Additionally, we’ll examine how Bitmovin’s SDKs further enhance the development process.

1. Simplifying Cross-Platform Development

React Native and Flutter were built specifically for cross-platform development, allowing developers to create video streaming apps that run seamlessly on multiple platforms without maintaining separate native codebases. React Native’s advantage is that a single codebase written in JavaScript delivers consistent experiences across iOS and Android devices. It leverages platform-specific rendering APIs, resulting in a near-native user interface and performance. Similarly, Flutter enables the development of visually rich and performant video streaming apps for iOS, Android, and web platforms. With Flutter, teams can write code once and deploy it across various devices.

However, while both React Native and Flutter allow you to implement a consistent view across platforms, there are still slight differences in UI and behavior due to variations in native components and rendering approaches, which may require additional effort to get right. Also, because Flutter is programmed in Dart and built around its widget model, there is a learning curve to tackle, as Dart is a newer and less widely known language than JavaScript. To ensure a seamless experience on different devices, developers must test carefully.

2. Native-like Performance

Both React Native and Flutter offer native-like performance for video streaming apps. React Native utilizes native components, which allows it to leverage the device’s GPU for optimized video playback. This results in smooth and efficient streaming, ensuring that users enjoy their content without any performance hiccups.


React Native UI Development View

Flutter, on the other hand, takes a different approach. Flutter ships its own implementations of each UI control rather than deferring to those provided by the system. Content is drawn to a texture, and Flutter’s widget tree is kept entirely internal, so there’s no place for an Android view to exist within Flutter’s internal model, nor can such views render interleaved with Flutter widgets. If native UI components are needed, Flutter’s PlatformView comes in handy. This component maps to the native View and UIView components on Android and iOS, allowing development teams to use the native UI components they would normally rely on when building apps natively on those operating systems.


Flutter Development UI Workflow

That said, it’s important to acknowledge that achieving true native performance can be challenging due to the additional abstraction layers these frameworks introduce. While React Native and Flutter are built to optimize performance, there may be scenarios where fully native apps outperform them. In complex video streaming applications with intensive processing requirements, teams may need to implement platform-specific optimizations or fall back to native code for performance-critical tasks. However, this matters less for teams whose priority is optimizing their process and launching quickly on more devices.

3. Quicker Feedback and Updates

React Native’s hot reloading and Flutter’s hot reload features greatly enhance the development process for video streaming workflows. With hot reloading, React Native developers can instantly see code changes as they modify their app. This provides quick feedback and allows for rapid adjustments, leading to efficient iterations. Similarly, Flutter’s hot reload feature enables developers to view real-time updates to the app’s UI and behavior, making it easier to experiment with different video streaming features and refine the app’s functionality. These features streamline the development process for small teams, allowing them to iterate more efficiently and fine-tune their video streaming workflows.

That said, managing the app’s versions and current state remains important: hot reloading can unintentionally introduce bugs into the application environment and disrupt the app’s functionality if state isn’t maintained properly.

4. Robust Features and Customizable UI

React Native and Flutter provide extensive UI components and customization options, enabling teams to create visually appealing and tailored user interfaces for their video streaming apps. React Native offers a wide range of community-driven components and libraries that can be easily integrated into the app. This allows developers to utilize existing solutions and deliver a better user experience.

On the other hand, Flutter has a widget-based approach with a robust set of pre-built UI components that enable developers to customize them extensively. This simplifies the process and helps save time and effort when designing and implementing elements. Along with ease-of-use, teams will be able to develop a unique setup and incorporate branding aspects within the application. 

The downside here is that even though teams can customize features and elements extensively, heavy customizations can cause performance issues and, in turn, degrade the streaming experience for viewers, so they should avoid overdoing it.

5. Extensive Open-Source Ecosystem

React Native and Flutter have large developer communities, providing access to plenty of libraries, tools, and resources. This allows small teams to leverage existing options and tap into community knowledge to overcome video streaming challenges. The communities provide ongoing support, ensuring platform updates, bug fixes, and performance optimizations. Support and guidance from a passionate developer network can make a huge difference, and this factor alone is why both of these frameworks have such widespread adoption.

Regarding developer communities, check out Bitmovin’s developer community, which focuses on video streaming workflow aspects and questions on deploying Bitmovin’s solutions.

Bitmovin’s Dedicated React Native and Flutter SDK

With our focus on simplifying the streaming process and enabling developers to stream on more platforms, our team has released our React Native SDK and is currently working on a Flutter SDK. The React Native SDK offers a comprehensive suite of features specifically designed for integrating advanced video streaming functionalities into React Native applications. It handles critical aspects such as UI customization, video playback, adaptive streaming, content protection, and analytics, reducing the need for teams to build these components from scratch. The upcoming Flutter SDK will extend Bitmovin’s support to the Flutter framework, providing a powerful toolkit for building high-quality video streaming applications. These SDKs make it easier for development teams to deploy essential aspects and focus on building their app’s unique features.

Wrapping It Up (Pun Intended)

React Native and Flutter provide powerful solutions for simplifying video streaming workflows. The cross-platform capabilities, native-like performance, hot reloading for rapid iterations, customizable UI options, and extensive ecosystems make React Native and Flutter ideal for small teams or ones with limited video streaming workflow knowledge. Bitmovin’s React Native SDK and upcoming Flutter SDK further enhance and optimize the video streaming development process. By leveraging these frameworks and SDKs, teams can streamline their video streaming workflows, reduce development time, and deliver the highest quality of experience to their users.

Also, if you’d like to see how Bitmovin’s solution stack can benefit your streaming workflow – sign up for our 30-day free trial.

How We Built a Website Like YouTube Using Bitmovin’s Streams https://bitmovin.com/blog/how-to-build-youtube-like-platform-with-streams/ https://bitmovin.com/blog/how-to-build-youtube-like-platform-with-streams/#respond Tue, 04 Jul 2023 17:16:16 +0000 https://bitmovin.com/?p=263708 Since entering the streaming space, we’ve always been intrigued to understand the effort it would take and what it would be like to build out something similar to YouTube that would allow users to view and upload their own videos. As our product teams conduct frequent hackathons, this was a perfect opportunity to dive into...

The post How We Built a Website Like YouTube Using Bitmovin’s Streams appeared first on Bitmovin.

Since entering the streaming space, we’ve always been intrigued to understand the effort it would take and what it would be like to build out something similar to YouTube that would allow users to view and upload their own videos. As our product teams conduct frequent hackathons, this was a perfect opportunity to dive into it.

To achieve this, we decided to use Bitmovin’s latest solution, Streams, together with Next.js 13 and Tailwind CSS. As a junior developer on the Streams team (with Mihael as the product manager), this was also a good way to see its capabilities firsthand and experience what many platforms go through when implementing it in a real streaming environment. When we launched Streams, we built it to give teams a feature-rich end-to-end solution they could quickly deploy or integrate into their existing streaming workflow. We also set out to make it a robust solution anyone could use, no matter their level of experience, to get up and streaming quickly.

In this blog post, we will provide an overview of the tools we used and share our experience implementing the various products in combination to develop this YouTube-like app in less than a day.

Overview of the Tools and How They Were Used

Streams

As we mentioned before, Streams is the latest offering that Bitmovin has brought to market. It’s a powerful out-of-the-box live and on-demand video streaming solution that provides an end-to-end workflow and integrates Bitmovin’s industry-leading Live and VOD Encoding, Video Player, and Analytics. With Streams, platforms are able to host, manage, and stream high-quality video anywhere in the world with a unified and seamless workflow. With regards to the specific features, to make content available, simply upload your video library or connect your live source to start encoding. Our proprietary Per-Title encoding will make your content available in the highest quality with lower bitrates so viewers can have the best streaming experience in low bandwidth environments. 

Per-Title encoding will also help you save tremendously on storage and delivery costs due to the lower bitrates. For video playback, Streams enables you to brand the Bitmovin Player to your company and customize the Player skin, giving your users a unique and personalized viewing experience. Additionally, with support for client-side ad insertion, you can monetize content with VAST, VMAP, and VPAID ads that can be added in pre-roll, mid-roll, and post-roll. Then, to monitor and track each view session, Streams provides in-depth metrics with its direct integration with Bitmovin’s Analytics, giving you the ability to analyze the user experience and content quality and performance. Lastly, Streams is integrated with multiple global CDN providers, ensuring your content has a global reach and can be viewed in the highest quality anywhere in the world.

Tailwind CSS

We chose Tailwind CSS for this project because it offers a highly efficient and intuitive approach to styling web interfaces. With its extensive collection of utility classes, you can easily create beautiful, responsive designs without the need to write custom CSS code. Tailwind CSS allows you to save time and effort by providing pre-defined classes that can be combined to achieve desired design outcomes. Its focus on responsiveness and customization options ensures that you can easily create interfaces that perfectly fit your needs. By leveraging Tailwind CSS, we can streamline the front-end development process and deliver visually impressive user interfaces in a more efficient manner.

Next.js

Next.js is the perfect choice for you if you want to build fast and scalable web applications. It is a popular open-source framework for building server-side rendered (SSR) React applications. It simplifies development and ensures optimal performance by providing a powerful set of features such as automatic code splitting, server-side rendering, and dynamic imports. Additionally, its seamless integration with popular tools and supportive community make it an ideal framework for creating robust and efficient applications.

How We Built the YouTube-Like Platform

Now, let’s move on to how we used these tools to create our YouTube-like platform.

Our first step was to create a basic Next.js application with a homepage that could display a list of videos. We used the Bitmovin Streams API to fetch the list of videos from our Bitmovin account and display them on the homepage using React components.
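A minimal sketch of that fetch step follows. Note that the endpoint path and header name here are our assumptions based on Bitmovin’s general API conventions, so verify them against the current Streams API reference before use.

```typescript
// Builds the request used to list Streams videos. The endpoint path and the
// API-key header name are assumptions modeled on Bitmovin's public API
// conventions; check the current API reference before relying on them.
function buildListStreamsRequest(apiKey: string, limit = 25) {
  return {
    url: `https://api.bitmovin.com/v1/streams/video?limit=${limit}`,
    headers: {
      "X-Api-Key": apiKey, // API key from the Bitmovin dashboard
      "Content-Type": "application/json",
    },
  };
}

// The homepage would pass this to fetch() and map the response to components.
const req = buildListStreamsRequest("my-demo-key", 10);
```

In a Next.js app, this request would run server-side (for example, inside a data-fetching function), keeping the API key out of the browser.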

Once done, we added a video player component that allows users to view individual videos. We used the Bitmovin Player for this piece to stream the videos and were able to use its out-of-the-box Player controls, such as play, pause, volume control, speed choice, audio track selection, and video quality. We could have also implemented styling options for the Player for branding purposes, but we wanted to focus solely on the core capabilities of the platform. Next, we embedded the Player into the app, which was very easy as we only had to import the JS library and utilize the web component. Alternatively, we could have used the iFrame option to embed the video, but this would have made it harder to customize the experience in the future.
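As a rough sketch of that embed, the setup reduces to a container element plus a player config and a source object. The licence key and manifest URL below are placeholders, and the commented lines assume the `bitmovin-player` npm package.

```typescript
// Minimal player-setup sketch. The licence key and stream URL are placeholders.
const playerConfig = {
  key: "YOUR-PLAYER-LICENCE-KEY",
  playback: { autoplay: false, muted: false },
};

const source = {
  hls: "https://example.com/video/stream.m3u8", // manifest of the selected video
  title: "My uploaded video",
};

// Inside the React player component (browser only), roughly:
//   import { Player } from "bitmovin-player";
//   const player = new Player(containerElement, playerConfig);
//   await player.load(source);
```

The iFrame embed mentioned above avoids this setup entirely, at the cost of the customization flexibility we wanted to keep.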

After that, we added a video upload page, allowing users to upload their own videos to our platform. We used the Bitmovin Upload API for this so users could create a new video asset, upload the video file, and encode it into multiple formats for playback on different devices. We also added a basic form that allows users to enter metadata such as the video title and description.

Finally, to style our application, we used Tailwind CSS. We used the pre-defined Tailwind classes in our React components to style our user interface.

Our Thoughts Looking Back

Creating a video streaming platform is a complex task, but with the right tools and frameworks, it can be achieved efficiently and effectively. By using Bitmovin Streams, Next.js, and Tailwind CSS, we were able to easily create a YouTube-like platform that was powerful, fast, and user-friendly. The combination of these powerful tools allowed us to handle complex tasks with ease, such as video encoding, playback, and streaming, while also allowing for rapid UI development and responsive design.

Along with the front and backend implementation, we were also able to gather clear insights into the user experience and quality and performance of the content streamed through the integration with Bitmovin’s Analytics. With this integration, we are able to leverage in-depth audience data on overall views, unique users, browser sessions, regions where the video has been viewed, and much more. 

As a follow-up to this project for our next hackathon, we are also thinking of extending our YouTube-like platform to showcase how live streaming can be implemented, as Bitmovin’s Streams also supports this capability. Below are links to a short video of the results from this project and our GitHub repository, which others can use as inspiration or a foundation to get started with the technologies mentioned above.

Video:

Github Repo: https://github.com/bitmovin/streams-examples 

If you’d like to test out Streams, sign up for our free 30-day trial today and gain access to all of our solutions.

From Workouts to Wellness: Navigating the 5 Main Challenges of Health and Fitness Video Streaming Workflows https://bitmovin.com/blog/health-fitness-video-streaming-challenges/ https://bitmovin.com/blog/health-fitness-video-streaming-challenges/#respond Thu, 29 Jun 2023 13:55:15 +0000 https://bitmovin.com/?p=263443 Video streaming has emerged as a powerful tool in health and fitness, enabling platforms to engage with individuals looking to achieve their health and wellness goals wherever they are. Whether in the comfort of their homes, on the go, or gathered in groups at physical locations around the globe, video streaming unlocks a new world...

The post From Workouts to Wellness: Navigating the 5 Main Challenges of Health and Fitness Video Streaming Workflows appeared first on Bitmovin.

Video streaming has emerged as a powerful tool in health and fitness, enabling platforms to engage with individuals looking to achieve their health and wellness goals wherever they are. Whether in the comfort of their homes, on the go, or gathered in groups at physical locations around the globe, video streaming unlocks a new world of possibilities for people looking to get in shape. However, ensuring a seamless and captivating user experience can create unique challenges for platforms and service providers.

In this blog, we will explore the main hurdles health and fitness video streaming platforms encounter and discuss potential solutions for each.

1. Video Encoding and Transcoding: Achieving Compatibility and Efficiency

To make content playable and support different device screen sizes and network capabilities, efficient video encoding and transcoding solutions are necessary. Without an encoding ladder optimized for every major device, the viewing experience can be significantly limited, giving your users a bad experience and possibly driving them to churn. This is why utilizing a strong encoding solution such as Bitmovin’s Video On-Demand (VOD) Encoder benefits you greatly, ensuring your users have a smooth and consistent viewing experience. You can optimize VOD Encoder settings for your content through Per-Title Encoding, which plays a vital role by dynamically adjusting encoding parameters for each video, maximizing quality and bandwidth efficiency. With Bitmovin’s industry-leading Per-Title algorithm, you can deliver superior video quality while reducing storage and delivery costs.
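As a toy illustration of the per-title idea (this is not Bitmovin’s actual algorithm, just a sketch of adapting a reference ladder to content complexity):

```typescript
// Toy per-title illustration: scale a fixed reference bitrate ladder by a
// per-asset complexity score (0.6 = simple content, 1.0 = reference,
// 1.3 = high-motion content). NOT Bitmovin's proprietary algorithm.
const referenceLadderKbps = [235, 750, 1750, 3000, 4500];

function perTitleLadder(complexity: number): number[] {
  return referenceLadderKbps.map((kbps) => Math.round(kbps * complexity));
}

const simpleTalkingHead = perTitleLadder(0.6); // spends fewer bits on easy content
const sportsHighMotion = perTitleLadder(1.3); // spends more bits where needed
```

A real per-title analysis also typically adjusts resolutions and the number of rungs per asset, not just the bitrates.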

Check out our whitepaper on Per-Title Encoding technology to learn more about the benefits of Per-Title optimization.

When it comes to live streaming, one of the primary challenges is ensuring the stability of the stream. This requires a robust internet connection and a reliable streaming solution that can minimize disruptions and buffering during live broadcasts. By leveraging a reliable encoder, such as Bitmovin’s Live Event Encoder, that supports multiple codecs like H.264, H.265 (HEVC), and VP9 and protocols like RTMP, SRT, and Zixi, you can minimize the bitrate needed for playback, optimizing bandwidth usage and improving stream stability. This benefits viewers with slower internet connections, ensuring a smooth and uninterrupted streaming experience.


2. Device support, ABR streaming, and a customizable Player: Personalized Experiences for Every User

When delivering smooth and personalized streaming experiences to your users, providing broad device support, adaptive bitrate streaming, and a customizable player are essential for you and your development team. Utilizing an open-source player in this regard can be good for select device types and platforms, but it won’t be able to cover every major device in use. You can maximize your reach and engage with a larger audience by ensuring compatibility across a wide range of devices. The player must also be able to adapt to the user’s available bandwidth to guarantee they can still stream the video in the highest possible quality, which isn’t automatically built into every player option out there. Additionally, having a customizable player skin to personalize the viewing experience goes a long way with users, pushing them to engage more with the current content.

As all of these are essential functionalities, having a single video player solution that supports various platforms and devices with ABR and other capabilities helps simplify the development process and gives you more control over how users experience and engage with your content. This is where Bitmovin’s video Player stands out, as you can seamlessly deliver your health and fitness videos across web browsers, mobile devices, smart TVs, Set-top boxes, and other popular platforms while ensuring the best viewing experience. This full feature set and broad device support eliminate the need for separate player implementations and streamline your development efforts.
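Conceptually, the adaptation logic a player runs boils down to: estimate throughput, then choose the highest rendition that fits under it with a safety margin. A simplified sketch:

```typescript
// Simplified ABR rendition selection: choose the highest-bitrate rendition
// that fits within measured throughput, keeping a 20% safety margin.
interface Rendition {
  height: number;
  bitrateKbps: number;
}

// Ladder sorted from lowest to highest bitrate.
const renditions: Rendition[] = [
  { height: 360, bitrateKbps: 800 },
  { height: 720, bitrateKbps: 2500 },
  { height: 1080, bitrateKbps: 5000 },
];

function pickRendition(throughputKbps: number, margin = 0.8): Rendition {
  const budget = throughputKbps * margin;
  const fitting = renditions.filter((r) => r.bitrateKbps <= budget);
  // Fall back to the lowest rendition if nothing fits the budget.
  return fitting.length > 0 ? fitting[fitting.length - 1] : renditions[0];
}
```

Production ABR algorithms also weigh buffer level, switch frequency, and dropped frames, but the throughput check above is the core idea.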


3. Live-to-VOD: Increasing the lifespan and expanding access to content

Health and fitness platforms often stream live workout and wellness sessions, events, and classes. Not converting these live streams into on-demand video wastes their potential value, since on-demand availability gives your users more video to engage with at their convenience. By utilizing a tool that enables live-to-VOD, such as the encoding solutions provided by Bitmovin, platforms can automate capturing, encoding, and storing live streams for on-demand use. This functionality helps improve user satisfaction and increase content views, while maximizing the ROI of your live stream production. It also enables users to revisit past sessions, catch up on missed classes, and follow their fitness and wellness journey at their own pace.

4. Analytics and Viewer Insights: Understanding User Engagement

Analytics and viewer insights are vital for every health and fitness platform looking to maximize user engagement and content performance. Without the right insights being tracked, you and your team may be unaware of performance issues, buffering problems, or other technical issues that impact the viewing experience. You may also face challenges in understanding your users’ behavior, how they are viewing your content, and how your ads are performing if you’re utilizing them within your workflow.

A comprehensive metrics offering such as Bitmovin’s Analytics is essential as it enables you to overcome these challenges. Bitmovin’s solution provides over 200 metrics and filters, real-time monitoring, and actionable insights that empower you to optimize your content strategy, address technical issues promptly, and deliver a superior streaming experience. This data-driven approach enables you to maximize user engagement, enhance customer satisfaction, and stay ahead in the competitive health and fitness streaming landscape.
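Some of these quality-of-experience metrics have precise definitions. For instance, rebuffering percentage is total stall time over total watch time; here is an illustrative calculation (not Bitmovin’s Analytics API):

```typescript
// Illustrative QoE calculation: rebuffering percentage is total stall time
// divided by total watch time (playing + stalled), expressed as a percent.
interface SessionEvent {
  type: "playing" | "buffering";
  durationMs: number;
}

function rebufferPercentage(events: SessionEvent[]): number {
  const stalled = events
    .filter((e) => e.type === "buffering")
    .reduce((sum, e) => sum + e.durationMs, 0);
  const total = events.reduce((sum, e) => sum + e.durationMs, 0);
  return total === 0 ? 0 : (stalled / total) * 100;
}

const session: SessionEvent[] = [
  { type: "playing", durationMs: 57_000 },
  { type: "buffering", durationMs: 3_000 },
];
// 3s of stalls in 60s of watch time is 5% rebuffering
```

Startup time, error rate, and similar metrics follow the same pattern of aggregating per-session player events.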

5. Ease-of-use and scalability: Simplifying Development and Accommodating Growth

The last main challenge on our list is simplifying the deployment process and ensuring scalability. This is crucial for every health and fitness platform: when implementing a video streaming solution, it’s essential to protect the development team’s time by prioritizing ease-of-use, which streamlines the integration and setup of each part of the workflow. Implementing the wrong piece can increase your encoding time, make content harder to access, and even increase buffer times on your video player if it’s too heavy. It can also force you to forgo supporting new devices if technical expertise becomes an issue.

Opting for solutions that offer user-friendly interfaces, comprehensive documentation and guides for API usage, and robust support, such as Bitmovin’s Streams solution and other video streaming technology, ensures development teams aren’t breaking their backs to get up and streaming. Additionally, depending on the framework choice, such as React/React Native or Flutter, smaller teams or ones with less video streaming expertise can reuse the code they’ve already deployed for the web across additional platforms and device types, enabling them to support more devices and easily customize each aspect without deep expertise in every native framework.

Furthermore, scalability becomes a key consideration as the platform grows and user demand increases. The chosen video streaming solution should provide flexible scaling capabilities, such as being cloud-native, allowing the platform to accommodate a growing content library and many concurrent viewers. This guarantees the platform can seamlessly meet workflow needs and user demands, even during peak usage periods, providing a smooth, consistent, and uninterrupted viewing experience.

In Conclusion

As you can tell from the 5 main points above, health and fitness streaming platforms face unique challenges in delivering seamless experiences. Bitmovin’s solutions for video encoding, live streaming, Player, and analytics provide essential tools to overcome these challenges and help simplify and scale streaming workflows. With efficient on-demand and live encoding, broad device support with ABR and customizable player interfaces, live-to-VOD functionality, and comprehensive analytics, platforms can optimize user engagement, make development easier, and ensure exceptional streaming experiences. With Bitmovin’s technology, you can successfully navigate the complexities of video streaming workflows and deliver high-quality content to your global audience.

If you want to see how Bitmovin’s solutions can help you simplify your existing streaming process or provide an end-to-end workflow to help launch your video streaming platform, sign up for our 30-day free trial to start testing them today (No credit card required)
