Mitigating the Cost of Errors with Granular Data for Video Analytics
https://bitmovin.com/blog/video-error-cost-mitigation-granular-data/ | Tue, 12 Jan 2021

The post Mitigating the Cost of Errors with Granular Data for Video Analytics appeared first on Bitmovin.

Video on the web has changed dramatically in the past ten years. We’ve shifted from progressive downloads that grab a full video, using plug-ins and strictly proprietary file formats to play, to streaming small chunks of data that support a wide range of network capabilities and formats. 
The increasing granularity of video transactions means increased opportunity for similarly granular data in video streaming analytics, such as startup time, error percentage, buffering rate, and start-up failures. This is all important information to take advantage of as companies try to keep pace with customer expectations while managing costs.

The most important video performance metrics according to Bitmovin’s Video Developer report
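The startup-time, error-percentage, and buffering metrics above can be computed from per-session playback records. Here is a minimal sketch in Python; the event schema and field names are invented for illustration, not Bitmovin's actual data model:

```python
# Hypothetical per-session playback records; the field names are
# illustrative, not Bitmovin's actual schema.
sessions = [
    {"startup_ms": 820,  "buffering_ms": 1400, "watched_ms": 600_000, "error": False},
    {"startup_ms": 2100, "buffering_ms": 9000, "watched_ms": 45_000,  "error": True},
    {"startup_ms": 650,  "buffering_ms": 0,    "watched_ms": 300_000, "error": False},
]

def qoe_summary(sessions):
    """Aggregate the granular metrics named above across sessions."""
    n = len(sessions)
    total_time = sum(s["buffering_ms"] + s["watched_ms"] for s in sessions)
    return {
        "avg_startup_ms": sum(s["startup_ms"] for s in sessions) / n,
        "error_pct": 100.0 * sum(s["error"] for s in sessions) / n,
        # Buffering rate: share of total session time spent rebuffering.
        "buffering_pct": 100.0 * sum(s["buffering_ms"] for s in sessions) / total_time,
    }

print(qoe_summary(sessions))
```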

Granular Data vs. Low Granularity in Video Analytics

Delivering video with standard HTTP servers means companies can use readily available web infrastructure to host and distribute video, which also means common web tools like Google Analytics can provide some insight into how a user interacts with video. However, there are limits to the insights these tools can give you.
Google Analytics is designed to treat website visits as discrete events to track the user across multiple pages—it’s not exactly suited to monitor the stream-based architecture that web video has become. It can’t provide insights like how long a video was watched or what bit rate was delivered at a specific time.
Data analytics tools that specialize in video, like Bitmovin’s, are designed to analyze streaming video on a per request level. This grants you a significantly deeper level of insight into how customers interact with content. With granular data in analytics, you don’t have to know the metrics you need in advance or build custom triggers that handle different players. Instead, you’ll receive relevant, comprehensive metrics without needing to devote developer resources.
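To make the contrast with page-oriented tools concrete, here is a sketch of what a per-request analytics beacon might contain. The collector schema and field names are hypothetical, purely for illustration:

```python
import json
import time

def segment_event(session_id, segment_idx, bitrate_kbps, download_ms):
    """One analytics beacon per media segment request -- the per-request
    granularity that page-oriented tools don't capture. All field names
    here are illustrative, not a real collector's schema."""
    return json.dumps({
        "session": session_id,
        "segment": segment_idx,
        "bitrate_kbps": bitrate_kbps,   # which rendition was delivered...
        "download_ms": download_ms,     # ...and how quickly, at this moment
        "ts": int(time.time() * 1000),
    })

beacon = segment_event("sess-42", 17, 3200, 480)
print(beacon)
```

Emitting one such event per segment is what lets an analytics backend answer questions like "what bitrate was delivered at minute 12?" that a page-view model cannot.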
These per-request insights help you understand every aspect of how clients interact with your content, such as subtitle usage, muting, what devices video is being served to, even when users pause and resume playback. While larger companies may be able to build pipelines that generate codecs for every possible device and frame rate, those with more limited resources have to prioritize. Granular video consumption data is vital for choosing how to allocate your resources.

The Cost of Errors

A huge benefit of granular data for video analytics is gaining insight into users who experience an error after beginning a video. Vimeo found that about 6 percent of client churn on their platform resulted from users experiencing a technical error. Errors are a fact of life when streaming video, regularly affecting more than 5 percent of desktop devices.

Errors by Device Type

These errors also have high costs, especially when they can’t be properly tracked, monitored, and attributed. A 2013 study by Krishnan and Sitaraman found that “a viewer who experienced failure is 2.32% less likely to revisit the same site within a week.” In a review of Bitmovin’s customer database, Product Manager Christoph Prager found that errors fall into three categories: Clear, Ambiguous, and Unclear.

“… a clear error is when an error message and an error code point a developer directly to an underlying issue. Whereas our definition of an ambiguous error is when an error message and error code point towards a problem area, and unclear errors are where an error code and/or message did not provide any insights into the root cause of the problem or error.”

According to that database, the most costly error type is the Ambiguous category, which accounts for 65% of all errors. Fortunately, Bitmovin Analytics tracks errors, and you can use Bitmovin's error cost calculator to see exactly how much these issues are costing you. With that visibility, you can more easily prioritize and fix problems, reducing the number of clients turned away from your content by technical errors and retaining them as loyal customers.
Error breakout by category
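The churn figure above can be turned into a rough revenue estimate. The sketch below is a back-of-the-envelope model in the spirit of an error cost calculator, not Bitmovin's actual formula; the traffic and ARPU figures are invented:

```python
def monthly_error_cost(sessions_per_month, error_rate, arpu, churn_lift=0.0232):
    """Rough revenue at risk from playback errors.

    churn_lift reflects Krishnan & Sitaraman's finding that a viewer who
    experiences a failure is 2.32% less likely to return within a week;
    everything else here is an illustrative simplification."""
    failed_views = sessions_per_month * error_rate
    lost_viewers = failed_views * churn_lift
    return lost_viewers * arpu

# 1M monthly sessions, a 5% error rate, and $10 ARPU:
print(f"${monthly_error_cost(1_000_000, 0.05, 10.0):,.0f} at risk per month")
# -> $11,600 at risk per month
```

Even with these modest assumptions, the model shows why shaving a point off the error rate, or resolving the 65% of errors that are merely Ambiguous, pays for itself quickly.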

Using Granular Data for Resource Efficiency

Think of streaming video analytics as an ever-present helping hand when it comes to making quick resource management decisions. For example, if your servers hit capacity during a live event, timely and actionable analytics can mean the difference between cost-effective, appropriate scaling and losing a viewer who steps away during an interruption.
Of course, there are many dimensions of video analytics that you can choose to track. The Bitmovin analytics platform, for example, offers the ability to track over forty discrete metrics. But let’s highlight a few that are particularly crucial to business success.

Improving Viewer Experience

The biggest overall goal of video analytics is understanding and predicting how customers interact with your content. Granular data analysis allows you to start making reasonable assumptions about when and how people will interact with your content in the future. Efficiently applied, that data can help you manage client expectations and establish an auditable and improvable record of service.
This is particularly important any time you’re providing live support for customers experiencing video issues. You may have to immediately assess a client’s network connection, or how long they’ve been watching a video. Bitmovin’s real-time video analytics platform can reduce the time your customer service team spends trying to get to the bottom of how your video is being consumed by the customer. 
Remember, not everything that affects your customers is under your control. Supplemental services, like video management (e.g., DRM) and advertisement management, can impact customer experience. Analytics are particularly important in these instances because the knowledge they provide is essential for managing your relationship with two stakeholders: your customer and your advertising partner. Monitoring the effect these adjacent services have on your viewers helps you ensure they are positively affecting the customer experience.
In addition to helping you understand where customers turn away from content, analytics can help you feed clients relevant content to keep them engaged. Granular data helps ensure you’ll recommend videos customers watch start to finish rather than pushing them to videos that get interactions, but might not hold their attention.

Reducing Costs

One of the side effects of streaming video data is the increased importance of having the server as close to the customer as possible. As a result, video providers rely extensively on content delivery networks (CDNs). These services charge for data transferred, and video can represent a large portion of these bandwidth costs. Analyzing information like device fragmentation can enable you to directly target your user base while making efficient use of cloud-based CDNs.
Keeping track of data like video startup time, client resolution, and percentage of time spent buffering can help you contextualize network costs, develop a cheap and efficient architecture, direct development efforts, and prevent customer churn. Flagging video quality and load time issues can prevent encoding bottlenecks before they happen. Bitmovin provides a bitrate heatmap to help understand and quantify exactly how your clients consume bandwidth. With this information, you can determine how to manage server capacity during high-traffic events, or inform a choice about adopting 4K or 8K video.
Analytics can help your organization look forward and plan for future requirements. As more infrastructure is hosted in cloud services, transactions incur incremental per second and per-byte costs. Optimizing services that provide discrete amounts of data millions of times over (like video) can save significant amounts of money. Analytics give you the tools to make those decisions wisely. 
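As a worked example of that per-byte arithmetic, here is a rough CDN egress estimate. All figures, including the per-GB rate and viewing hours, are illustrative assumptions, not real pricing:

```python
def monthly_cdn_cost(viewers, hours_each, avg_bitrate_mbps, price_per_gb):
    """Back-of-the-envelope CDN egress cost; all rates are assumptions."""
    gb_per_hour = avg_bitrate_mbps * 3600 / 8 / 1000  # Mbps -> GB per viewing hour
    return viewers * hours_each * gb_per_hour * price_per_gb

# 100k viewers, 10 hours each per month, 5 Mbps average ladder, $0.02/GB:
base = monthly_cdn_cost(100_000, 10, 5.0, 0.02)
# Per-title encoding that trims the average delivered bitrate to 4 Mbps:
tuned = monthly_cdn_cost(100_000, 10, 4.0, 0.02)
print(round(base, 2), round(tuned, 2))  # 45000.0 36000.0
```

The 20% saving from a 1 Mbps reduction illustrates why bitrate-heatmap data, which shows how clients actually consume bandwidth, feeds directly into encoding decisions.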

Investing in the Right Tools and Resources

If you’re prioritizing high-definition video that’s mostly consumed while users are on their phones outside the house, granular analytics will guide you to solutions like implementing codecs such as HEVC, which focuses on transmitting better quality data using smaller network connections.
Pay attention to how many users are downloading high-quality, high-speed video and how much device fragmentation affects your user base, then apply that knowledge when weighing the stakes of investing in modern encoding innovations like per-title encoding, multi-codec streaming, and per-scene adaptation. You can also determine what your most popular content is, then optimize the delivery of just that content by providing additional encodings, targeting new users with your best material.

Getting the Most Value Out of Your Data

In a nutshell, granular video analytics allow you to efficiently make decisions about how to manage the costs and effects of your video delivery infrastructure. If you provide video as a service or rely on video to convey critical content, consider integrating Bitmovin analytics. Ensure your clients aren’t turned away by technical issues and be secure in the knowledge that you’re making informed decisions about preparing and delivering your content.
Start by signing up for a free trial period with Bitmovin to get a sense of the type of analytics available to you in the constantly evolving field of video content delivery.

How to Trust Your Player: Protecting Content from Origination to Playback
https://bitmovin.com/blog/how-to-trust-your-player-building-an-ott-service-for-todays-world-p5/ | Thu, 12 Nov 2020

The post How to Trust Your Player: Protecting Content from Origination to Playback appeared first on Bitmovin.

How to Trust Your Player: Building an OTT Service for Today’s World


Article 5 – From one end to the other: Protecting content from origination to playback, once and for all

  • Joshua Shulman, Digital Marketing Specialist, Bitmovin
  • Alan Ogilvie, Lead Product Manager, Friend MTS
  • Ali Hodjat, Product Marketing Director, Intertrust Technologies

Any player in the OTT world would have a hard time keeping up with the myriad of changes we have seen over the past several months: COVID-19. The dramatic increase in video consumption. The exponential rise in subscriptions to established OTT streaming services. New OTT streaming services. PVOD. Fragmentation of content. But enter the other player – the content pirate – and things become even more complicated. 
As we reviewed in our first article, the stakes are high – very high. A recent report from Parks Associates finds that the value of pirate video services accessed by pay-TV and non-pay TV consumers will exceed $67 billion (USD) worldwide by 2023. Another report from ABI Research estimates that more than 17% of worldwide video streaming users access content illegally. The impact on OTT streaming services is a direct and significant blow to the bottom line.

Securing OTT Content

To stay alive in this environment, OTT companies have no choice but to secure content delivery and playback at a multiplayer level, which includes:

  • Protecting content with technology within and around the video player: the consumer playback experience.
  • Protecting content from “players”: the pirates – the potential bad actors looking to compromise your service, and steal content. This is the human factor.

If you’re an OTT service launching premium exclusive content, don’t be the one that suddenly discovers your content appearing, and then being distributed through pirate services, within minutes of launch.

Digital Rights Management (DRM)

Often considered the cornerstone of content and revenue protection strategy, digital rights management (DRM) remains a critical part of an effective multi-prong system. In Article 2, Intertrust Technologies discussed the pros and cons of two DRM license acquisition models (direct acquisition model, from a license server, and proxy license acquisition model, from a proxy server).
Intertrust also discussed DRM best practices for leveraging a cloud-based DRM service to protect high-value streaming content. OTT operators must follow these to block the loopholes that hackers otherwise may use to defeat the purpose of DRM technology.

  • Multiple content encryption keys (CEK) – Setting different CEKs for the audio track, as well as for each video resolution, enables OTT streaming service providers to grant access to content distributed to different customers and devices. They can do this by delivering only the DRM licenses with CEKs for the authorized resolutions, based on the consumer’s subscription package.
  • DRM security levels – Defining the security tier of the DRM stack that is supported by the target device, with two relevant distinctions: software-based DRM client and hardware-based DRM client. Using the right DRM security level allows OTT streaming service providers to map the required security level for each given resolution or track.
  • Widevine Verified Media Path (VMP) – The requirement enforced by Google Widevine DRM is specifically relevant when a browser-based video player is used to decrypt Widevine-protected content. Given Google’s recent policy to strictly enforce the VMP requirement, Widevine license servers can only issue licenses for content decryption modules that support the VMP feature.
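The first two practices above can be sketched as a toy license-issuance policy. The level names loosely follow Widevine's L1–L3 convention, but the key IDs and the resolution-to-level mapping are invented for illustration, not a spec:

```python
# A separate CEK per track/resolution (key IDs are invented):
CONTENT_KEYS = {
    "audio":     "cek-audio-001",
    "video-sd":  "cek-sd-001",
    "video-hd":  "cek-hd-001",
    "video-uhd": "cek-uhd-001",
}

# Minimum DRM security tier per track: L3 = software DRM acceptable,
# L1 = hardware-backed DRM required. The mapping is an example policy.
MIN_SECURITY_LEVEL = {
    "video-sd":  "L3",
    "video-hd":  "L2",
    "video-uhd": "L1",
}

def keys_for_license(entitled_tracks, device_level):
    """Issue only the CEKs the subscription allows AND the device can protect."""
    order = {"L3": 0, "L2": 1, "L1": 2}
    issued = {}
    for track in entitled_tracks:
        required = MIN_SECURITY_LEVEL.get(track, "L3")
        if order[device_level] >= order[required]:
            issued[track] = CONTENT_KEYS[track]
    return issued

# A software-DRM (L3) device on an HD plan only receives audio + SD keys:
print(keys_for_license(["audio", "video-sd", "video-hd"], "L3"))
```

Because the HD key is simply never delivered to the weaker device, there is nothing for an attacker on that device to extract above SD quality.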

Securing the Playback Experience

Delivering high-value premium content to a web browser can be a risky venture, but one that is critical to reaching audiences today. Browser environments are amongst the farthest-reaching, but least secure, due to their open nature, and require some extra attention when implementing content protection systems.
Bitmovin highlighted in Article 3 how code obfuscation tools and techniques work in browser playback environments where website code (JavaScript) is interpreted and executed. The result is code that is extremely difficult to read and reverse-engineer, either by tinkerers or a more determined actor…such as a content pirate.
Yet code running in a web browser follows open JavaScript standards, so playback can never be completely secured. Someone with enough motivation, and time to spend gathering intelligence and doing research, will eventually be able to reverse-engineer your playback code. In reviewing its web player, Bitmovin detailed how concurrent-stream management and domain locking work as part of a complete defense strategy to deter attacks from content pirates.
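As one small illustration of domain locking, a license or configuration endpoint can refuse requests from unapproved sites. This is a server-side sketch with hypothetical hostnames; note that Origin/Referer headers are trivially spoofed outside a browser, which is exactly why this check is only one layer of a defense-in-depth strategy:

```python
from urllib.parse import urlparse

# Hostnames are hypothetical; in practice this allowlist would come
# from the player license configuration.
ALLOWED_DOMAINS = {"player.example.com", "www.example.com"}

def origin_allowed(origin_header):
    """Reject license/config requests whose Origin isn't on the allowlist."""
    if not origin_header:
        return False
    host = urlparse(origin_header).hostname or ""
    return host in ALLOWED_DOMAINS

print(origin_allowed("https://player.example.com"))   # True
print(origin_allowed("https://pirate-site.example"))  # False
```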
Finally, once an OTT provider has secured its distribution chain from source to the playback environment, and has followed best practices to secure the playback experience as much as possible, Bitmovin summarized three golden rules to boost users’ experience – and ultimately, your brand.

Watermarking and Monitoring

For all of its merits, the reality is that DRM only protects the delivery and distribution of content up to the point of consumption. In Article 4, Friend MTS showed that beyond DRM there is a need to detect pirated content, deter wrongdoers by identifying them in stolen content, and take action to stop further loss of revenue by disabling access to the service.
Although DRM protects the content until it arrives at its intended legitimate destination, additional precautions should be made to stop content from being redistributed by those who have no rights to do so.
Commonly, pirates capture content directly from the screen (using screen recording software) or from a device’s digital output with rights management removed. They’re able to rip the stream once it has been decrypted by an authorized device.
So, if DRM protects only the legitimate path from origination to the point of consumption, the OTT operator must protect the value of video content – whether original or rights-managed – outside of these service boundaries. How? Forensic subscriber-level watermarking can be employed on any video the service delivers, making it possible to identify the ‘subscriber’: your legitimate user. Combined with active monitoring of piracy groups and sites, suspected pirate material is identified through known reference fingerprints, and an extraction process recovers the subscriber-identifying data within the watermark. This can rapidly signpost the “bad actors”, from low-volume content sharers to industrial-scale pirates. Action can then be taken to stop the content from being accessed and used for piracy.
With an effective subscriber-level watermarking solution, you can close the loop and start to lock down piracy at its source.
Friend MTS reviewed the pros and cons of A/B variant (server-side) and client-composited (client-side) watermarking and looked at how they are deployed and function. Client-composited is the clear winner with its rapid detection of content theft, lower overall cost, reduced deployment complexity, faster time-to-market, and higher adaptability to attacks on watermarks.
In looking at the characteristics of an effective client-composited watermarking service, Friend MTS outlined its Advanced Subscriber Identification (ASiD) service, which has retained its agility to fend off attacks and has proven robustness in both broadcast and OTT environments. They highlighted the importance of a watermarking provider not only keeping up with the latest pirate schemes but staying ahead of them. They also detailed the key watermarking features of speed, global reach and ability to deliver through a multi-CDN service – all within the context of live sports and entertainment, pay-per-view and on-demand content.
Article 4 also highlights the need to understand the ‘human factor’ in your OTT service – the end-users who are consuming content. Friend MTS advised starting with a position of ‘zero trust’ for your users – assume some users of your service will attempt to circumvent security controls or use your service in a way you didn’t intend. Errant or undesired behavior within your service can be broken down into various ‘personas’ and the article takes you through several of these.
Once user behaviours are understood, you can plan your monitoring architecture, and how your business support systems should respond to service misuse.

Conclusion

Today’s OTT world is radically different than it was in early 2020. Bad actors abound. Content and revenue are at risk literally every minute of every day around the world. But you do not need to be a victim.
It’s possible to take steps upfront to secure content, working with a multi-pronged strategy that integrates DRM, client-composited forensic watermarking, player security, and robust monitoring to produce a real solution to the problem of content piracy. In today’s world, “end-to-end” is not just an IT buzzword. It’s a way of delivering streaming media to a playback client in the most secure and protective environment that we can achieve. 
______________________________________________________________________
Join us for our webinar on the 18th of November. We’ll be continuing the discussion on the content distribution chain and the importance of delivering streaming content in the most secure ways possible while protecting both your content and revenue.

______________________________________________________________________
“How To Trust Your Player” is a collaborative effort between Bitmovin, Friend MTS and Intertrust Technologies. The goal is to educate media and content providers on the importance of delivering streaming content in the most secure ways possible, from the video player to the end consumer, while protecting both content and revenue. 

Bitmovin

Bitmovin is a developer of video streaming technology. Built for technical professionals in the OTT video market, the company’s software solutions work to provide the best viewer experience imaginable by optimizing customer operations and reducing time to market.
Bitmovin’s solution suite – a video encoder, player, and analytics platform – lets content owners redefine the viewer experience through API-based workflow optimization, fast content turnaround, and scalability. 
Founded in 2012, the company is based in San Francisco, with offices in major cities in Europe, North America and South America. With more than 250 enterprise customers around the globe, Bitmovin helps power clients like BBC, fuboTV, Hulu Japan, RTL, and iFlix.

Friend MTS

Friend MTS helps media and entertainment businesses secure content so that revenue can grow and creativity can thrive. 
With advanced services that measure, monitor, detect, and disable content piracy, Friend MTS provides a 360-degree view of the constantly shifting content piracy protection ecosystem. The company stays a step ahead of ever-advancing and sophisticated content piracy behavior and technology with a sharp, deliberate, laser-focused commitment to continual monitoring and innovation.
Businesses and nonprofit organizations throughout the world recognize Friend MTS as the leading authority for content and revenue protection. The company also has donated its digital fingerprint technology to the International Center for Missing and Exploited Children to tackle child abuse content online.
Founded in 2000, Friend MTS is headquartered in Birmingham, England, with operations throughout Europe, the Middle East, Africa, Latin America, and North America. Friend MTS is the recipient of an Emmy® Award for Technology and Engineering, presented by the National Academy of Television Arts and Sciences (2018).

Intertrust Technologies

Intertrust provides the world’s leading digital rights management (DRM) cloud service with a complete ecosystem of security and rights management products. The company empowers businesses to securely manage all of their data and devices, regardless of location, format, or type – enabling innovative multi-party apps and services. 
Intertrust Media Solutions provides robust content protection solutions for media and entertainment. Intertrust ExpressPlay consists of a cloud-based multi-DRM service, broadcast TV security, and anti-piracy services with proven scalability in the largest OTT streaming platforms globally. 
ExpressPlay DRM™ is today’s most complete multi-DRM monetization service for OTT streaming, supporting Apple FairPlay Streaming, Google Widevine, Microsoft PlayReady, Adobe Primetime, and the open-standard Marlin DRM. Intertrust also offers ExpressPlay DRM Offline to enable secure streaming of premium content through an offline multi-DRM platform. 
Founded in 1990, Intertrust is headquartered in Sunnyvale, California, with regional offices in London, Tokyo, Mumbai, Bangalore, Beijing, Seoul, Riga, and Tallinn.

How to Trust Your Player #4: Beyond DRM – Video Watermarking
https://bitmovin.com/blog/how-to-trust-your-player-building-an-ott-service-for-todays-world-p4/ | Tue, 27 Oct 2020

The post How to Trust Your Player #4: Beyond DRM – Video Watermarking appeared first on Bitmovin.

How to Trust Your Player: Beyond Digital Rights Management – Video Watermarking Weighs In


  • Alan Ogilvie, Lead Product Manager, Friend MTS
  • Andy Wilson, Senior Product Architect, Friend MTS
  • Chris O’Brien, Engineering Manager, Friend MTS

In the continually evolving OTT world, we’ve established that savvy pirates are implementing new and advanced methods to steal valuable content – to the tune of more than $67 billion (USD) in value by 2023. Another report from ABI Research estimates that more than 17% of worldwide video streaming users access content illegally.
We also know that launching an OTT service is costly, resource-intensive, and complicated. Getting it right is critical. Beyond building the video consumption environment and acquiring content, companies must incorporate up-to-date content protection methods. In this “How to Trust Your Player” series, we’ve learned about digital rights management (DRM) from Intertrust Technologies, and about content packaging, license acquisition models, and best practices for implementation within the video player environment from Bitmovin.

Understanding Content Protection

But what about the other players? They are the users, the consumers of all this valuable content. To ensure content protection among these players, we have to look at watermarking. Working with OTT services throughout the world, we have seen companies protect their content at the front end with DRM, yet fail to implement readily accessible, advanced watermarking techniques to protect the content once it reaches the end user.
As a result, they are risking subscriber loyalty, growth, and revenue by not covering the last hole in the content delivery system. This scenario is one case where the overused “end-to-end” term is applicable: OTT companies must protect their content end to end in order to truly protect their content and revenue.

Protection Beyond DRM

So what’s an OTT service provider to do?
We know that DRM is absolutely necessary in this journey, and needs careful, considered implementation. As Intertrust pointed out in its article, “Securing Content Access with Digital Rights Management Best Practices”, recommended DRM best practices are essential to: 

  • Maintain a secure interface for delivery of content keys to the encoder and packagers;
  • Secure session tokens for authentication and authorisation;
  • Prevent attacks against the DRM license acquisition servers;
  • Make sure only verified browsers and players can access the media and DRM license in different devices.

A default option for any premium content service provider, DRM is designed to protect audio/video content during transit to the consumer’s player. As discussed in the above-mentioned article, DRM manages the robust content-encryption-key exchange between the secured playback device (the player) and the license service. DRM is also responsible for setting usage policies for the content and enforcing them within the playback environment. However, once the material has started playing, a new threat emerges: the consumer. A common misconception is that playback devices are secure.
DRM can do little to isolate pirated content, or identify the wrongdoers, when content is stolen and made freely available. Once content arrives at its intended legitimate destination, DRM can do nothing to stop it from being redistributed by those who have no rights to do so. The crux of the problem is that DRM protects only the legitimate path from origination to the point of consumption.
DRM protection workflow
See “Beyond DRM: The Complete Content Protection Story,” for further details.
It’s also important to understand that practices to curb sharing and theft of credentials (such as passwords) do not help reduce the distribution of content once it has escaped the boundaries of a video service.
In short, DRM is a key part of any rigorous approach to piracy defence. But if we want to talk about end-to-end protection, there’s more.

Enter Video Watermarking

To protect the value of video content – whether original or rights-managed – outside of these legitimate service boundaries, you’ll need to identify the video itself. Specifically, you’ll need information to confirm its outermost point of legitimate use. With that, you can identify the “bad actors”: the infringing users and industrial-scale pirates. 
To accomplish this, video providers can embed information into the video itself, at the point of origin, in the Content Distribution Network (CDN) during distribution, or within the player device. Information might include the device IP address, session details, and subscriber identifier.
The most effective way to do it? Client-composited (client-side) watermarking. It’s clever, as consumers can’t see the watermarks. Only automated analysis can. 
Client‑composited watermarking occurs within the consumer device. The embedded player calls a software library that returns a unique identifier. The watermark information is converted into a pattern, similar in concept to a QR code, and then “composited” with the video via an overlay.
Video watermarking, visualized
Source: Friend MTS. Image source: frames from (CC) Blender Foundation 
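To make the embed/composite/extract round trip concrete, here is a deliberately toy version of the idea. Real client-composited watermarks are far more sophisticated, imperceptible, and attack-resistant; this sketch only shows how a subscriber ID could become a grid of faint luminance offsets and be recovered by automated comparison against a reference:

```python
def id_to_pattern(subscriber_id, bits=16):
    """Subscriber ID -> flat list of 0/1 cells (a tiny 'QR-like' grid)."""
    return [(subscriber_id >> i) & 1 for i in range(bits)]

def composite(frame_luma, pattern, strength=2):
    """Nudge each cell's luminance up/down by a visually negligible amount."""
    return [y + (strength if bit else -strength)
            for y, bit in zip(frame_luma, pattern)]

def extract(reference_luma, marked_luma):
    """Automated analysis: recover the bits by comparing to the reference."""
    bits = [1 if m > r else 0 for r, m in zip(reference_luma, marked_luma)]
    return sum(b << i for i, b in enumerate(bits))

frame = [128] * 16                       # flat grey reference frame
marked = composite(frame, id_to_pattern(0xBEEF))
print(hex(extract(frame, marked)))       # 0xbeef
```

The consumer sees (in effect) nothing, but an operator holding the reference frame can read the subscriber ID straight back out of a captured copy.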
Client-composited watermarking is fast. Time to detection of content theft can be as little as a few seconds – important for any service, but particularly so for live sporting events. It’s also lower in cost than other watermarking options, such as A/B watermarking. 
For a more thorough discussion of watermarking methods, their advantages and disadvantages, see our “Subscriber Watermarking Technologies – White Paper Quick Facts.”

Best Practices in Video Watermarking: Detect, Deter, Disable

No matter which way you go with watermarking, you must keep the end goals in mind: to deter piracy, detect it when it occurs, and disable the source of the pirated content. The truth is that embedding watermarks alone is not very helpful unless there is a way to use the watermarks to find stolen video content, identify its source, and take appropriate action. Herein lies the hallmark of a robust watermarking solution.
Detecting involves monitoring suspected pirate outlets, and then matching the digital “fingerprint” of a suspected piece of content with a reference fingerprint that generates during the production process. Then, advanced watermarking analysis can see the identifying watermark and extract the information that it contains.
Deterrence is about defending against pirate “attacks.” To reduce the chances that an instance of stolen content can be traced back to its last legitimate distribution end point (or to the pirates themselves), content thieves may try to make the watermark unreadable by applying “transformations” to the content. However, a strong, advanced watermarking program has a far better chance of surviving these attacks and remaining readable.
Disabling is about acting on the incident after determining the source of a pirated video stream. This can include direct actions against the pirate, ranging from take-down notices to reporting to law enforcement. Typically, video providers take action against subscribers whose accounts they detect to be restreaming. Those actions might be interrupting the session, requiring the user to re-enter access credentials, suspending the end user’s account, disallowing the use of the device on the account, or even initiating legal action.

Choosing a Watermarking Service

What do you want from your watermarking service? What should you want from your watermarking service?
Deployment
How widely deployed is the service? How many set-top boxes and OTT players is it securing around the globe? In the OTT world, and in the content protection world, experience does count. Make sure you are getting a system with a proven, demonstrable track record in detecting, deterring and disabling piracy across multiple illegal redistribution channels. 
Strength against attacks
OTT players need to choose a watermarking service that is effective. How effective? Ask the provider for details. At Friend MTS, we know that our Advanced Subscriber Identification (ASiD) service has remained secure against every attack made to date in both broadcast and OTT environments.
Keep in mind that defending against attacks is a moving target. Your watermarking provider has to not only keep up with the latest pirate schemes, but stay ahead of them. Those bad actors are clever, and don’t always appear “bad” on the surface. In general, they use a legitimate subscription and easily available screen recording software for screen scraping – or even $10 (USD) switches that can strip HDCP. Commercial pirate distributors can easily capture video output, then re-encode and redistribute the premium video using their own infrastructure to monetise stolen content.
Fragmentation of content – which happens when consumers need to subscribe to more than one streaming service to get access to all the content they want to watch – makes it even harder for legitimate content owners and providers to compete with illegal subscription services. These pirate content aggregators, not restricted by licensing agreements, monetise stolen content by offering the end user a one-stop shop for the best sports and entertainment programming. 
Be sure the service you are considering is highly adaptable to ever-evolving pirate attacks.
Speed
As explained, client-composited watermarking provides the fastest identification of piracy. If you’re dealing with live sports and entertainment, pay-per-view, and on-demand content, this factor should play an important part in your decision on the type of watermarking system to deploy. Think about it in these terms: several years ago, a major broadcaster – the original source for 60% of the sports channel piracy in its market – introduced ASiD. OTT piracy dropped to less than 1% within weeks.
Global reach
With today’s technology and the speed of the Internet, OTT players will need to protect content in markets throughout the world. Even if you are servicing customers in one country or on one continent, remember that content thieves can and do act without physical borders.
Multi-CDN service
Some watermarking mechanisms may incur additional charges to support multi-CDN usage. Since OTT services have enough expense and complexity, know that it is possible to find a robust service that incurs no additional expenses for multi-CDN content delivery.
Every OTT operator will have its own criteria, but the bottom line is to carefully select a watermarking service that is cost-effective and results-driven. 

Understanding the Human Factor

One of the most challenging aspects of securing an OTT service is the understanding of the human factor in content protection: the end-users who are consuming content.
It is essential to start at a level of zero trust, assuming that some users of your service will attempt to circumvent security controls or use your service in a way you didn’t intend. This could mean something as simple as sharing their credentials with family or friends, or a more direct attack against your content security systems by bypassing/overcoming licensing restrictions.
To overcome this challenge, understand that the point of zero trust begins as early as sign-up for your service. Protection steps include validation of the presented user profile, location checks, payment fraud detection (such as comparison with other existing users), and enforcement of a suitably complex password with multi-factor authentication to prevent brute force attacks.
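As a rough illustration of those sign-up checks, a zero-trust on-boarding step might be sketched like this. The field names, thresholds, and password policy are hypothetical, not any real product’s rules:

```python
import re

def password_ok(pw: str, min_len: int = 12) -> bool:
    """Enforce a suitably complex password (illustrative policy)."""
    return (len(pw) >= min_len
            and re.search(r"[a-z]", pw) is not None
            and re.search(r"[A-Z]", pw) is not None
            and re.search(r"\d", pw) is not None)

def signup_risk_flags(profile: dict) -> list:
    """Zero-trust checks at sign-up; all field names are hypothetical."""
    flags = []
    if not password_ok(profile.get("password", "")):
        flags.append("weak-password")
    if profile.get("signup_country") != profile.get("payment_country"):
        flags.append("location-mismatch")           # location check
    if profile.get("card_seen_on_other_accounts"):  # payment fraud signal
        flags.append("shared-payment-method")
    if not profile.get("mfa_enrolled"):
        flags.append("no-mfa")                      # brute-force exposure
    return flags
```

A real service would feed these flags into a risk score rather than hard-blocking, but the principle is the same: trust nothing at sign-up by default.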

Video Viewer Personas

Errant or undesired behavior within your service can typically be broken down into the following personas.
The Over-Consumer
Running an OTT service is expensive. The cost of delivering compressed video to your consumers is one of the most costly aspects, even with high competition driving CDN pricing down. Your service pricing and tiers model against costs, and per–user delivery/CDN cost – driven by view time per user session – is a major factor. Is a user’s consumption patterns far more than your predicted model suggests? That could indicate the “over-consumer”. 
The Frequent Mover
Here, an authenticated and authorised user’s sessions change IP addresses frequently in a short period of time, spanning multiple geographies. This is a good indication of a compromised account, with multiple users accessing the service unbeknown to the legitimate account holder.
The Account Sharer
The Account Sharer is characterised by multiple authentication authorisations over time, with different IP addresses/ISPs, and possibly different geographies. As with the Frequent Mover, this pattern could indicate a compromised account. But, it is also possible that a legitimate user has shared their credentials with friends and family – or worse, with a much wider group.  
The Out-of-Bounds Viewer
In this case, the user viewing the content is outside of a designated geographic area. Initial authorisation attempts may have been genuine, but other data sources may reveal the user’s true location.
The Anonymous IP Viewer
The Anonymous IP Viewer’s traffic comes from a suspected or known proxy/VPN, or from a suspect network source (e.g. a cloud infrastructure vendor rather than an ISP).
The Long Viewer
This user watches only live channels, for very long periods in one session. 
The Tamperer
The Tamperer’s session data indicates tampering with the playback environment. Tamper warnings from the code obfuscation solution may have fired. Session token data mismatches may have been logged. You may also see multiple authorisation attempts, and multiple content license request attempts for a single-use token.
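Several of the personas above can be flagged from the same session telemetry. Below is a simplified sketch, with invented thresholds, of how an analytics pipeline might mark Frequent Movers and Account Sharers from (timestamp, IP, country) events:

```python
from datetime import datetime, timedelta

def session_flags(events, window=timedelta(hours=1)):
    """events: time-sorted list of (timestamp, ip, country) for one account.
    Thresholds are illustrative; a real service would tune them empirically."""
    flags = set()
    for t0, _, _ in events:
        recent = [e for e in events if t0 <= e[0] <= t0 + window]
        ips = {ip for _, ip, _ in recent}
        geos = {c for _, _, c in recent}
        # Frequent Mover: several IPs spanning geographies in a short window
        if len(ips) >= 3 and len(geos) >= 2:
            flags.add("frequent-mover")
            break
    all_ips = {ip for _, ip, _ in events}
    all_geos = {c for _, _, c in events}
    # Account Sharer: distinct IPs and geographies over the whole history
    if len(all_ips) >= 2 and len(all_geos) >= 2:
        flags.add("account-sharer")
    return flags
```

In practice these signals overlap, which is why they feed human review or graduated enforcement rather than automatic account suspension.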
From sign-up forward, every component within your service should provide user behaviour monitoring to aid in the identification of patterns that could indicate fraudulent or suspicious activity. This analysis is important to protect your interests under the terms of your content licensing deals – and critically important for revenue protection.

Using Watermarks for End-to-End Protection

To combat the increasing number of piracy attacks, OTT services must implement solid watermarking and detection as well as DRM. There’s a lot at stake: content, revenue, and brand – and even investment in the delivery infrastructure of systems, software, operations, and technical support.
Start by developing a thorough understanding of your full content protection strategy, then follow the considerations and best practices we’ve outlined to choose and implement a watermarking service. Only then can you make sure that your players – from one end to the other – are as trustworthy as the technology you’ve implemented.
Check out the corresponding fireside chat:

Visit How to Trust Your Player

Check out the full blog series below:

View the webinar on the How to Trust Your Player Page
Download this article as a PDF
Download the full series as a PDF
_________________________________________________________________
How To Trust Your Player is a collaborative effort between Bitmovin, Friend MTS and Intertrust. Our goal is to educate media and content providers on the importance of delivering streaming content in the most secure ways possible from the video player to the end-consumer while protecting both their content and revenue. 

The post How to Trust Your Player #4: Beyond DRM – Video Watermarking appeared first on Bitmovin.

]]>
How to Improve Viewers’ Quality of Experience (QoE) While Cutting Storage and Delivery costs (ft. Teleport Media) https://bitmovin.com/blog/low-cost-hq-qoe-teleport-media/ Tue, 13 Oct 2020 08:00:38 +0000 https://bitmovin.com/?p=131428 This article was a collaborative article written by Bitmovin & Teleport Media | Authors: Andrei Klimenko, CEO, Teleport Media & Joshua Shulman, Content Marketer, Bitmovin Streamed content is the future of video viewing. With that in mind, the quality of the video content and the viewers’ quality of experience (QoE) become key success factors for...

The post How to Improve Viewers’ Quality of Experience (QoE) While Cutting Storage and Delivery costs (ft. Teleport Media) appeared first on Bitmovin.

]]>
This article was a collaborative article written by Bitmovin & Teleport Media | Authors: Andrei Klimenko, CEO, Teleport Media & Joshua Shulman, Content Marketer, Bitmovin
Streamed content is the future of video viewing. With that in mind, the quality of the video content and the viewers’ quality of experience (QoE) become key success factors for an OTT service to maintain and grow a loyal audience. 
how to improve qoe featured image
As of late, there’s been a major industry-wide shift towards reducing the cost of operations – especially as a result of the COVID-19 pandemic. However, with more consumers at home, there is also a much larger demand for improved viewer experience. This was reflected in Bitmovin’s latest video developer report, which identified controlling costs as the #1 challenge and viewer engagement (experience) as the #2 opportunity for innovation.

QoE_Biggest Video Developer Challenges_Bar Graph
Biggest industry challenges according to Bitmovin’s Video Developer Report 2020/21
QoE-Opportunites for streaming innovation-VidDevReport_BarGraph
Largest opportunities for innovation according to Bitmovin’s Video Developer Report 2020/21

The Broadcast Decision-Makers’ QoE Dilemma

The reality is that the adoption of any range of products and tools for video developers aimed at upgrading stream quality while improving bitrate expenditure can quickly lead to rising costs in your OTT workflows. As a decision-maker, you’re faced with the ultimate dilemma: How can I maintain the maximum quality of experience (QoE) without breaking the bank to build a successful OTT service?
What if there’s a way to have your cake and eat it too? Bitmovin and Teleport Media work hand-in-hand within the most complex and expensive part of your OTT supply chain (outside of content production) – the storage and delivery phase. Our respective teams of video experts work every day to develop and improve software solutions that maximize your audience reach, improve visual quality, and achieve maximum cost-efficiency. 
In this article, we explain how to escape the broadcast decision-makers’ dilemma by showing how to implement next-gen video optimization solutions that reduce time to market, improve video quality, and prevent negative effects from video consumption spikes – gaining you more happy users, all while cutting costs on storage and delivery. 

Key drivers of video streaming costs outside content production

Video files often vary significantly in size based on two major factors – quality and length – and most OTT services offer a variety of different types of content, from short-form animations to long-form live-action films. Regardless of the type of content that an OTT service chooses to deliver to its consumer base, one of the most important things to budget for is the amount of storage that the organization will need to run its business effectively and efficiently. 
The delivery cost of a content library is effectively multiplied by the number of viewers that a given Content Delivery Network (CDN) serves – and, naturally, higher-resolution files also rack up a significant storage cost. Without proper management of a content library, an organization will quickly spend its full CDN bandwidth, limiting how many concurrent viewers can use your platform or service at a given moment.
This is most dangerous during unexpected traffic spikes – for example, viral content that exhausts a pre-paid CDN plan (especially in scenarios where in-house CDN resources or multi-CDN strategies are limited), resulting in a costly purchase of additional capacity. A common market solution is to reduce the visual quality of the content, resulting in a much lower Quality of Experience (QoE) for the audience. Although cost-effective, visual quality reduction is one of the top reasons that your audience will churn in search of a higher QoE. Failing to reduce the size of your content, however, will cause other QoE issues, like playback failures (slower start-up time, loading errors) or rebuffering.
In short, there are five main drivers of video streaming cost that affect the total cost of operations (TCO):

  1. File size
  2. Storage capacity
  3. CDN price
  4. Visual quality/bitrates
  5. Audience size
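To see how these drivers interact, a back-of-the-envelope cost model helps. The formula and all rates below are illustrative assumptions only, not real CDN or storage pricing:

```python
def monthly_delivery_cost_usd(avg_bitrate_mbps: float,
                              viewers: int,
                              view_hours_per_viewer: float,
                              cdn_price_per_gb: float,
                              storage_gb: float = 0.0,
                              storage_price_per_gb: float = 0.02) -> float:
    """Rough TCO sketch: egress is dominated by bitrate x watch time x audience."""
    gb_per_hour = avg_bitrate_mbps * 3600 / 8 / 1000  # Mbps -> GB per hour
    egress_gb = gb_per_hour * view_hours_per_viewer * viewers
    return egress_gb * cdn_price_per_gb + storage_gb * storage_price_per_gb
```

For example, a 5 Mbps average rendition watched 10 hours a month by 10,000 viewers at a hypothetical $0.01/GB works out to roughly $2,250 in monthly egress – which is why bitrate (driver 4) and audience size (driver 5) dominate the bill.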

The ultimate challenge is to address the first four drivers without losing customers and without massively increasing an OTT budget that could otherwise be reserved for content production and marketing. This is where winning broadcasters thrive – those that find and implement solutions that redefine the viewer experience with low-latency streaming while reducing their total cost of operations (TCO). Although these priorities apply to organizations of all sizes, established platforms and services should seek to optimize their operations with an efficient and flexible video infrastructure, whereas new OTTs and broadcasters must focus on reducing their time-to-market and delivering high-quality content on as many devices as possible.
Bitmovin’s Adaptive Video Player, ML-enabled Cloud Encoding, and Video Analytic solutions paired with Teleport Media’s decentralized CDN deliver a unique workflow that reduces time-to-market and TCO, all while maintaining (if not improving) QoE.

Efficiently compressing content for QoE with Bitmovin

One of the most efficient ways any OTT organization can optimize its operations is with cloud-based per-title encoding. Per-title encoding is the method of customizing the bitrate ladder of individual videos based on the complexity of the content therein. The ultimate goal of any per-title-based encode is to algorithmically select the optimal bitrate with a pre-defined codec that will deliver a perfect viewing experience without overspending on data delivery or storage.

QoE-Per-title-workflow-illustration
Per-title encoding workflow

This type of encoding is best suited to larger content libraries or those with varying types of content, because it strips away detail that’s beyond human visual perception. To test which bitrate ladder is optimal for each piece of content, it’s recommended to use quality metrics like PSNR or SSIM. There are also systems that use machine learning to automatically select the ideal bitrates, otherwise known as an adaptive bitrate (ABR) ladder.
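As a sketch of the idea (not Bitmovin’s actual per-title algorithm), rung selection can be reduced to “pick the cheapest probe encode that clears a quality threshold.” The probe measurements and the 40 dB PSNR target are made-up numbers:

```python
def pick_rung(measurements, min_psnr_db=40.0):
    """measurements: [(bitrate_kbps, psnr_db), ...] from test encodes of one title.
    Return the cheapest bitrate that still meets the quality target;
    fall back to the highest probed bitrate if none does."""
    viable = [b for b, q in measurements if q >= min_psnr_db]
    return min(viable) if viable else max(b for b, _ in measurements)

# Hypothetical probe encodes of one title at several bitrates:
probes = [(1500, 37.2), (2500, 40.4), (4000, 42.1), (6000, 42.5)]
```

A complex sports title would need a higher rung to hit the same PSNR than a simple animation would, which is exactly the per-title effect: the ladder adapts to the content instead of being fixed per catalogue.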
Alternatively, OTT services and broadcasters can opt for multi-pass encoding techniques, which, as the name suggests, “simply” encode video files multiple times in the least lossy way possible. Although not as efficient as per-title encoding, multi-pass encoding offers the benefit of compression at scale and in bulk. Both options are great for optimizing your video workflows towards faster time to market while maintaining quality, but per-title encoding delivers better overall efficiency, whereas multi-pass is best suited for speed.
Once a video content library is compressed, a provider must find a way to efficiently deliver their content.

Decentralized video delivery with Teleport Media 

The next step to delivering high-quality content at scale is selecting a delivery method with a CDN provider. And much like top compression techniques, CDNs with efficiency, quality, and cost in mind are shifting to cloud-based architectures with decentralized delivery solutions. Based on WebRTC technology, which has become a default feature of any modern internet-connected device, decentralized solutions like Teleport Media’s adaptive and secure peer-to-peer delivery system enable viewers’ devices to restream content between one another.

QoE-Traditional CDN vs Teleport Media P2P CDN Workflow-illustration
Traditional CDN Workflow vs Teleport Media’s P2P Workflow

This method reduces delivery costs by vastly reducing the amount of traffic that actually flows from CDNs to viewers. The player-side P2P CDN architecture is protected from pirates and restreaming and is fully DRM compliant. Unlike a traditional CDN, the P2P content delivery network doesn’t contain any servers that create a bottleneck when video has to be delivered at scale. It operates as a video player plug-in that handles traffic delivery from many sources simultaneously – both other viewers’ devices and the origin CDN. Teleport Media JS is compatible with a wide range of HTML5 video players, like Bitmovin’s player, and is designed for HTTP adaptive streaming. The P2P CDN works effectively in browser and mobile applications, for both live and VoD content.

Using Teleport Media with in-house and third-party CDNs

As traffic varies significantly throughout the day for almost any OTT video provider, it can be difficult to efficiently manage content delivery, especially during surges.

QoE-Effects on OTT weekly viewership-graph
OTT weekly viewership profile before implementing Teleport Media (100% indicates the traffic maximum on a weekend)

Teleport Media’s decentralized architecture can leverage an in-house CDN and make it serve five times the audience with the same number of servers and the same bandwidth. It also decreases overall delivery costs, making third-party CDNs nearly unnecessary. When an OTT provider doesn’t own an in-house CDN and buys delivery from CDN vendors from the first byte of data, Teleport Media can cut the delivery budget by up to 40% while maintaining premium quality.
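Savings figures like these can be sanity-checked with simple blended-cost arithmetic. The per-GB prices and the 70% offload ratio below are placeholders, not Teleport Media’s actual numbers:

```python
def blended_cost(total_gb, cdn_price_per_gb, p2p_price_per_gb, offload_ratio):
    """Delivery cost when offload_ratio of traffic is served P2P instead of via CDN."""
    cdn_gb = total_gb * (1 - offload_ratio)
    p2p_gb = total_gb * offload_ratio
    return cdn_gb * cdn_price_per_gb + p2p_gb * p2p_price_per_gb

baseline = blended_cost(100_000, 0.01, 0.004, 0.0)   # all CDN
with_p2p = blended_cost(100_000, 0.01, 0.004, 0.7)   # 70% offloaded to peers
savings = 1 - with_p2p / baseline                    # fraction of budget saved
```

With these placeholder rates, offloading 70% of traffic cuts the bill by roughly 42% – the savings scale with both the offload ratio and the gap between CDN and P2P per-GB costs.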

QoE-Weekly Viewership Profiles-Graphs_2
OTT weekly viewership profile after implementing Teleport Media (blue indicates all traffic offloaded in P2P)

How decentralized CDN architecture lowers rebuffering

Rebuffering is a good indicator of trouble on an in-house or external CDN: the larger the audience, the less bandwidth per viewer and the higher the chance of rebuffering.
Teleport Media provides each viewer with multiple connections to other peers and constantly measures their quality. If for any reason P2P connections slow down and put the video player at risk of rebuffering, the viewer is switched to the origin CDN until the buffer is restored.
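That fallback behaviour amounts to a small source-selection rule in the player plug-in. Here is a hedged sketch with invented thresholds (the real logic would also weigh continuously measured peer quality):

```python
def pick_source(buffer_seconds, peer_throughput_kbps, segment_bitrate_kbps,
                low_buffer=4.0, headroom=1.2):
    """Fetch from peers only while the buffer is healthy and peers are fast
    enough to sustain the current rendition (with some headroom); otherwise
    fall back to the origin CDN until the buffer is restored."""
    if buffer_seconds < low_buffer:
        return "cdn"   # buffer at risk: guarantee delivery via origin
    if peer_throughput_kbps < segment_bitrate_kbps * headroom:
        return "cdn"   # peers too slow for this rendition
    return "p2p"
```

The headroom factor is the key design choice: switching back to the CDN slightly before the buffer actually drains is what keeps the rebuffering rate low during traffic spikes.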

QoE-Rebuffering Spikes on TM during Peak hours-Line Graph
Rebuffering rates during traffic spikes. The P2P CDN keeps a 4x lower rebuffering rate compared to other premium CDNs

Cost-Efficient End-to-End Content Delivery with Bitmovin and Teleport Media

The combination of Bitmovin’s per-title (or multi-pass) encoding solutions and Teleport Media’s decentralized CDN architecture is designed to yield optimal viewer QoE while reducing the cost of operations, without compromising the content’s resolution or disrupting an existing video infrastructure. The adaptively compressed, least-lossy content will be delivered to nearly every user on every device. By combining Bitmovin’s and Teleport Media’s solutions you’ll get:

  • Higher-quality content at lower bitrates (even for users in regions with low bandwidth capacity)
  • Lower storage volume with compressed & re-streamed content using adaptive bitrate ladder renditions
  • Top-quality playback during traffic spikes of any capacity

This was most recently demonstrated by Russian VoD service Okko.TV, which was able to scale its content delivery to support 4K and UHD content across a wide span of devices. To further lower implementation complexity, Bitmovin recently added a direct plug-in to its HTML5 web player.
If you’d like to further discuss how to improve visual quality while cutting storage and delivery costs, do reach out:

The post How to Improve Viewers’ Quality of Experience (QoE) While Cutting Storage and Delivery costs (ft. Teleport Media) appeared first on Bitmovin.

]]>
Bitmovin’s Intern Series: Finding Memory Leaks in Java Microservices – Part 2 https://bitmovin.com/blog/java-memory-leak-detection/ Tue, 24 Sep 2019 14:33:28 +0000 https://bitmovin.com/?p=65075 Welcome back! In our first instalment of the Memory Series – pt. 1, we reviewed the basics of Java’s Internals, Java Virtual Machine (JVM), the memory regions found within, garbage collection, and how memory leaks might happen in cloud instances. In this second post we cover more in-depth facets of cloud storage as it pertains...

The post Bitmovin’s Intern Series: Finding Memory Leaks in Java Microservices – Part 2 appeared first on Bitmovin.

]]>
- Bitmovin

Welcome back! In our first instalment of the Memory Series – pt. 1, we reviewed the basics of Java’s Internals, Java Virtual Machine (JVM), the memory regions found within, garbage collection, and how memory leaks might happen in cloud instances.
In this second post we cover more in-depth facets of cloud storage as it pertains to Bitmovin’s encoding service and how to mitigate unnecessary memory loss. So let’s dive right in!

Investigating Java Memory Issues

When it comes to preventing memory issues, questionable code should not even make it into production. To help with this, static code analysis should be executed to provide feedback on which pieces can be improved to prevent future bugs or performance problems. For the sake of early recognition, this analysis should happen the moment a developer pushes a change to the code repository. Popular solutions for this are SonarQube and SpotBugs. These programs can find potential issues such as unclosed streams, infinite loops, etc. However, there is still a possibility that this first inspection fails and a hazardous piece of code makes it into the application, which may result in high memory usage. Additional leaks could result from issues within third-party integrations, where other leaks or unsafe calls through the Java Native Interface (JNI) are made and are hard to debug after the fact. So keep in mind that it is not necessarily your code that is leaking.
To identify the root cause of a memory issue, developers can use a technique called memory profiling. Over the years, many approaches and tools have been developed to support memory profiling; Java applications can use command-line programs such as jcmd or jmap, which come with the Java Development Kit (JDK). Alternatively, there are a handful of dedicated graphical profilers as well as GC-log analyzers available.
The very first step to analyzing an application suspected of having a memory leak is to verify your metrics. Questions to support your analysis include:
Which abstraction reports the problem?

  • Kubernetes Pod
  • Docker container
  • Native process

Which metric was used for memory usage?

  • RSS, Virtual etc.

Can the problem be verified by using another tool, script or metric?
These questions are necessary to determine the root cause of memory leaks, as the Kubernetes Pod can host multiple processes in addition to Java. Virtual memory, however, is not necessarily a good metric to monitor, as it can be difficult to determine whether or not it’s backed by physical memory. Resident Set Size (RSS) is a measurement of “true” memory consumption of a process by not including swapped pages, therefore indicating physical RAM usage. RSS should be considered for monitoring along with other relevant metrics; such as the working set, which includes a Linux page cache and is used by Kubernetes (and many other tools) to report memory usage. In circumstances where limits are defined within the Kubernetes environment, this metric is used to determine which Pod can be evicted in favor of another that wants to allocate more memory.
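On Linux, RSS can be cross-checked without any profiler by reading `/proc/<pid>/status`. A minimal parser for that file’s documented VmRSS field:

```python
import re

def vm_rss_kb(status_text: str) -> int:
    """Extract resident set size (kB) from /proc/<pid>/status content."""
    m = re.search(r"^VmRSS:\s+(\d+)\s+kB", status_text, re.MULTILINE)
    if m is None:
        raise ValueError("VmRSS not found")
    return int(m.group(1))

# On a live system (pid assumed known):
# with open(f"/proc/{pid}/status") as f:
#     print(vm_rss_kb(f.read()) // 1024, "MB resident")
```

Comparing this value against the container runtime’s and Kubernetes’ reported numbers is a quick way to answer the “can the problem be verified by another tool or metric?” question above.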
The next step in the analysis process is to enable verbose GC-logging and Native Memory Tracking (NMT) to gain additional data about the application in question. Both options can be configured via JVM parameters as shown below.

java -Xloggc:gclog.log \
    -XX:+PrintGCDetails \
    -XX:+PrintGCApplicationStoppedTime \
    -XX:+PrintGCApplicationConcurrentTime \
    -XX:+PrintGCDateStamps \
    -XX:+UseGCLogFileRotation \
    -XX:NumberOfGCLogFiles=5 \
    -XX:GCLogFileSize=2000k \
    -XX:NativeMemoryTracking=summary \
    -jar Application.jar

Once the JVM parameters are implemented, all Garbage Collector (GC) activities are logged to a file called gclog.log within the application directory. This log is automatically rotated at a file size of 2 MB and maintains a maximum of five log files before rotating once again. The GC log can be analyzed by hand or visualized using a service like GCeasy, which generates extensive reports based on the logs. Note that GC logging can even be enabled in production, since its impact on application performance is negligible.
The second feature enabled by the listed command is called Native Memory Tracking (NMT) and is helpful to determine which memory region (heap, stack, code cache or reserved space for the GC) uses memory excessively. Once an application with NMT enabled has started, the following command will return a comprehensive overview of the reserved and committed memory. Spikes or anomalies within this output can point your investigation in the right direction.

jcmd <pid> VM.native_memory summary
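Spikes are easier to spot when the summary is parsed into numbers. A small sketch that extracts reserved/committed sizes per region from typical NMT summary lines (the exact output format varies between JVM versions, so treat the regex as an approximation):

```python
import re

# Matches lines like: "-   Java Heap (reserved=5242880KB, committed=307200KB)"
NMT_LINE = re.compile(r"-\s+(.+?) \(reserved=(\d+)KB, committed=(\d+)KB\)")

def parse_nmt_summary(text: str) -> dict:
    """Map NMT region name -> (reserved_kb, committed_kb)."""
    return {name.strip(): (int(r), int(c))
            for name, r, c in NMT_LINE.findall(text)}

# Sample lines in the shape of a typical HotSpot NMT summary:
sample = """
-                 Java Heap (reserved=5242880KB, committed=307200KB)
-                     Class (reserved=1103125KB, committed=59413KB)
-                    Thread (reserved=22654KB, committed=22654KB)
"""
```

Snapshotting these numbers periodically and diffing them between runs makes a steadily growing non-heap region (thread stacks, class metadata, code cache) stand out immediately.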

Regardless of the outcome from previous actions, the next step should always be a dynamic profiling session of the running application. You can find dedicated tools such as YourKit Java Profiler & Java Flight Recorder (JFR), or an open-source program like Oracle’s VisualVM to help optimize your profiling session. Tools like JFR and VisualVM integrate well with the most common Java Integrated Development Environments (IDE), while YourKit’s solution provides the additional ability to profile containerized applications as seen below.
- Bitmovin
Profiling production applications can be very tedious, especially in scenarios where programs are deployed to an external cloud provider and encapsulated within a container. In this case, the Dockerfile has to be revised to inject a dedicated profiling agent in the container and a network tunnel must be established to access the agent of the containerized JVM. Depending on the application and agent, this can bring a performance impact of 1-5%. In Bitmovin’s case, we were seeing much longer application start times, but once initialized, only a subtle impact was noticeable. The obvious solution to avoid this would be to execute the leaking program locally and profile in this way. Unfortunately, the leak might not be reproducible this way due to a lack of traffic or because the code path which is leaking memory is only executed in rare scenarios that only occur during production.
Alternatively, a heap dump can be created of a running application. A heap dump is a snapshot of all objects located in-memory at a specific point in time and is typically stored in a binary format using the .hprof file extension. Capturing a memory dump can be accomplished using the jmap command which is part of the JDK. In our experience, most profilers provide the functionality to create a heap dump during a live-profiling session.

jmap -dump,format=b,file=<file-path> <pid>

The time it takes to create a heap dump obviously depends on the amount of memory an application has allocated. Please keep in mind that during the whole capture process, the application will be halted and unresponsive! In addition, a heap dump triggers a full GC which is fairly expensive in terms of CPU time and has a short-term impact on responsiveness. Therefore, this option should be used sparingly and with caution. Still, a memory dump provides an in-depth view of allocations and can be interpreted using profiling tools to perform advanced root cause analysis.

Service A

Now that we have wrapped up the theory and concepts of JVM memory management and how to properly apply profiling techniques, it’s time to take a look at a real-world example. The first application we are reviewing, Service A, is a web API that handles REST requests for creating, configuring and starting encodings. This application had the tendency to continuously consume more memory until maxing out at 2 GB per Kubernetes Pod. Additionally, the heap (which typically consumes most of the JVM’s memory) only had a peak usage of around 600 MB. This microservice has been around for a long time and has a fairly complex code base. Service A also leverages multiple third-party dependencies and is built upon a legacy Spring Boot version, as shown in the table below.

Java Version | Spring Boot Version | JVM Parameters | Memory Usage / Pod
Java 8       | 1.5.x               | -Xmx5536m      | 2.0 GB

Based on the analysis process described before, we started the application with GC logs enabled and looked at the output of the NMT summary. It became clear that the heap never used up its full capacity of 5 GB. Instead, peak usage was around 600 MB without the metaspace.
Since live profiling in our environment is not easily possible without a significant impact on application startup time (which leads to timeouts), we skipped this step and analyzed a production heap dump instead. This was possible because, as of the free-to-use OpenJDK 11, JFR is included in the JVM and does not have to be side-loaded with a dedicated agent. For analysis, a Pod with relatively high memory consumption (1.5 GB) was selected. When opening the snapshot in YourKit, the Memory view was automatically pre-selected, and this is where the analysis of the dump began.
- Bitmovin
In the image above, YourKit displays the objects contained within the memory dump, grouped by their corresponding classes. The top-level entry java.lang.ref.Finalizer is shown with a retained size of around 1 GB. However, retained size is a rather complicated metric and is only estimated upon loading the heap dump into YourKit. To calculate the exact size, click the “Calculate exact retained sizes” button in the top right corner. Once the calculations are complete, the list is automatically updated, and the Finalizer entry drops out of view (it shrinks to 3 MB). As this shows, the estimated retained size can be misleading.
- Bitmovin
Looking at the top entries after determining the exact retained sizes shows that char[], java.util.HashMap, and byte[] consumed a significant amount of memory. Yet one class is of special interest for this analysis: com.bitmovin.api.encodings.rest.helper.EncodingResolver. YourKit shows that there is only one instance of this class and that it uses almost 200 MB of RAM. We analyzed this behavior further by right-clicking the list entry and choosing Selected Objects, which opens another view displaying the class’s inner structure, as shown in the image below.
- Bitmovin
The field muxingCache is of type java.util.HashMap and contains 27,511 entries. After reviewing the source code it was clear: the EncodingResolver class implements the singleton pattern, and objects were continuously added to the muxingCache, which is never flushed. After discussing these findings internally, it turned out that in earlier versions of the service the EncodingResolver class was not a singleton; the leak was introduced when that change was made. We had found our very first memory leak!
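The pattern we found can be reduced to a few lines. The sketch below is hypothetical: the names EncodingResolver and muxingCache echo the article, but the code is purely illustrative and not Bitmovin’s actual source.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a singleton whose cache is never evicted.
final class EncodingResolver {
    private static final EncodingResolver INSTANCE = new EncodingResolver();

    // Every lookup inserts an entry; nothing is ever removed, so the
    // singleton pins all entries for the lifetime of the JVM.
    private final Map<String, Object> muxingCache = new HashMap<>();

    private EncodingResolver() {}

    static EncodingResolver getInstance() { return INSTANCE; }

    void resolve(String muxingId) {
        muxingCache.computeIfAbsent(muxingId, id -> new Object());
    }

    int cacheSize() { return muxingCache.size(); }
}

public class LeakDemo {
    public static void main(String[] args) {
        for (int i = 0; i < 10_000; i++) {
            EncodingResolver.getInstance().resolve("muxing-" + i);
        }
        // When the class was not a singleton, instances (and their caches)
        // could be collected; as a singleton, the cache only ever grows.
        System.out.println(EncodingResolver.getInstance().cacheSize()); // prints 10000
    }
}
```

A bounded cache (for example an LRU built on LinkedHashMap’s removeEldestEntry) or explicit eviction would fix this pattern.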
Further tweaking of the service’s JVM parameters and heap boundaries led to a more stable and predictable memory usage pattern. The most drastic change, however, was reducing the maximum heap space from 5 GB to 1.5 GB. This forced the JVM to perform GC more frequently and slightly increased CPU usage, but it also shrank the memory footprint to a manageable amount.

Service B

Our second analysis case, Service B, differs greatly from Service A: it is a relatively new application and therefore uses a more recent Spring Boot version. As a result, its code base is significantly smaller and the workload it must handle is not as heavy as Service A’s.

Java Version  Spring Boot Version  JVM Parameters  Memory Usage / Pod
Java 8        2.x                  -Xmx512m        6.4 GB

The first question that immediately came to mind when looking at the numbers above was: how can a service with a heap boundary of 512 MB consume 6.4 GB of memory? This is a pretty good indicator that the heap might not be the problem. For quality and consistency, GC logging and NMT were enabled to verify this assumption. Shortly after deploying the instrumented application, we observed a continuously growing memory usage graph. As expected, the GC logs and NMT summary clearly showed that neither the heap (which was stable at around 300 MB) nor native memory was leaking. So where did the other 6.1 GB go?
Local profiling, stress testing, and a heap dump led nowhere and were of no help to our analysis. Hence, it was time to go back to square one and rethink our strategy. This yields an important lesson that was mentioned earlier: verify the metrics that report the memory issue.
As it turns out, we had only examined the Grafana dashboard and used the kubectl command to get the memory consumption of the Pods, assuming that the reported value was consumed solely by the JVM within the container. However, after attaching to the container and looking at RAM utilization on a per-process basis, we determined that the JVM only consumed around 600 MB. Surprisingly, the remaining 5.8 GB still did not show up in any statistics.
Eureka! This is when we had a breakthrough and thought of caching. We suspected that the Linux page cache had something to do with the memory consumption. As a matter of fact, Linux keeps files cached in memory to speed up read and write operations. When no more physical memory is available and a process requires additional memory, Linux automatically shrinks this cache to provide the required resources. This realization left us with one final question: which files could possibly consume up to 6 GB of cache? The answer is relatively straightforward: logs! Navigating to the directory within the container where the application stores its logs, the disk usage utility reported a total file size of 6 GB. The following image shows what happened when we deleted the log files. Et voilà, memory usage went down from 6.4 GB to around 620 MB!
- Bitmovin
We came to the conclusion that references to the logs, or the files themselves, were still part of the cache, which explains the sudden drop in memory usage when the logs were deleted. To fix this issue, a proper log-rotation policy must be implemented. However, this turned out to be more challenging than expected with the current Spring Boot version, as no upper limit for the number of archived log files can be defined. Note that archived logs can be deleted from day one of launch, due to how Spring Boot uses the underlying logging framework, Logback.

Conclusions

Over the course of researching and performing tests for this article, it has become very clear that finding memory leaks is a complex topic. When reading this, it may seem like hunting down memory issues is not too hard of a task, but organizing the right toolset and appropriate procedures to follow is very challenging. Unfortunately, there is also no such thing as the one and only way to troubleshoot JVM memory leaks, because every problem is unique and as we’ve learned, it may not even be the application itself that is leaking memory. Every single memory issue requires dedication and a custom-tailored strategy to be identified and remedied. 
At the time of the release of this article, solutions for the problems identified by this research were either already in place and being closely monitored, or were being prepared for implementation. Although troubleshooting JVM memory leaks is a lengthy procedure, there is no proper way around it. And besides being a necessary evil, learning more about an application’s internals is very beneficial from both a developer’s and an infrastructure point of view.

The post Bitmovin’s Intern Series: Finding Memory Leaks in Java Microservices – Part 2 appeared first on Bitmovin.

]]>
Bitmovin’s Intern Series: Finding Memory Leaks in Java Microservices – Part 1 https://bitmovin.com/blog/finding-memory-leaks-java-p1/ Thu, 19 Sep 2019 11:57:34 +0000 https://bitmovin.com/?p=61823 Where Has My Memory Gone? Finding Memory Leaks in Java Microservices Welcome to this two-part series about troubleshooting memory-related issues in a containerized cloud environment. These posts will discuss the basics of how memory is managed for Java applications and how to develop a procedure which can be used to identify memory problems. So let’s...

The post Bitmovin’s Intern Series: Finding Memory Leaks in Java Microservices – Part 1 appeared first on Bitmovin.

]]>
- Bitmovin

Where Has My Memory Gone?

Finding Memory Leaks in Java Microservices

Welcome to this two-part series about troubleshooting memory-related issues in a containerized cloud environment. These posts will discuss the basics of how memory is managed for Java applications and how to develop a procedure which can be used to identify memory problems. So let’s dive right in!

Introduction

Bitmovin uses a microservices architecture based on the Google Kubernetes Engine (GKE). This highly containerized environment helps manage the complexity of continuously evolving applications and supports decoupling & separation of concerns amongst the individual services. However, a handful of Kubernetes Pods tend to have significantly higher memory usage than others. Metrics are collected and aggregated by Prometheus and visualized using a Grafana dashboard. Certain services started at around 600 MB and vaulted up to 6 GB of memory per instance after one or two days of usage. Further testing determined that these Pods only use a few hundred MB when restarted, but their memory usage immediately starts growing linearly.
Part one covers a brief overview of the concepts and technologies that we work with and part two will cover the findings that have emerged from analyzing our cloud services.

A Primer on Java’s Internals

We’ll kick off this post by revisiting how Java works, since many of our microservices are running on Java Virtual Machine (JVM) and a good understanding of the subject matter is necessary to follow along with the rest of the article. If you are already familiar with the concepts of compilation, memory management and garbage collection, feel free to skip those sections.
One of the advantages of Java, in comparison to other languages, is its cross-platform portability. This means that the same bytecode can be executed anywhere the Java Runtime Environment (JRE) is installed. However, it requires quite a bit of behind-the-scenes work to implement. An important component that allows Java to function the way it does is the architecture that defines the compilation and running processes. Compiling a Java application does not result in a binary file but in bytecode, a portable yet compact representation of the program, which is converted to platform-specific machine code at the time of execution. Due to the high cost of translating bytecode to machine code, the JVM’s just-in-time (JIT) compiler only compiles frequently executed code paths. Blocks executed only once (e.g. during application startup) are interpreted instead, which is less efficient, as compiling them would not provide a relevant performance improvement.
As a result, Java source code is effectively “compiled” twice before it can be executed:

  1. Source code is compiled to bytecode (Main.java to Main.class).
  2. The JIT compiler converts the Main.class (bytecode) file to native machine code.

- Bitmovin
Image source: CS @ SIT –  Just-in-time (JIT) compiler

Memory Regions in Java

Now that we’ve clarified the specifics of how Java programs are executed, we will examine memory management within JVM. When a JVM is started, the Operating System (OS) allocates memory for the process, which is split into heap and non-heap memory. Non-heap memory is further broken down into metaspace, code cache, thread stacks and shared libraries. The process of debugging the aforementioned memory types will be discussed in part 2.
Heap Space – This memory region is used for dynamic allocation of objects at runtime. Objects are always created in the heap space, while the references to them live on the stack. Heap size can be configured using the JVM parameters -Xms and -Xmx, which define the initial and maximum size respectively. Once the application exceeds the upper limit, a java.lang.OutOfMemoryError exception is thrown.
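The configured bounds can be inspected from within the application via the Runtime API, which is handy for confirming which -Xms/-Xmx values a container is actually running with; a minimal sketch:

```java
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects -Xmx; totalMemory() is what is currently
        // committed for the heap and grows from -Xms towards the maximum.
        System.out.println("max heap:       " + rt.maxMemory() / (1024 * 1024) + " MB");
        System.out.println("committed heap: " + rt.totalMemory() / (1024 * 1024) + " MB");
        System.out.println("free heap:      " + rt.freeMemory() / (1024 * 1024) + " MB");
    }
}
```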
Depending on the JVM implementation, the heap can be broken down into so-called generations:

  1. Young generation: New objects are allocated here. Most objects have a short lifetime and can be removed soon after they are created. When an object survives a certain number of garbage-collection cycles, it is promoted to the next generation.
  2. Old generation: Objects that have survived multiple garbage-collection cycles reside here. A continuously growing old generation is also a good indicator for memory leaks.

Metaspace – The metaspace lives in native memory, a location within the process address space that is outside the heap, and stores class definitions. By default, this region has no limit and can even exceed the physical memory capacity; in that scenario the OS allocates virtual memory backed by swap space, which significantly impacts the application’s performance. We therefore advise defining an upper limit for the metaspace (via -XX:MaxMetaspaceSize). However, the value must be chosen carefully, as exceeding the limit causes a java.lang.OutOfMemoryError exception.
Code Cache – As previously mentioned, the JVM uses a JIT compiler to convert bytecode to platform-specific machine code. To avoid repeating this expensive translation, compiled machine-code blocks are cached so that they can be executed again quickly.
Thread Stacks – Static allocations such as primitive values and references pointing to objects on the heap are located on the stack. Variables only exist on the stack during the execution of the method in which they are defined. Unlike the heap, the stack does not need to be garbage collected, as it automatically shrinks when a method returns. The default stack size can be adjusted using the -Xss JVM parameter, but we don’t recommend changing it: too small a limit will cause a java.lang.StackOverflowError.
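The stack limit is easy to observe: each nested call consumes one stack frame until the limit is hit. A minimal sketch (the exact depth reached depends on the -Xss setting and the platform):

```java
public class StackDepth {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse(); // no base case: recurses until the thread stack is exhausted
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // StackOverflowError is the stack-space analogue of OutOfMemoryError.
            System.out.println("stack overflowed after " + depth + " frames");
        }
    }
}
```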

Garbage Collection

Memory utilization has been a crucial part of software engineering since the adoption of high-level languages. Before modern methods of memory management became available, developers had to manually determine how much memory should be allocated and when it could be returned to the system. While this manual process allows complete control over memory usage at any given point in time, it is unfortunately prone to human error. Programming languages without garbage collection are nevertheless still widely used today, especially in performance-oriented or hardware-near applications.
One core benefit of Java is the presence of a Garbage Collector (GC). The GC periodically checks which objects are still in use and which are not; unused objects are deleted to make space for new allocations. This streamlines software development, as memory consumption is no longer a constant source of concern for the everyday programmer. When troubleshooting memory issues in a higher-level language such as Java, we recommend familiarising yourself with the concept of garbage collection and how to configure it optimally.
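The reachability rule can be demonstrated with a WeakReference, which lets us observe collection without itself keeping the object alive. A minimal sketch (note that System.gc() is only a hint, hence the retry loop):

```java
import java.lang.ref.WeakReference;

public class GcDemo {
    // Returns true once the referent has been collected after the last
    // strong reference to it was dropped.
    static boolean collectedAfterClear() throws InterruptedException {
        Object strong = new Object();
        WeakReference<Object> ref = new WeakReference<>(strong);

        strong = null; // the object is now unreachable, i.e. garbage
        for (int i = 0; i < 50 && ref.get() != null; i++) {
            System.gc();     // request (not force) a collection
            Thread.sleep(10);
        }
        return ref.get() == null;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("collected: " + collectedAfterClear());
    }
}
```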
Garbage-collection algorithms have evolved over the years and, as a result, Java provides the option to choose from a set of different GCs. This matters because every GC has its advantages and disadvantages, so the GC should be selected based on the specific workload. Every Java version has a default GC, which can be found in Oracle’s official documentation. Since the GC is chosen on a per-application basis, it can be changed by passing a JVM argument to the java command, as indicated below.

java -XX:+Use<GCName> -jar Application.jar

In the command above, <GCName> must be replaced with the respective GC. Some of the GCs available in Java 8 are:

  • -XX:+UseSerialGC – the single-threaded Serial collector
  • -XX:+UseParallelGC – the throughput-oriented Parallel collector (the Java 8 default)
  • -XX:+UseConcMarkSweepGC – the low-pause Concurrent Mark Sweep (CMS) collector
  • -XX:+UseG1GC – the region-based Garbage-First (G1) collector

Aside from choosing the right GC, further customization is possible through a variety of tuning parameters. It is important to understand that although GCs are well designed, they do not guarantee full protection from memory leaks. Even today, with modern protection layers, it is still possible to run into severe performance issues.

Memory Leaks

Let us finally talk about leaks, shall we? First, it is important to clarify that although Java is a garbage-collecting language, memory leaks can still occur. The GC only ensures that unreferenced (unreachable) objects are cleaned up. Much like rectangles and squares, all unreferenced objects are unused (and thus safe to collect), but not all unused objects are unreferenced. A simple example is a static list within a class. Once the list is populated, its entries will never be garbage collected because the JVM’s Garbage Collector does not know whether the list will ever be accessed again or not. This concept is illustrated by the figure below.
- Bitmovin
Image source: Stackify – How Memory Leaks Happen in a Java Application
 
Generally speaking, a memory leak can arise in any of the various memory regions described earlier. However, the most common source for leaks in Java applications is the heap space and can often be traced back to simple programming errors such as:

  • Static fields or member fields of singleton objects harvesting object references
  • Unclosed streams or connections
  • Adding objects without proper hashCode() and equals() implementations to a HashSet, where logically identical entries are then inserted over and over again
  • Inefficient SQL queries which are frequently executed and where a large data set is read into memory
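The HashSet pitfall from the list above is worth a concrete illustration. Without equals()/hashCode(), two logically identical objects are distinct to the set, so entries that were meant to deduplicate accumulate instead (the CacheKey class is a hypothetical example):

```java
import java.util.HashSet;
import java.util.Set;

// Deliberately missing equals() and hashCode(): identity semantics apply.
class CacheKey {
    final String id;
    CacheKey(String id) { this.id = id; }
}

public class HashSetLeak {
    public static void main(String[] args) {
        Set<CacheKey> seen = new HashSet<>();
        for (int i = 0; i < 1000; i++) {
            // Intended to keep a single entry per logical key...
            seen.add(new CacheKey("same-key"));
        }
        // ...but every instance hashes differently, so all 1000 are retained.
        System.out.println(seen.size()); // prints 1000, not 1
    }
}
```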

The Notorious OutOfMemoryError Exception

A memory leak in Java does not necessarily manifest itself as a java.lang.OutOfMemoryError exception. This exception is more of a symptom, and a good indicator that there might be a leak somewhere. It usually occurs when there is insufficient space to allocate an object in the Java heap, but there are also other reasons (overly large thread stacks, too much GC overhead, etc.) why you might see this exception in the logs. The following enumeration lists all types of OOM exceptions; more information on what the individual exceptions mean can be found here.

  • java.lang.OutOfMemoryError: Java heap space
  • java.lang.OutOfMemoryError: GC Overhead limit exceeded
  • java.lang.OutOfMemoryError: Requested array size exceeds VM limit
  • java.lang.OutOfMemoryError: Metaspace
  • java.lang.OutOfMemoryError: request size bytes for reason. Out of swap space?
  • java.lang.OutOfMemoryError: Compressed class space
  • java.lang.OutOfMemoryError: reason stack_trace_with_native_method

The second part of our memory leak series covers this analysis as it applies to Bitmovin’s cloud encoding services.
Find the second instalment here: Finding Memory Leaks in Bitmovin Cloud Encoding Services – Part 2

The post Bitmovin’s Intern Series: Finding Memory Leaks in Java Microservices – Part 1 appeared first on Bitmovin.

]]>
Partner Highlight: Automated and Customized Transcoding using Bitmovin API and Built.io https://bitmovin.com/blog/automated-transcoding-bitmovin-builtio/ Mon, 03 Jun 2019 23:05:24 +0000 https://bitmovin.com/?p=43263 Guest Post provided by Bitmovin Partner: G&L uses Bitmovin’s API based encoding and customized transcoding products to run live and on-demand streaming operations for Bayer04 Leverkusen Fußball GmbH, providing a complete chain of services including all required components and processes. Modernizing the workflow consisting of media delivery, customized transcoding and playout became a necessity in...

The post Partner Highlight: Automated and Customized Transcoding using Bitmovin API and Built.io appeared first on Bitmovin.

]]>
- Bitmovin

Guest Post provided by Bitmovin Partner:

- Bitmovin
G&L uses Bitmovin’s API based encoding and customized transcoding products to run live and on-demand streaming operations for Bayer04 Leverkusen Fußball GmbH, providing a complete chain of services including all required components and processes.
Modernizing the workflow consisting of media delivery, customized transcoding and playout became a necessity in order to keep up with the demands of the club competing in the German Bundesliga. The solution required state-of-the-art performance and a great degree of automation capabilities for delivering on-demand sports broadcasts, seamlessly upon ending the live stream. Additionally, the solution needed to support post-game content like press conferences, reports and documentaries that were made available for streaming on the club website, immediately after the live broadcast – viewable as “BAYER 04-TV” at https://www.bayer04.de/de-de/page/videopage.
Early on, the most significant potential for optimization was identified within the transcoding and media processing parts of the delivery chain. The following challenges needed to be addressed through the new video workflow solution:

  • Independence from media asset management systems
  • Video production on the customer’s end to be integrated without limitations – ex. on-site editing for press conferences
  • Enabling transcode and metadata transfers (title, pre-content, poster image…) as quickly as possible
  • Support for fully automated publishing on availability (upload = publish)
  • Provision as a reusable and expandable solution
  • Server-less implementation on public cloud providers
  • Future integrations to be realized without requiring development resources
  • Integrations and adaptations to be implemented by an operations manager or the system architect

All things considered, it became clear that a quickly implemented standard solution would not be able to meet the requirements. The demand for a high-performance solution with outstanding availability and relatively simple, modular configuration options created the need for a customized approach.
“One candidate came to mind immediately when it comes to per-project transcoding at the fastest possible rate: the services provided by our partner Bitmovin had proved to be both fast and equipped with extensive configuration options. We were familiar with their encoding API and had carried out numerous projects successfully in the past.” – G&L
After intense evaluation and testing, G&L chose Bitmovin, along with the components mentioned below, for high speed and high-quality transcoding for setting up a custom end-to-end transcoding solution for Bayer04 Leverkusen Fußball GmbH:

  • Geißendörfer & Leschinsky GmbH: Top performance for your content: G&L Geißendörfer & Leschinsky GmbH is a leading systems integrator, managed service provider, and software developer for digital media preparation and delivery.
  • Bitmovin: The highly flexible REST API encodes using AVC, HEVC, VP9 and AV1 as well as in the container streaming formats MPEG-DASH, Apple HLS, Smooth Streaming, progressive TS, progressive MP4, fMP4, CMAF, and WebM – enough to serve all relevant devices and systems. The notifications needed for the customer use case were provided through a Webhooks API. Apart from the market-leading processing speed and the multi-award-winning encoding quality, Bitmovin excels with a highly flexible and detailed API, which allowed G&L to adjust every single detail during the process in order to exactly meet the customers’ requirements – an aspect which really sets Bitmovin apart from other encoding service providers.
  • built.io: The iPaaS provider manages the cloud-based solutions, handling the communication for applications and data between cloud and on-premise environments within the setup. In this case, the platform runs the communication with all other components: Webhooks, JSON Payload, signals to the transcoding API and, on top, notifications via email and Slack sent to operators and DevOps. G&L made this workflow automation “clickable” as a visual workflow diagram. Additional adjustments like the integration of other services could then be realized easily using drag & drop options.
  • BrickFTP: It is the most robust enterprise file server solution that offers support for all standard protocols (e.g. FTP, FTPS, FTPES, SFTP or WebDAV) and flexibly usable REST & Webhook APIs, which were used for this project. At the core sits a watchfolder solution, which runs typical automated batch tasks. BrickFTP is a particularly good fit with its on-upload notification, which triggers a transcoding job following a successful upload of the source material.
  • Akamai NetStorage: It is a frequently used cloud storage solution by G&L – as it serves as an essential element while scaling high-quality media delivery and media workflow implementations. On-demand scaling through mirroring content at multiple locations across the internet guarantees consistent availability, even during regional outages. In this use case, the transcoded HLS output was transferred via HTTP API onto NetStorage and consequently delivered from there.

By using the Bitmovin Encoder product, G&L was able to put together an extremely customizable and modernized transcoding workflow for a major Football Club in Germany that not only met their current Live and On-Demand needs but also enabled future expansion to support growing use cases in the online Sports sector.

Detailed workflow

- Bitmovin

Workflow in built.io

- Bitmovin

Notification in Slack

- Bitmovin
Please find the original article written by Jochen Herkenrath (G&L) here (German Language)
More Resources:

The post Partner Highlight: Automated and Customized Transcoding using Bitmovin API and Built.io appeared first on Bitmovin.

]]>
There’s an App for That? HTML5 vs Native for In-App Video Playback https://bitmovin.com/blog/theres-app-html5-vs-native-app-video-playback/ Wed, 21 Feb 2018 02:01:54 +0000 http://bitmovin.com/?p=22527 Online video publishers that decide to launch a mobile app are faced with a decision between reusing their HTML5 player, building a completely native playback experience, or a combination of both approaches. Initially, it may seem easiest to re-use your HTML5 player, but that is not always the best option from evidence we’ve seen in...

The post There’s an App for That? HTML5 vs Native for In-App Video Playback appeared first on Bitmovin.

]]>
wrestling style versus graphic comparing html5 vs native ios and android sdk for video playback
Online video publishers that decide to launch a mobile app face a decision between reusing their HTML5 player, building a completely native playback experience, or a combination of both approaches. Initially, it may seem easiest to re-use your HTML5 player, but evidence from our comprehensive testing shows that is not always the best option. This post will evaluate the pros and cons of each approach, give some guidance for this important decision, and highlight why investing in a native video player gives your customers the best performance, feature set, and user experience. [Free Download: Video Developer Report 2019 – Key insights into the evolving technology trends of the digital video industry]

Pure HTML5 Approach

HTML5 video player example with descriptive text
On iOS and Android devices you can use a web-view component to embed web content into a native application. The web-view enables the multimedia playback capabilities of the underlying browser engine in your native app.
A simple implementation loads a website containing an HTML5 video player (usually reused from your web implementation, as shown below) that references the video content you wish to play back. The web-view presents a standard, non-customizable video player user interface that depends on the operating system. This approach limits you to the media formats and codecs that the underlying web-view supports. You are required to write a lot of custom code, differing from your web implementation, to support advertisements, DRM, casting, AirPlay, and the rich user experience that your customers expect. Running all of this code inside a mobile device’s web-view ends up consuming far more resources than necessary.

Pros

  • Time to Market: Reusing your existing HTML5 video player inside a mobile app requires much less initial development. An organization does not need to invest in engineers with the specific mobile development skill sets required to build a native video player.
  • Single code base: Changes to your HTML5 video player will automatically update your mobile app as well. It is even possible to update the video player over-the-air without updating the whole native app using the app store.

Cons

  • Low performance: Running the whole HTML5 player inside a web-view component needs more processing power than using a fully native player, which also negatively impacts the battery life of the device. Video startup time suffers as well, as we will show later.
  • Reduced Interaction with native app:  If you want to show supplementary content based on the HTML5 player, you will need to update the app’s UI with the current video player state. Implementing a communication channel between the web player and native application is tedious and difficult to maintain.
  • Reduced feature set: The HTML5 player is unable to support all the features your customers expect (and have with other native video players). Examples of those features are:
    • Offline playback and offline DRM
    • Picture in picture
    • Reduced access to device sensors

Pure Native Approach

With this approach, you build a native video player and a native user interface. All of the components in this solution are built on top of core operating system APIs and functionality. The most common native video players today are AVPlayer (iOS) and ExoPlayer (Android). Both video players have APIs that allow you to add functionality around playback, DRM, advertisements, casting/AirPlay, and analytics. These native video players also have access to all other system and device APIs.
native android exoplayer SDK video player example

Pros

  • Offline playback and DRM: With a native video player, you are able to support download and offline playback of both protected and unprotected content. We find this use case very important for users who are traveling or in locations with limited network bandwidth: they can download content directly to their device and then play it back anywhere.
  • Picture in Picture: Picture in Picture is a feature that allows the user to browse other apps, answer emails, and respond to texts all while the video continues to play. The video is minimized into a smaller window that can be resized and moved around the screen.
  • Performance: Native video players will have a much more responsive user interface. They will also have better startup-time and lower resource consumption as seen in our detailed performance analysis below.

Cons

  • Updating: No over-the-air updating of the player without updating the application itself

Hybrid Approach

A hybrid approach is typically composed of a native video player with an HTML5 user interface overlaid on top of it. All user interactions are captured by the HTML5 user interface and forwarded to the underlying native video player.
hybrid native and web video player screenshot with skateboarder poster frame

Pros

  • Reuse of User Interface: Enables easy JS/CSS-based skinning of the player. Changing the color, background, and size of UI elements is a matter of minutes and applies to all supported platforms at once. It guarantees a unified user experience across all platforms and devices, and there is no need to re-implement the UI already used for the web-based player just to support additional platforms like iOS and Android.
  • Performance & Functionality: With the hybrid approach, you are still able to get benefits of Native Video Player. You can support features like offline playback, DRM playback, and picture in picture.

Cons

  • Performance: Although performance is much better with the hybrid approach than with the pure HTML5 approach, it lags slightly behind the fully native strategy. Nevertheless, for many use cases this performance overhead is more than compensated for by the benefits explained above.

Performance Evaluation

CPU Usage and Battery Life

As our evaluation below shows, a pure native approach results in much lower CPU utilization than the HTML5-only solution. Less CPU usage allows for a smoother, more fluid user experience and also has a positive impact on battery life. The data shows that an HTML5-only approach almost doubles the CPU utilization during playback compared to the fully native or hybrid approach. The test measures CPU usage during the first two minutes of a 4.8 Mbps, 1080p stream on an iPad Mini 4, averaged over multiple test runs; we see similar results on Android.
histogram of cpu usage during video playback comparing native, hybrid and html5 players

Startup Time

We also evaluated the start-up time of the different approaches. We measured the time from creating the player object until the first frame is rendered and playback starts. This involves setting up the player itself, initializing the UI, downloading and parsing the manifest, and retrieving the first video segment. The values were obtained as mean values of several experiments executed on different mobile devices. As the chart below shows, the pure HTML5 approach performs worst, with an increase in start-up time of 50% to 100% over the hybrid approach, depending on the network conditions. The pure native approach performs best here and should be chosen for optimal start-up performance in your video-enabled application.
- Bitmovin

Bitmovin’s Solution

Amongst the different approaches explained above, the pure native and the hybrid approach are the best options when you want to deliver the best user experience. The pure HTML5 approach may offer a quick start for developing a video-enabled application, but its performance and functionality lag behind the other two approaches. The CPU utilization can double, which impacts perceived system performance and battery life, and the start-up time until video playback begins is significantly higher, as shown above. The pure HTML5 approach also reduces the feature set and the possibilities you could offer your users with a native or hybrid approach.
With Bitmovin’s native SDKs for iOS, Android, tvOS and Fire OS, our customers can choose between the pure native and the hybrid approach. As discussed, the pure native approach delivers the best performance and lets you use all the features offered and supported by the underlying operating system. The hybrid approach still delivers very good performance and offers an easy, flexible way to adapt the UI and deliver a unified user experience across multiple platforms and devices.

Resources and Next Steps

Below we’ve compiled some of our best resources and suggestions for follow-up reading to learn more about your options for implementing a native, hybrid or HTML5 player.
Read and Learn:

Start Building Now:

The post There’s an App for That? HTML5 vs Native for In-App Video Playback appeared first on Bitmovin.

]]>
Efficient Multi-Codec Support for OTT Services: H.264/HEVC/VP9 and/or AV1? https://bitmovin.com/blog/higher-quality-lower-bandwidth-multi-codec-streaming/ Thu, 21 Dec 2017 15:47:01 +0000 http://bitmovin.com/?p=22041 By encoding your videos using a multi-codec approach you can double the quality while still reducing your bandwidth consumption and maintaining maximum device reach. In the spectrum of online video, you probably already know that H.264 (also known as AVC) is ubiquitous. Nearly every device and operating system supports decoding either on hardware or software....

The post Efficient Multi-Codec Support for OTT Services: H.264/HEVC/VP9 and/or AV1? appeared first on Bitmovin.

]]>
Multi-codec streaming is an effective way to reduce bandwidth and CDN costs

By encoding your videos using a multi-codec approach you can double the quality while still reducing your bandwidth consumption and maintaining maximum device reach.

In the spectrum of online video, you probably already know that H.264 (also known as AVC) is ubiquitous. Nearly every device and operating system supports decoding it in hardware or software. Although this compression technology is widely supported, which is a significant advantage, it is nowhere near as efficient as next-generation codecs in terms of compression rate.
According to Netflix’s experiments, H.265 (also known as HEVC) can deliver up to 50% bitrate savings compared to previous-generation codecs like H.264/AVC. Besides Apple devices running iOS 11 and macOS High Sierra, H.265 is also supported by most 4K smart TVs and by Microsoft Edge on Windows 10 (when a hardware decoder is present in the device).
Similar to H.265, VP9 is another great option when it comes to reducing bandwidth consumption or delivering higher quality with the same bitrate. Bitrate savings can reach up to 50% compared to H.264, dramatically lowering your CDN costs. VP9 is supported on multiple platforms including Google Chrome, Firefox, Microsoft Edge and Android devices.
Roughly 83% of internet users in the US could be reached with VP9 and HEVC. The remaining 17% would fall back to H.264, so you would still have complete coverage of every browser. The table below shows the browser market share for desktop and mobile.

Browser             Market share in US (%)   Codecs supported
Google Chrome       57.27%                   H.264, VP9
Mozilla Firefox     7.70%                    H.264, VP9
Safari              15.88%                   H.264, H.265*
Microsoft Edge      2.13%                    H.264, H.265, VP9**
Internet Explorer   7.28%                    H.264

Source: NetMarketShare.
* Only available in Safari for iOS 11 and macOS High Sierra.
** Only available in Edge 14.14291.
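As a quick sanity check, the 83% figure follows directly from summing the shares of the VP9/HEVC-capable browsers in the table above (a sketch; the percentages are the table's, not live data):

```javascript
// Sum the shares of browsers that can decode VP9 and/or HEVC (Chrome, Firefox, Safari, Edge);
// Internet Explorer is H.264-only and falls back.
const share = { chrome: 57.27, firefox: 7.70, safari: 15.88, edge: 2.13, ie: 7.28 };
const nextGenReach = share.chrome + share.firefox + share.safari + share.edge;
console.log(Math.round(nextGenReach));       // 83 (reachable with VP9/HEVC)
console.log(Math.round(100 - nextGenReach)); // 17 (H.264 fallback)
```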

Codec comparison

The figure below shows a side-by-side codec comparison. While maintaining the same visual quality, Full HD content can be encoded at 4 Mbit/s (50% less than H.264) with H.265/HEVC and VP9.
Comparing quality between VP9, HEVC and H.264

Saving potential exemplified

As previously stated, roughly 83% of internet users in the US could be reached with H.265 or VP9, benefiting end users, who consume less bandwidth, and streaming companies, who reduce their CDN costs. Assuming a 50% bandwidth reduction from these codecs, your total saving potential would be 42%. With that in mind, let’s picture the scenario described in the table below:

CDN distribution cost per GB   0.025 USD
Video watched time (average)   10 minutes
Views                          1,000,000
H.265 consumption – 1080p @ 4 Mbit/s
  1. 10 minutes = 600 seconds
  2. 600 seconds * 4 Mbit/s = 2,400 Mbit
  3. 2,400 Mbit = 300 MB
  4. 300 MB = 0.3 GB
  5. 0.3 GB * 0.025 USD = 0.0075 USD per view
  6. Assuming 18% of 1,000,000 views consume H.265 = 180,000 views
0.0075 USD per view * 180,000 views = 1,350.00 USD
VP9 – 1080p @ 4 Mbit/s
  1. 10 minutes = 600 seconds
  2. 600 seconds * 4 Mbit/s = 2,400 Mbit
  3. 2,400 Mbit = 300 MB
  4. 300 MB = 0.3 GB
  5. 0.3 GB * 0.025 USD = 0.0075 USD per view
  6. Assuming 65% of 1,000,000 views consume VP9 = 650,000 views
0.0075 USD per view * 650,000 views = 4,875.00 USD
H.264 consumption – 1080p @ 8 Mbit/s
  1. 10 minutes = 600 seconds
  2. 600 seconds * 8 Mbit/s = 4,800 Mbit
  3. 4,800 Mbit = 600 MB
  4. 600 MB = 0.6 GB
  5. 0.6 GB * 0.025 USD = 0.015 USD per view
  6. Assuming 17% of 1,000,000 views fall back to H.264 = 170,000 views
0.015 USD per view * 170,000 views = 2,550.00 USD
Total CDN cost with H.264-only streaming: 15,000.00 USD
Total CDN cost with multi-codec streaming: 8,775.00 USD
Total savings: 6,225.00 USD
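The arithmetic above can be condensed into a small helper to verify the totals (a sketch using the example figures from the tables; the codec shares and view counts are illustrative, not real traffic data):

```javascript
// Cost per view: minutes -> seconds -> Mbit -> GB (decimal units) -> USD.
function costPerView(minutes, mbitPerSec, usdPerGB) {
  const gigabytes = (minutes * 60 * mbitPerSec) / 8 / 1000;
  return gigabytes * usdPerGB;
}

const USD_PER_GB = 0.025;
const VIEWS = 1000000;
const nextGen = costPerView(10, 4, USD_PER_GB); // H.265 and VP9 at 4 Mbit/s -> 0.0075 USD
const h264    = costPerView(10, 8, USD_PER_GB); // H.264 at 8 Mbit/s         -> 0.015 USD

const multiCodecCost = nextGen * 0.18 * VIEWS   // 18% of views on H.265
                     + nextGen * 0.65 * VIEWS   // 65% of views on VP9
                     + h264    * 0.17 * VIEWS;  // 17% falling back to H.264
const h264OnlyCost = h264 * VIEWS;

console.log(Math.round(h264OnlyCost));                  // 15000
console.log(Math.round(multiCodecCost));                // 8775
console.log(Math.round(h264OnlyCost - multiCodecCost)); // 6225
```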

How to implement multi-codec streaming with Bitmovin

Now that we have established the effectiveness of a multi-codec approach for your online video strategy, we can jump right into the “how to” section of the article. First of all, let’s evaluate what we can do on the encoding side.
With the Bitmovin API you can encode your content with different codecs like H.264/AVC, H.265/HEVC, VP8, VP9, and recently also AV1. Ultimately the output can be MPEG-DASH, HLS, Microsoft Smooth and/or progressive MP4/WebM/TS.

To work with the Bitmovin API we have API clients for all the major programming languages. On our GitHub page you will find all of them as well as code examples. For this particular topic of multi-codec streaming we have this Java API client example, which covers everything presented in this article.

Each video and audio stream encoded with a given codec needs to be wrapped in a container. A container/muxing can hold multiple streams, such as audio and video tracks. For adaptive streaming formats such as MPEG-DASH and HLS it is common to have separate muxings for audio and video; for progressive formats, however, the audio and video tracks need to be muxed together. The table below shows the containers that can be used for each of the codecs.

Output                       Codec   Container (muxing)
HLS                          H.264   fMP4, TS
                             H.265   fMP4
MPEG-DASH                    H.264   fMP4
                             H.265   fMP4
                             VP9     WebM
Microsoft Smooth Streaming   H.264   MP4
Progressive                  H.264   MP4, TS
                             H.265   MP4, TS
                             VP9     WebM

As shown above, fMP4 muxings can hold H.264 and H.265 segments for both HLS and MPEG-DASH. These segments therefore only need to be encoded once and can be referenced by both an MPEG-DASH and an HLS manifest, reducing your storage costs by 50%.
Adaptive streaming formats also allow us to include multiple codecs in the same manifest/playlist; that is the beauty of the solution. By encoding your content like this, you hand the logic of choosing the most appropriate codec over to the player (you will find more details on that further down).
The examples below show what a multi-codec MPEG-DASH manifest and HLS playlist look like. For MPEG-DASH we use one AdaptationSet per codec. For HLS, we simply list all the variant streams with their codecs one below the other.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<MPD id="fc7573ef-1945-4eea-91b0-fe6e20e870ca" profiles="urn:mpeg:dash:profile:full:2011" type="static" mediaPresentationDuration="P0Y0M0DT0H0M46.067S" minBufferTime="P0Y0M0DT0H0M2.000S" bitmovin:version="1.19.0" xmlns="urn:mpeg:dash:schema:mpd:2011" xmlns:bitmovin="http://www.bitmovin.net/mpd/2015" xmlns:ns2="http://www.w3.org/1999/xlink">
    <Period id="437c9a5c-a403-499c-92ca-b24944a70b77" start="P0Y0M0DT0H0M0.000S">
        <AdaptationSet segmentAlignment="true" mimeType="video/mp4">
            <Representation id="990f791b-984f-4566-a3b1-77a0ffbe2e60" bandwidth="875000" width="854" height="480" frameRate="30" codecs="hvc1.1.c.L90.90">
                <SegmentTemplate media="video/875_h265_fmp4/segment_$Number$.m4s" initialization="video/875_h265_fmp4/init.mp4" duration="120000" startNumber="0" timescale="30000"/>
            </Representation>
            <Representation id="6c1b1d6f-8a59-4a97-9424-ee99bb17819b" bandwidth="1175000" width="1280" height="720" frameRate="30" codecs="hvc1.1.c.L93.90">
                <SegmentTemplate media="video/1175_h265_fmp4/segment_$Number$.m4s" initialization="video/1175_h265_fmp4/init.mp4" duration="120000" startNumber="0" timescale="30000"/>
            </Representation>
        </AdaptationSet>
        <AdaptationSet segmentAlignment="true" mimeType="video/webm">
            <Representation id="00f77da9-8658-4afb-9710-0dfb08e7d346" bandwidth="875000" width="854" height="480" frameRate="30" codecs="vp9">
                <SegmentTemplate media="video/875_vp9_webm/segment_$Number$.chk" initialization="video/875_vp9_webm/init.hdr" duration="120000" startNumber="0" timescale="30000"/>
            </Representation>
            <Representation id="60ad47c8-9b48-41a9-8c3d-16fe4c9e56b2" bandwidth="1175000" width="1280" height="720" frameRate="30" codecs="vp9">
                <SegmentTemplate media="video/1175_vp9_webm/segment_$Number$.chk" initialization="video/1175_vp9_webm/init.hdr" duration="120000" startNumber="0" timescale="30000"/>
            </Representation>
        </AdaptationSet>
        <AdaptationSet segmentAlignment="true" mimeType="video/mp4">
            <Representation id="e53c20bd-519d-4881-9e35-6dd1a3817eaf" bandwidth="1750000" width="854" height="480" frameRate="30" codecs="avc1.4D401F">
                <SegmentTemplate media="video/1750_h264_fmp4/segment_$Number$.m4s" initialization="video/1750_h264_fmp4/init.mp4" duration="120000" startNumber="0" timescale="30000"/>
            </Representation>
            <Representation id="8138b60d-2bc8-4eef-91b9-3ef9e27b6cbb" bandwidth="2350000" width="1280" height="720" frameRate="30" codecs="avc1.4D401F">
                <SegmentTemplate media="video/2350_h264_fmp4/segment_$Number$.m4s" initialization="video/2350_h264_fmp4/init.mp4" duration="120000" startNumber="0" timescale="30000"/>
            </Representation>
        </AdaptationSet>
        <AdaptationSet lang="en" segmentAlignment="true" mimeType="audio/mp4">
            <AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
            <Representation id="3565543a-8524-41ac-98b4-006fcd21eaea" bandwidth="128000" audioSamplingRate="48000" codecs="mp4a.40.2">
                <SegmentTemplate media="audio/128_aac_fmp4/segment_$Number$.m4s" initialization="audio/128_aac_fmp4/init.mp4" duration="192000" startNumber="0" timescale="48000"/>
            </Representation>
        </AdaptationSet>
    </Period>
</MPD>

Example of a multi-codec MPEG-DASH manifest.

#EXTM3U
#EXT-X-INDEPENDENT-SEGMENTS
#EXT-X-VERSION:6
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio_128",NAME="audio_128.m3u8",LANGUAGE="en",URI="audio_128.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2101985,AVERAGE-BANDWIDTH=1796254,CODECS="avc1.4D401F,mp4a.40.2",RESOLUTION=854x480,AUDIO="audio_128"
video_h264_480p_1750.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2739681,AVERAGE-BANDWIDTH=2372431,CODECS="avc1.4D401F,mp4a.40.2",RESOLUTION=1280x720,AUDIO="audio_128"
video_h264_720p_2350.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2049427,AVERAGE-BANDWIDTH=1824322,CODECS="hev1.1.6.L90.90,mp4a.40.2",RESOLUTION=854x480,AUDIO="audio_128"
video_h265_480p_1750.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2706785,AVERAGE-BANDWIDTH=2421640,CODECS="hev1.1.6.L93.90,mp4a.40.2",RESOLUTION=1280x720,AUDIO="audio_128"
video_h265_720p_2350.m3u8

Example of a multi-codec HLS playlist.
On the playout side, as the code example below shows, with the Bitmovin Adaptive Player you just provide the dash, hls and/or progressive URLs as you normally would. The player is responsible for identifying the most appropriate codec to deliver based on the browser/device capabilities.

var player = bitmovin.player('player'); // assumes a container element with id 'player'
var conf = {
  key: 'INSERTPROVIDEDKEYHERE',
  source: {
    dash       : 'http://path/to/mpd/file.mpd',
    hls        : 'http://path/to/hls/playlist/file.m3u8',
    progressive: [{
      url: 'http://path/to/mp4',
      type: 'video/mp4'
    }, {
      url: 'http://path/to/webm',
      type: 'video/webm'
    }]
  }
};
player.setup(conf).then(function(value) {
  // Success
}, function(reason) {
  // Error!
});

Example of a player configuration and setup.
There is also the possibility of having separate manifests/playlists for each codec you want to work with. In this case you would need to handle the logic of choosing the best source on your side, which would not be too complicated and would let you apply custom business rules as well.
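If you take the separate-manifests route, the selection logic is a short capability check. A minimal sketch, assuming hypothetical manifest URLs; the predicate is injected so the function can be exercised outside a browser, where you would pass `MediaSource.isTypeSupported.bind(MediaSource)`:

```javascript
// Pick the first manifest whose codec the platform can decode,
// falling back to H.264 as the universally supported option.
function pickSource(isTypeSupported) {
  const candidates = [
    { url: 'https://example.com/manifest_h265.mpd', mime: 'video/mp4; codecs="hvc1.1.6.L93.90"' },
    { url: 'https://example.com/manifest_vp9.mpd',  mime: 'video/webm; codecs="vp9"' },
    { url: 'https://example.com/manifest_h264.mpd', mime: 'video/mp4; codecs="avc1.4D401F"' }
  ];
  const match = candidates.find(c => isTypeSupported(c.mime));
  return (match || candidates[candidates.length - 1]).url;
}

// On a VP9-capable (but not HEVC-capable) browser:
console.log(pickSource(mime => mime.includes('vp9'))); // https://example.com/manifest_vp9.mpd
```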

Conclusion

As we have seen, multi-codec streaming can be a very effective measure for reducing bandwidth costs while delivering the same quality of experience to your viewers. Here at Bitmovin we take this subject seriously and are constantly improving and adding new features such as Per-Title Encoding, Per-Scene Adaptation, Stream Conditions and others.
It is also important to mention that one of the complexities of multi-codec streaming is the increase in computational resources needed to encode the same content, which usually also leads to higher turnaround times. However, with Bitmovin Containerized Video Encoding, where we split the input file into multiple small parts and encode them in parallel, this is simply a matter of adding more nodes to the cluster.
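To see why adding nodes addresses the turnaround problem, a toy model of chunk-parallel encoding helps (an illustration only, not how Bitmovin's scheduler actually behaves; the chunk counts and timings are made up):

```javascript
// With the input split into independent chunks, wall-clock time is governed by
// how many rounds of chunks each node must process.
function turnaroundSeconds(chunks, nodes, secondsPerChunk) {
  return Math.ceil(chunks / nodes) * secondsPerChunk;
}

console.log(turnaroundSeconds(120, 1, 30));  // 3600 -> one node encodes every chunk serially
console.log(turnaroundSeconds(120, 10, 30)); // 360  -> ten nodes cut the turnaround 10x
```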
We are looking forward to helping you reduce your CDN costs or deliver higher qualities to your viewers.

The post Efficient Multi-Codec Support for OTT Services: H.264/HEVC/VP9 and/or AV1? appeared first on Bitmovin.

]]>
Utilizing Bitmovin Encoding Platform with Multiple Players https://bitmovin.com/blog/utilizing-bitmovin-encoding-platform-multiple-players/ Wed, 08 Feb 2017 08:15:10 +0000 http://bitmovin.com/?p=17449 The following tutorial will show you how to perform an encoding with the Bitmovin API and setup multiple players to playback the encoded content. The Bitmovin encoding platform can be used with a multitude of video players, including, but not limited to the Bitmovin Adaptive Streaming Player. This blog post will walk you through the...

The post Utilizing Bitmovin Encoding Platform with Multiple Players appeared first on Bitmovin.

]]>
- Bitmovin

The following tutorial will show you how to perform an encoding with the Bitmovin API and setup multiple players to playback the encoded content.

The Bitmovin encoding platform can be used with a multitude of video players, including, but not limited to, the Bitmovin Adaptive Streaming Player. This blog post will walk you through the end-to-end process of setting up the Bitmovin encoding pipeline, and then describe how to configure multiple video players to ingest the encoded output.
First, we will configure an encoding job using the Bitmovin platform to produce an HLS & DASH ABR output stream set. The platform handles the entire workflow of ingesting the media source content, transcoding/packaging, and output transfer. Second, we will step through various video player configurations showing how to play back the encoded video.

Encoding Pipeline Setup

For the encoding, we will use the newest version of the Bitmovin API. We will use the PHP API client and its example to show you how to create MPEG-DASH/HLS content. First, you will need to install the PHP Composer dependency manager.

Insert your API key into the example

$client = new \Bitmovin\BitmovinClient('INSERT YOUR API KEY HERE');

The client is now ready to use. We can start preparing the configurations for your input source, your output destination, and the encoding itself, containing all the renditions (quality levels) you want to create.
For this example we are using an HTTP input path. The Bitmovin platform supports a multitude of storage providers and protocols (AWS S3, Google Cloud Storage, Microsoft Azure, Aspera, and (S)FTP).

$videoUrl = 'https://example.com/path/to/your/movie.mp4';
$input = new HttpInput($videoUrl);

Next we will configure the output destination. Outputs also support multiple storage providers and protocols. For this example we are going to use AWS S3.

$s3AccessKey = 'INSERT YOUR S3 ACCESS KEY HERE';
$s3SecretKey = 'INSERT YOUR S3 SECRET KEY HERE';
$s3BucketName = 'INSERT YOUR S3 BUCKET NAME HERE';
$s3Prefix = 'path/to/your/output/destination/';
$s3Output = new S3Output($s3AccessKey, $s3SecretKey, $s3BucketName, $s3Prefix);

Next we will configure the encoding profile. This profile is where you configure the output streams, bitrates and resolutions. You can add as many video stream configurations as required by your typical encoding profile.

$encodingProfileConfig = new EncodingProfileConfig();
$encodingProfileConfig->name = 'Test Encoding FMP4';
$encodingProfileConfig->cloudRegion = CloudRegion::AWS_EU_WEST_1;
// CREATE VIDEO STREAM CONFIG FOR 1080p
$videoStreamConfig_1080 = new H264VideoStreamConfig();
$videoStreamConfig_1080->input = $input;
$videoStreamConfig_1080->width = 1920;
$videoStreamConfig_1080->height = 1080;
$videoStreamConfig_1080->bitrate = 4800000;
$encodingProfileConfig->videoStreamConfigs[] = $videoStreamConfig_1080;
// CREATE VIDEO STREAM CONFIG FOR 720p
$videoStreamConfig_720 = new H264VideoStreamConfig();
$videoStreamConfig_720->input = $input;
$videoStreamConfig_720->width = 1280;
$videoStreamConfig_720->height = 720;
$videoStreamConfig_720->bitrate = 2400000;
$videoStreamConfig_720->rate = 25.0;
$encodingProfileConfig->videoStreamConfigs[] = $videoStreamConfig_720;
// MORE VIDEO STREAM CONFIGS ...
$audioConfig = new AudioStreamConfig();
$audioConfig->input = $input;
$audioConfig->position = 1;
$audioConfig->bitrate = 128000;
$audioConfig->name = 'English';
$audioConfig->lang = 'en';
$encodingProfileConfig->audioStreamConfigs[] = $audioConfig;

Now we will configure the output streaming format
For this example, we will configure HLS & DASH output. Bitmovin also supports other output formats such as Microsoft Smooth Streaming and progressive MP4/WebM/TS.

// ASSEMBLE THE JOB CONFIG (ties the input, encoding profile and output together;
// property names follow the PHP API client example)
$jobConfig = new JobConfig();
$jobConfig->encodingProfile = $encodingProfileConfig;
$jobConfig->output = $s3Output;
// ENABLE DASH OUTPUT
$jobConfig->outputFormat[] = new DashOutputFormat();
// ENABLE HLS OUTPUT
$jobConfig->outputFormat[] = new HlsOutputFormat();

Start the encoding

// RUN JOB AND WAIT UNTIL IT HAS FINISHED
$client->runJobAndWaitForCompletion($jobConfig);

After executing this example you will have two manifests: one for MPEG-DASH and one for HLS.

Video Player Setup

In this section, we will set up multiple video players to play back the media encoded on the Bitmovin platform. We will use the CDN-hosted versions of the players to lessen the load on our server and to always have the most up-to-date version.

Bitmovin HTML5 Player

The Bitmovin team works to maintain the most technologically advanced video player currently on the market. The Bitmovin player also has the fastest load time compared to other online video players, and it contains a deep stack of advanced features.
You can access the player analytics page and other player tutorials from inside the Bitmovin portal. When you are ready to deploy the player to a domain, you will need to whitelist that domain using the Bitmovin portal.

<!DOCTYPE html>
<html>
<head>
   <meta charset="utf-8">
   <meta http-equiv="Content-Type" content="text/html; charset=utf-8"/>
   <title>Bitmovin V7 Player (DASH / HLS / MP4)</title>
   <script src="https://bitmovin-a.akamaihd.net/bitmovin-player/stable/7/bitmovinplayer.js"></script>
</head>
<body>
<div id="unique-player-id"></div>
<script type="text/javascript">
   var player = bitmovin.player("unique-player-id");
   var conf = {
      key: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
      source: {
         dash: "https://example.com/path/to/your/dash-content-master-playlist.mpd",
         hls: "https://example.com/path/to/your/hls-content-master-playlist.m3u8",
         progressive: "https://example.com/path/to/your/content.mp4"
      }
   };

   player.setup(conf).then(function (value) {
      console.log("Successfully created bitmovin player instance");
   }, function (reason) {
      console.log("Error while creating bitmovin player instance");
   });
</script>
</body>
</html>

This is a simple test page displaying the Bitmovin player. By default, the Bitmovin player supports playback of both HLS and DASH streams, along with progressive MP4 support. A demonstration of different configuration options can be found in our Player Configuration. The player supports Unlimited Customization as well as many advanced features.

Google’s Open Source Shaka Player

Next, let’s set up Google’s open source Shaka player. By default, Shaka only plays back DASH streams. Shaka is considered a good reference player for a team wanting to build an internal player product, but currently there is no commercial support available for the player.

<!DOCTYPE html>
<html>
  <head>
    <!-- Shaka Player compiled library: -->
    <script src="https://cdnjs.cloudflare.com/ajax/libs/shaka-player/2.0.3/shaka-player.compiled.js"></script>
    <script>
    function initPlayer() {
        // Install polyfills for legacy browser support.
        shaka.polyfill.installAll();

        // Find the Shaka Player video element.
        var video = document.getElementById('video');

        // Construct a Player to wrap around it.
        var player = new shaka.Player(video);

        // Attach the player to the window so that it can be easily debugged.
        window.player = player;

        // Listen for errors from the Player.
        player.addEventListener('error', function(event) {
          console.error(event);
        });

        // The URL of the DASH manifest.
        var mpdUrl = 'http://example.com/dash-content/manifest.mpd';

        // Load the source into the Player.
        player.load(mpdUrl);
    }
    document.addEventListener('DOMContentLoaded', initPlayer);
    </script>
  </head>
  <body>
    <video id="video" width="640" poster="//shaka-player-demo.appspot.com/assets/poster.jpg" controls autoplay></video>
  </body>
</html>

DASH.JS Reference Player

In this example, we will set up the DASH.JS reference player from the DASH-IF industry forum. This player typically supports the latest MPEG-DASH features. The player is also a continually moving target, meaning that new feature additions can sometimes cause player issues with previously working DASH streams.
You can access the DASH.JS GitHub repository at https://github.com/Dash-Industry-Forum/dash.js/.

<html>
<head>
    <!-- DASH.JS include -->
    <script src="http://cdn.dashjs.org/latest/dash.all.min.js"></script>
</head>
<body>
<h1>DASH.JS</h1>
<div>
        <video data-dashjs-player autoplay src="http://example.com/dash-content/manifest.mpd" controls></video>
    </div>
</body>
</html>

JW Player’s Video Player

Next, we will set up media playback from the Bitmovin platform using another commercial video player. By signing up at https://www.jwplayer.com/, you get access to the basic player functionality. Unlike Bitmovin’s player, which gives you full access to all features, to enable adaptive bitrate streaming you will need to purchase at least the minimal premium version of the player from JW.

<html>
<head>
    <script src="//content.jwplatform.com/libraries/pY4X09ce.js"></script>
</head>
<body>
<h1>JWPlayer</h1>
<div id="myElement"></div>
        <script type="text/javascript">
          var playerInstance = jwplayer("myElement");
          playerInstance.setup({
            file: "/player/content2/stream.mpd"
          });
        </script>
</body>
</html>

HLS.JS Setup

HLS.JS is another popular open source player option. As HLS.JS is open source, there are no commercial support options available. Like Shaka, HLS.JS is a good option if you are looking to develop an internal player. The setup process is easy, just like all the other HTML5 video players. You can access the GitHub repository at https://github.com/dailymotion/hls.js/.

<script src="https://cdn.jsdelivr.net/hls.js/latest/hls.min.js"></script>
<video id="video"></video>
<script>
  if (Hls.isSupported()) {
    var video = document.getElementById('video');
    var hls = new Hls();
    hls.loadSource('https://example.com/path/to/your/hls-content-master-playlist.m3u8');
    hls.attachMedia(video);
    hls.on(Hls.Events.MANIFEST_PARSED, function() {
      video.play();
    });
  }
</script>

What’s Next

You can create a free Bitmovin account for testing and creating your own online media experiences by signing up here.
We hope this article has given you insight into how to use the Bitmovin video platform for encoding and video playback. We also hope it has demonstrated that you can use the Bitmovin platform for encoding without disrupting your existing video player infrastructure. If you have any additional questions or would like to discuss next steps for integrating Bitmovin into your media pipeline, please contact us here.

The post Utilizing Bitmovin Encoding Platform with Multiple Players appeared first on Bitmovin.

]]>