Player Web X – A New Player for a New Era
Tom Macdonald – Bitmovin – https://bitmovin.com/blog/introducing-player-web-x/ – Wed, 05 Jul 2023

Over the past few months, we’ve given insight into how we were evolving as a team and updating the Bitmovin Player for the future with our past blogs on “Why Structured Concurrency Matters for Developers” and “Developing a Video Player with Structured Concurrency”. In this third and final installment, the moment has finally come to highlight and detail our latest development and welcome the next iteration of our Player’s Web SDK.

Player Web X, or Web Player Version 10, is the newest innovation from the Bitmovin Player engineering team and is already showing promising results compared to other players on the market.

In this blog, we will go into more detail regarding the framework, the Player proof of concept (POC) our team built within just 2 weeks, and our path to making it a minimum viable product (MVP).

Reinventing Playback – Player Web X

The new player was built using a new framework, which was reimagined and built from the ground up using structured concurrency (see previous blog). The architecture of this framework allows for a lot of flexibility and, ultimately, optimal performance.

Because of this, Player Web X is specifically designed to be lightweight and performant. This has the benefit of providing the highest quality of experience for viewers and a great developer experience for those deploying a streaming service.

Notably, Player Web X’s enhanced performance is not limited to high-end devices. Keeping performance optimal on lower-powered devices such as Smart TVs and set-top boxes (STBs), where hardware can be a limiting factor, is often even more important to ensure a similar experience for all viewers.

In order to validate the performance benefits of this new framework and POC player, we ran benchmarking experiments against industry-leading open-source web players. We’ll discuss the details further down, but one reason we find Player Web X’s performance so promising is that these results come from our 2-week POC, which has yet to be production-optimized.

Benchmarking – The Results 

For these benchmarking tests, we ran a minimum of 40 tests for each player and stream combination for each metric, using Tears of Steel and Sintel.

Three metrics were measured:

  • Video Seeking: The time taken to resume playback after jumping to a random position in the timeline
  • Video Startup: The time taken to begin playback after loading a new source
  • Source Switching: The time taken to switch between two loaded sources
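For context, here is a minimal sketch of how such timings can be captured in a browser. This illustrates the general approach only and is not our actual test harness; the helper names are invented:

```typescript
// Generic helper: time an async operation in milliseconds.
// performance.now() is available in browsers and in Node.js 16+.
async function measure(task: () => Promise<void>): Promise<number> {
  const start = performance.now();
  await task();
  return performance.now() - start;
}

// Resolve on the next occurrence of an event, e.g. "seeked" or "playing".
function once(target: EventTarget, event: string): Promise<void> {
  return new Promise((resolve) =>
    target.addEventListener(event, () => resolve(), { once: true })
  );
}

// In a browser, the metrics roughly map onto media-element events, e.g.:
//   seeking: set video.currentTime, then measure(() => once(video, "seeked"))
//   startup: load the source, then measure(() => once(video, "playing"))
```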

The three open-source players we used were:

  • Shaka Player
  • HLS.js 
  • Video.js

Each web player was set up with the most basic integration without any additional features beyond basic playback (i.e., no DRM, Ad insertion, etc.). The first set of tests was done on a Chrome browser, and you can see the correlating results below:

Graph of each player’s performance on Chrome

Test for video seeking time taken between the Bitmovin Player Web X, HLS.js, and Shaka Player

Test for video startup time taken between the Bitmovin Player Web X, HLS.js, Video.js, and Shaka Player

[Box plot omitted – hls.js source switching times (ms): minimum 657.56, lower quartile 687.66, median 877.67, upper quartile 1,366.68, maximum 1,627.60]

Test for source switching time taken between the Bitmovin Player Web X, HLS.js, Video.js, and Shaka Player

These box plots show the distribution of the 40 test runs, highlighting the median and interquartile range of the data points.
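For reference, the statistics behind such a box plot are simple to compute. This sketch uses linear interpolation between closest ranks, which is one of several common quartile conventions (not necessarily the one our charting library uses):

```typescript
// Compute a quantile of a sorted sample using linear interpolation
// between the closest ranks.
function quantile(sorted: number[], q: number): number {
  const pos = (sorted.length - 1) * q;
  const lo = Math.floor(pos);
  const hi = Math.ceil(pos);
  return sorted[lo] + (sorted[hi] - sorted[lo]) * (pos - lo);
}

// The five-number summary drawn by a box plot.
function boxPlotSummary(samples: number[]) {
  const s = [...samples].sort((a, b) => a - b);
  return {
    min: s[0],
    lowerQuartile: quantile(s, 0.25),
    median: quantile(s, 0.5),
    upperQuartile: quantile(s, 0.75),
    max: s[s.length - 1],
  };
}
```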

Test Overview on Chrome

Test 1: Video Seeking 

The Player Web X POC has a larger range (total and interquartile) than both hls.js and Shaka Player. However, from the graph, you can see the median seeking time for Player Web X is faster than the median times for HLS.js and Shaka Player. Note that Video.js is not shown in this graph because its seeking times were far higher than those of the other three players, which made it impossible to display all of the data legibly together.

Test 2: Video Startup 

HLS.js has a very symmetrical distribution of startup times, while the other three players have some slower startup times that sit far from their relatively narrow interquartile ranges. Notably, Player Web X has a larger interquartile range than the open-source players, meaning its start time is not quite as consistent. However, 75% of all startup events observed for Player Web X were faster than the median startup times of all the open-source players. And, as in the video-seeking test, the fastest startup time for Player Web X beats the fastest startup time of every open-source player.

Test 3: Source Switching 

The interquartile range for the four players is comparable, showing that source switching is not as consistent as startup, given the handling of two sources instead of just one. Player Web X has a larger overall range, though the slowest source switch is still not as slow as the slowest from the open-source players. And again, like with the other metrics, Player Web X’s median value for source switching outperforms the median values for the other players. 

It also has to be mentioned that for all three test metrics, the fastest Player Web X time is faster than the quickest time for all of the open-source players. 

For the second round of testing, we ran the same three tests on a 2021 Samsung Tizen SmartTV, and the correlating results can be seen below.

Graph of each video player’s performance on Samsung Tizen 2021 SmartTV

Test for video seeking time taken between the Bitmovin Player Web X, HLS.js, and Shaka Player

Test for video startup time taken between the Bitmovin Player Web X, HLS.js, Video.js, and Shaka Player

Test for source switching time taken between the Bitmovin Player Web X, HLS.js, Video.js, and Shaka Player

On this device, Player Web X still performs well compared to the other open-source players, and it consistently outperformed Shaka Player on average across all tests. However, there is a clear area for improvement when compared to HLS.js on this SmartTV device, as HLS.js performed better across all three tests. We have identified two causes: Player Web X’s parsing is not yet optimized for devices with less powerful hardware like Smart TVs, and the player architecture is based on a heartbeat (using setInterval). Taken together, these lead to delays that are more noticeable on Smart TVs. Both factors will be resolved as we continue to improve our codebase, including the plug-ins used in these tests.

Plug-in Marketplace

Modern services require a large number of features and integrations simply to keep up with the experience modern consumers expect. Many streaming companies feel they have to fragment their own service in order to accommodate specific features on different devices.

That’s why, for Player Web X, we’re creating a new plug-in system: a more flexible and expansive framework to help clients using the Bitmovin Player, regardless of their use cases. This system will give development teams more control over what is included in their deployed player bundle, eliminating unnecessary bloat.

The plug-in system is an extension of the approach we took in Bitmovin Player Version 8 with the module system. However, it is far more extensive, as rather than having modules that extend the player, the plug-in system is built into the framework that underlies Player Web X, meaning that the player itself is built of plug-ins, which can be overridden or extended. This will allow for far more granular and powerful solutions to various use cases.
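Since the plug-in API is not yet public, the following is a purely hypothetical sketch of what an override-capable plug-in registry could look like. Every name here is invented for illustration:

```typescript
// Hypothetical registry: the player core looks up behaviour by name, so a
// deployment can override any built-in piece with its own implementation.
type Plugin = (input: string) => string;

class PluginRegistry {
  private plugins = new Map<string, Plugin>();

  register(name: string, plugin: Plugin): void {
    // Later registrations override earlier ones, built-in or not.
    this.plugins.set(name, plugin);
  }

  resolve(name: string): Plugin {
    const plugin = this.plugins.get(name);
    if (!plugin) throw new Error(`No plug-in registered for "${name}"`);
    return plugin;
  }
}

// A built-in behaviour...
const registry = new PluginRegistry();
registry.register("manifest-parser", (text) => `default:${text}`);
// ...overridden by an integrator's custom plug-in.
registry.register("manifest-parser", (text) => `custom:${text}`);
```

The idea is that a bundle only includes the plug-ins it actually registers, which is what keeps the deployed player lightweight.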


Additionally, in order to allow our customers to support their own use cases, we will be releasing an open-source plug-in template. This template will enable our clients, partners, and developer community to create their own plug-ins for Player Web X and enhance their service to achieve an optimal and unique viewer experience. This is particularly powerful for teams that want to have the flexibility of an open-source web player with the performance and stability of a commercial player.

Player Web X’s Current Stage and Next Steps

Player Web X is currently in alpha testing with some of our existing partners to help us on the road to making it an MVP. At IBC2023 in September, we will share more updates on its progression and where we are in the process. In the meantime, if you’d like to become an early adopter or would like more information about anything in this blog post, please get in touch.

Also, if you’d like to see how Bitmovin’s solution stack can benefit your streaming workflow – sign up for our 30-day free trial.

Developing a Video Player with Structured Concurrency
https://bitmovin.com/blog/developing-video-player-with-structured-concurrency/ – Fri, 10 Mar 2023

In software architecture, as in any problem-solving domain, it is important to solve the right problem in the right place.

A very well-known example of this is the React Virtual DOM. Before this concept was introduced, web developers had to add, update, and delete DOM nodes on their own. React removes this concern by collecting the outputs of all the components in the tree into memory, comparing the two trees (the virtual one and the real one), and updating the DOM with the differences. This is great for the programmer, who no longer has to think about those updates and can concentrate on their domain. It also allows React to decide when is the best time to apply the result to the browser DOM, enabling global performance improvements across the application. Furthermore, because every component is pure, unit testing is trivial. And there are other benefits too, like being able to target a completely different platform than the DOM, as in React Native.

Another example is the Erlang VM, BEAM. Designed for scalability and resilience, this VM implements concurrency using the actor model. This is great for writing software that has to manage many parallel operations, once again saving the programmer from having to think about such things. But, due to the use of message passing, it also allows code to be scaled up transparently to an arbitrary number of cores or even machines without needing to change the codebase materially. Of course, as processes are independent, it is possible to let misbehaving processes crash without the overall system going down.

To put it another way, solving a problem in the right place has a compounding effect on the rest of the decisions that will be made while architecting software. This will lead to the domain logic being easier to express and understand, as the programmer no longer needs to think about orchestration concerns or side effects like DOM node manipulation.

Structured concurrency solves the right problem in the right place

In my previous article, I introduced structured concurrency and the benefits my team and I experienced when implementing it. I also mentioned the framework we built and how it performed exceptionally well when we created a player POC using the framework.

These results are an excellent example of why solving a problem in the right place is so powerful, just like React’s Virtual DOM or the BEAM actor model. By solving the problem of “how to synchronise execution” or “how to avoid race conditions and dangling errors” as a fundamental part of the framework, it becomes possible to abstract away the async problems and concentrate on solving the domain problem instead, like building a video player.

Async bugs are a plague on programs that are highly dynamic and data-heavy. At Bitmovin, we know a thing or two about this, as video players are very much of that type: to stream video reliably, it is necessary to wait for and synchronise multiple different asynchronous browser APIs (Fetch, the MSE API, and so on).

Check out this pull request, for instance. It fixes an async bug in a large and, in our considered opinion, very high-quality video player.

The bug in question is a classic case of a race condition; during seeking, the previous version of the code was sometimes calling a method on an object after the object itself was disposed of. This was leading to unhandled exceptions and a player crash that was difficult to reproduce.

I know from experience that a non-trivial amount of time is spent hunting for, understanding, and fixing this kind of issue, even though the fix itself is only a line or two of code. So wouldn’t it be better if we could avoid such issues systemically, rather than spending engineering time tracking them down one by one? This is exactly what structured concurrency offers.

A concurrent hierarchy of scopes

The Typescript framework that we have been working on implements structured concurrency so that we no longer have to worry about these problems.

In our framework, units of code are run within a scope that is known to the framework. It is possible to start other units of execution from within a scope and that code will then be executed in another dependent scope. Should an error occur, the error is automatically propagated to the parent scope, which either handles the error or recursively cancels all concurrent scopes before propagating the error in turn to its parent.

It is a trivial operation to fork a new scope, and this, in turn, allows the framework to intercept execution to implement cancellation. In this way, a parent scope can also cancel the execution of children.

Lastly, each scope is kept alive until all child scopes have finished executing.

This leads to the formation of a hierarchy of scopes, where any subtree is completely self-contained, cancellable, and disposable in case of error. It is possible to catch errors at any level of the tree and handle them gracefully while ensuring that the concerned sub-tree will no longer be referenced in any way.
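The framework itself is not public, but the behaviour described above can be sketched in plain Typescript. The `Scope` class below is an illustrative reconstruction built on `AbortController`, not the actual implementation, and it omits the bookkeeping that keeps a scope alive until all of its children finish:

```typescript
// Illustrative sketch of a structured-concurrency scope tree.
class Scope {
  private children: Scope[] = [];
  private controller = new AbortController();

  constructor(parent?: Scope) {
    parent?.children.push(this);
  }

  get signal(): AbortSignal {
    return this.controller.signal;
  }

  // Forking a dependent scope is a trivial operation.
  fork(): Scope {
    return new Scope(this);
  }

  // Cancellation recursively propagates down the whole subtree.
  cancel(): void {
    for (const child of this.children) child.cancel();
    this.controller.abort();
  }

  // Run a unit of code in this scope; an error cancels the subtree
  // before propagating up to the caller, i.e. the parent scope.
  async run<T>(task: (scope: Scope) => Promise<T>): Promise<T> {
    try {
      return await task(this);
    } catch (error) {
      this.cancel();
      throw error;
    }
  }
}
```

Any subtree rooted at a `Scope` can then be cancelled as a unit, and an error inside `run` tears the subtree down before the parent ever sees the exception.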

Ultimately, writing code using the framework is as simple as writing standard async Typescript. There is no longer a need for the programmer to worry about race conditions due to unexpected errors, and cancellation is also possible, to the extent permitted by Javascript’s event loop. The programmer is, therefore, responsible for avoiding long-running calculations, which is no different from standard practice in Javascript.

Structured concurrency and scope termination

Performance is a consequence of structured concurrency

Let’s return to the bug we examined earlier. In our new model, the focus is no longer on objects but on scopes and lifetimes. There would be a seek scope that forks the necessary sub-scopes. Because the scopes form a tree, it is essentially impossible to access an object that no longer exists: a scope only has access to the context of its parent, and that parent scope remains available until after the child has ended, so the bug simply could not exist.

The reason I am sure of this is that, to see our framework in action, we built a player POC that leverages its capabilities, and in this player, seeking works precisely as described above.

One great side effect of the programming model was that it turned out to be quite simple to implement background source loading and blazingly fast source switching. The way that this works is, when the source is switched, we can simply cancel the manifest download pipeline and start a new one with the new URL. All dependent scopes, like segment downloads and manifest parsing, are simply and immediately cancelled. There is no need to synchronise objects, re-instantiate them, or reset data storage.
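As a rough sketch of the idea, here is what source switching can look like when a plain `AbortController` stands in for the framework’s scope cancellation. All names here are invented for illustration, and the pipeline body is a stand-in for the real manifest and segment handling:

```typescript
// Stand-in for the real download pipeline: a loop of dependent async
// steps, each of which stops once the cancellation signal fires.
async function runPipeline(url: string, signal: AbortSignal): Promise<number> {
  let segments = 0;
  while (!signal.aborted) {
    await new Promise((resolve) => setTimeout(resolve, 10));
    // e.g. await fetch(`${url}/segment-${segments}.m4s`, { signal });
    segments++;
  }
  return segments;
}

let currentPipeline: AbortController | undefined;

// Switching sources: cancel the entire old pipeline (manifest download,
// parsing, segment downloads) in one step, then start the new one.
function loadSource(url: string): void {
  currentPipeline?.abort();
  currentPipeline = new AbortController();
  void runPipeline(url, currentPipeline.signal);
}
```

Nothing is reset or re-synchronised by hand; abandoning the old pipeline’s scope is the whole teardown.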

The effect of this is that, although the player is only a technology preview at this time, it performs above expectations. I will present some actual figures in my next post on the subject, but for now, it’s enough to say that it already performs incredibly well for metrics such as seeking, switching and even startup time. As explained above, I believe this is because we are not forced to manually orchestrate promises and callbacks, which means that efficiency is there by default.

Overall, I’m very excited about the potential of the framework due to the performance and the productivity my team has been able to achieve using it to build the player POC. In my next blog post, I will go into the benchmarking data I mentioned earlier and why the performance of our player POC has made it worthy of your attention.

Why Structured Concurrency Matters for Developers
https://bitmovin.com/blog/structured-concurrency-matters-developers/ – Thu, 09 Feb 2023

What is the problem with async code?

As every developer knows, writing concurrent code is hard. Ask any project manager and they will agree: the bugs in their projects that are the hardest to figure out and fix are usually resolved only when the programmers finally diagnose a race condition of some kind.

This is not restricted to one programming language either. More or less every language that is older than about 10 years and based on a procedural or OOP approach seems to have problems in this area. In Javascript, these problems are very much present when writing async code.

For instance, let’s consider the Bitmovin MSE demo. In this very simplified example, to avoid problems, all the steps are scheduled in sequential order, using callbacks. The opening of the MSE triggers the downloading of the first segment, which then triggers the appending of the segment to the source buffer, which triggers the downloading of the second segment, and so on.

To get things ready a bit faster, it would make sense to start downloading the first segment at the same time as opening the source buffers: playback depends on both, but they are independent of each other. In this case, we could use Promise.all and start both at once, but we immediately run into a problem: if opening the source buffers fails, the download will continue regardless. We have exchanged one problem (waiting unnecessarily to start the first download) for another (downloading unnecessarily and throwing the data away, or worse, keeping it somewhere and causing a memory leak).

Of course, in every specific situation, a fix can be implemented by carefully making sure that error conditions are covered properly, so the problem cannot occur. In Javascript specifically, this means careful orchestration of promise chains, so nothing is left dangling (executing when it shouldn’t) and all stale data is cleared.
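Concretely, the hazard and the careful manual fix can be sketched as follows. This is a simplified Typescript illustration with invented function names, not the actual demo code:

```typescript
// Two independent async steps started together with Promise.all. If
// openSourceBuffers() rejects, Promise.all rejects immediately, but
// downloadSegment() would keep running and its result would be thrown
// away (or worse, retained). The manual fix: thread a cancellation
// signal through and abort the sibling when either side fails.
async function setUpPlayback(
  openSourceBuffers: () => Promise<void>,
  downloadSegment: (signal: AbortSignal) => Promise<ArrayBuffer>,
): Promise<ArrayBuffer> {
  const controller = new AbortController();
  try {
    const [, segment] = await Promise.all([
      openSourceBuffers(),
      downloadSegment(controller.signal),
    ]);
    return segment;
  } catch (error) {
    controller.abort(); // stop the dangling download
    throw error;
  }
}
```

The explicit abort covers this one error path, but every new pair of concurrent operations needs the same boilerplate again.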

A more general solution would be better. Luckily, there is one.

What is structured concurrency?

The kind of problem we examined above is fairly common in applications that are data-heavy and highly dynamic. Web video players are both of those things, so here at Bitmovin, we have had our fair share of interesting work in this area. Therefore we always have an eye out for fresh approaches to solving this type of problem. So, when we discovered structured concurrency, we felt we had to take a look.

Structured concurrency is, in hindsight, a very obvious idea. It is simply the recognition that when programming languages let the programmer start concurrent executions without any constraints, certain classes of problems inevitably arise.

In cases where a parent starts a parallel child execution, the parent can terminate before the child does. This may mean that errors in the child are simply not handled anywhere. This can cause a variety of problems, including deadlocks and application crashes.

An example of the main thread terminating before the child thread. In this case, the language has no place to send the error message or exception.

Similarly, two parallel executions (like in our download and MSE initialization example) can result in a situation where one continues executing even when the other has already thrown an error. There may be code that is waiting for the result of both executions, in which case the result of the second execution will never be used.

For an in-depth discussion of these problems, the NJS blog on structured concurrency provides good examples as well.

How does structured concurrency work?

Structured concurrency, then, is simply a set of rules that should be followed to avoid these types of errors.

  • All asynchronous code should have an explicit parent context, which should not terminate execution until all child contexts have.
  • All errors must propagate up until they are handled.
  • As a consequence of the first rule, termination must also propagate down: any children of an execution context must be terminated when an error occurs in that context.

What these rules describe is a tree of concurrent execution that it is possible to reason about. Because the programmer knows that any results will be brought into the parent scope, and any errors can also be handled there, the thread execution, just like a normal function call, can be treated like a pure function of its inputs and used as a building block for more abstract tasks.

Even more interesting is that because these rules are just conventions, it is possible to write a framework that implements them, even in the absence of specific language support. Users of such a framework will get the benefits of a more efficient programming model.

With structured concurrency, the main (or parent) thread must wait to terminate until all child threads have terminated. This means that the error can be bubbled up.

At Bitmovin, we have been working on exactly this: a structured concurrency framework for dynamic, data-heavy applications in Typescript that will support our work on video player technology.

Using our framework, our example from above is not modified in any significant way from the developer perspective. We would still run both asynchronous processes in parallel (via an equivalent to Promise.all). However, should one of them throw an error, the other would be automatically cancelled, and all contextual data cleaned up. This means that it becomes completely trivial to simply wait for the result of both executions, then continue pushing the segment into the MSE.
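While the framework’s API is not public, the behaviour described here can be approximated in plain Typescript. `allOrCancel` below is an invented name for an illustrative helper, not the framework’s actual API:

```typescript
// Run tasks concurrently; if any task rejects, signal cancellation to
// all of the others before propagating the error, so the caller never
// has to orchestrate the teardown by hand.
async function allOrCancel<T>(
  tasks: Array<(signal: AbortSignal) => Promise<T>>,
): Promise<T[]> {
  const controller = new AbortController();
  try {
    return await Promise.all(tasks.map((task) => task(controller.signal)));
  } catch (error) {
    controller.abort();
    throw error;
  }
}
```

A real structured-concurrency framework goes further (scope trees, contextual clean-up, propagation rules), but even this small helper removes the per-call-site teardown code from the example above.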

But it gets better. Because it is possible to reason about asynchronous executions, it becomes possible to compose them. This, in turn, means it becomes possible to design complex, data-heavy, concurrent applications, like video players, with no fear of introducing complicated concurrency bugs along the way. We know this because we have already written a video player using our framework, and not only does it have no concurrency bugs, it performs remarkably well.

In our next blog post, we will go further into the advantages of structured concurrency and how we’ve used it in the right places to solve problems in a way that pays off over time. To showcase this, we will show results from our new Player as a proof of concept and present benchmark numbers.
