Fabre Lambeau – Bitmovin (https://bitmovin.com)

Cloud-based Per-Title Encoding Workflows (with AWS) – Part 3: Adding the Player and Video Analytics
https://bitmovin.com/blog/implementing-video-player-per-title-encoding-aws-p3/ | Wed, 24 Mar 2021

Cloud-Based Workflows with AWS: Plugging in the Video Player and Video Analytics

In the first two parts of this series on using Bitmovin’s Per-Title Encoding on AWS, we’ve focused on the architecture of the application and setting up encoding using Bitmovin Cloud Connect. But now that we have our encoded videos saved to Amazon S3, what’s next? In this part, I’ll complete the circle by introducing Bitmovin’s Video Player and Video Analytics products. I’ll explain how these two tools work together, and how to use them to gather data on user interaction with your content and measure quality of service. Finally, I’ll walk you through the setup that we used in our 2020 Bitmovin + AWS Hackathon to demonstrate the cost savings and performance of per-title encoding.

Why Do Video Analytics Matter?

Understanding how your content performs is important for a few reasons. First, detailed analytics can help you improve your quality of service, as in the case of Telekom Slovenije:

“As a customer might call in, an agent could check the types of streams the user watched, which errors they were having and on which device, and would distinguish if the error is detected on a hardline or on the actual network. With the simple analytics collector and API implementation, Telekom Slovenije was able to reduce their support tickets by roughly 30 percent.”

Not all video analytics providers offer as much granularity as Bitmovin. One of the big advantages of using a dedicated service for your video analytics is that you don’t have to know exactly what metrics you want to track in advance. At Bitmovin, we record over forty metrics.
The Bitmovin dashboard is the easiest way to have a first look at your data. It breaks the data down into 3 areas:

  • Audience shows you how people are engaging with your content. Metrics like number of plays, unique users, ISP, location, and view time are all available here.
  • Quality of Service tells you more about the user experience of your videos, which includes data like start time, bandwidth used, bitrate, etc.
  • Advertising is a must-have if you rely on advertising to fund your content, with metrics such as click-through rates, successful ad plays, and relative ad spot performance.

The Bitmovin Video Player

Getting this much data from users who are streaming your videos requires being deeply embedded in the playback sessions, and therefore in the players themselves. That’s where the Bitmovin Video Player comes in.
The Bitmovin Video Player provides the widest device support for playing your videos with efficient adaptive algorithms and multiple codecs, lets you dynamically insert ads into those streams, and wraps it all in a rich, universal yet configurable UI. It also contains an event-based engine that pushes rich playback data to the Bitmovin Analytics solution, giving you fine-grained, accurate insight into how users are watching your videos.

Bitmovin Video Player in Action

Proof through Video Analytics

In the first and second parts of this blog series, in which we described the architecture and implementation of our application, we touched only briefly on the differences between a workflow that generates a static ladder and one that generates a ladder optimized with the Per-Title algorithm. That’s because those differences are small and have no material impact on the implementation.
However, it is now time to bring this back to the fore. We are setting out to prove that Per-Title gives you significant savings when used in your production workflow, without impacting the playback experience. We can only do this through actual comparisons between different outputs encoded from the same assets.
There are usually two main ways in which Per-Title encoding delivers operational savings: reduced storage costs and reduced bandwidth costs.
The difference in storage costs is easy to calculate directly from the output of the encoding. Simply look at the difference in the total file size generated for the two encodings. The ratio between those will give you a simple and generally reliable answer. You can look at the files themselves on your Output bucket, or query the Bitmovin platform to retrieve the encoding’s statistics. Since Per-Title will behave differently with different assets, it is best to take the average across a few representative assets into consideration for this calculation. 
For bandwidth savings, it gets a bit more complicated. You could obviously look at the difference in bitrate between renditions in your 2 ladders, but there are a few complicating factors: the ladders will have a different number of renditions and different bitrates between them. And in reality, nobody streams all the renditions of your ladder at the same time. Which renditions are actually played very much depends on your audience, what bandwidth they have available, what device they are playing on, etc. You can try to model this playback usage, but at the end of the day, there is no better data than real data. Enter Bitmovin Analytics…

Bitrate expenditure for an encoding in Bitmovin’s Video Analytics
Running a streaming simulation in the demo page, based on a parallel playback session of the same asset with the 2 ladders, shows a 49% saving in streaming costs for Per-Title. But how do we scale this to multiple assets, users, and playback sessions?

What metrics should we use for this?  We are obviously looking at the quality of service here, and the data we are after is captured by two main metrics: 

  • Data Downloaded, which shows the amount of video data downloaded by users during their playback sessions.
  • Video Bitrate, which shows the average played bitrate across all plays on the platform. We expect this to be reduced by the use of Per-Title.

Whilst we are at it, there are probably a few other metrics that we may want to consider keeping an eye on when evaluating how Per-Title ladders behave:

  • Video Startup Time: a Per-Title ladder should not cause the startup time to increase
  • Rebuffering: A Per-Title ladder will usually contain fewer renditions than a static one. This should not be to the detriment of the playback session
  • Scale Factor, which is a numeric indicator of the relationship between the playback window size and the resolution your stream was delivered at. Most of the time Per-Title will allow users to stream higher resolutions at similar or lower bitrates compared to a fixed ladder, and therefore users will more often and more quickly get to watch the video that matches their player’s native resolution and stay at that level throughout the streaming session. We should therefore see this number get closer to 1 for the content encoded with Per-Title.

The best way to perform this comparison is to use an A/B test scenario. A/B tests are usually used to test the performance of different CDNs, or test the stickiness of different marketing videos. Here we will pitch static ladders against Per-Title ladders. 
Luckily, Bitmovin Analytics is perfectly suited to A/B testing through experiments.

Implementation

But before we get there, let’s complete our discussion of the implementation that will give us that information. In the previous two parts of this series, you saw how to use AWS Lambda to save video metadata and playback URLs to DynamoDB. In this section, I’ll walk you through the steps you need to add the Bitmovin Video Player and Video Analytics solutions to your application.

Adding Video Player and Video Analytics into an Encoding Workflow with AWS

Embedding and Configuring the Bitmovin Web Player

We’ll create a very simple HTML page and embed the Bitmovin Web player into it.
First you will need to retrieve your Bitmovin Video Player license via the dashboard, and configure it to be allowed on your domain. 
To embed the video player, simply add the Bitmovin Player JavaScript library to the `<head>` section of your HTML file:

Embedding the Bitmovin Video Player
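As a rough sketch, assuming the v8 web player loaded from Bitmovin’s CDN (check the dashboard or player documentation for the exact URL recommended for your account), the `<head>` section might look like this:

```html
<head>
  <!-- Bitmovin Web Player (v8) loaded from the Bitmovin CDN -->
  <script type="text/javascript"
          src="https://cdn.bitmovin.com/player/web/8/bitmovinplayer.js"></script>
</head>
```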

Next, add a `<div>` that will contain the instance of your player, and use the following JavaScript snippet to instantiate the player with your license key:

Adding the Bitmovin Player instance to your page
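A minimal sketch of that embedding, assuming the v8 API and a container `<div>` with the id `player` (replace the license key placeholder with your own):

```html
<div id="player"></div>

<script type="text/javascript">
  var playerConfig = {
    key: 'YOUR-PLAYER-LICENSE-KEY'
  };

  var playerContainer = document.getElementById('player');
  var player = new bitmovin.player.Player(playerContainer, playerConfig);
</script>
```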

This adds an instance of the Bitmovin Player to your page. Now you need to populate it with some video data.

Passing Video Data Into the Video Player

If you’re emulating the AWS-based architecture in the previous section, you need to retrieve the video metadata saved to DynamoDB first. DynamoDB has a JavaScript API, so depending on how you want to query it, your code for retrieving records could look something like this:

Retrieving Video Metadata Records
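As a sketch, using the AWS SDK for JavaScript (v2) DocumentClient, a hypothetical `videos` table and a hypothetical `populateAssetDropdown` helper, a scan could look like this:

```javascript
// Assumes the AWS SDK for JavaScript (v2) is loaded and configured with
// credentials that are permitted to read the table.
var docClient = new AWS.DynamoDB.DocumentClient();

docClient.scan({ TableName: 'videos' }, function (err, data) {
  if (err) {
    console.error('Unable to retrieve video metadata', err);
    return;
  }
  // data.Items holds the records saved by the post-encoding Lambda function
  populateAssetDropdown(data.Items);
});
```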

In our Demo application we were loading 2 players side by side, allowing the user to select an asset from a dropdown. We were retrieving all data for our small list of assets through a `dynamodb.scan()` operation. For a more realistic application, you will probably want to query data for a single asset instead through a `dynamodb.query()` call, and then use a random (or controlled) way of selecting playback information for the Per-Title or static ladder.
Note also that you’ll probably want to find a more secure way of enabling access to your DynamoDB; the AWS documentation has good recommendations on this subject.
Once the relevant video data has been extracted, the `configure_player_with_data` function creates a properly formatted `source` payload with the URLs to the DASH and HLS manifests, and passes it to the `player.load()` method:

Passing Video Player Configs to the `player.load()` Method
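A sketch of such a function, assuming the DynamoDB record exposes `title`, `dashUrl` and `hlsUrl` fields (these names are illustrative, not prescribed):

```javascript
function configure_player_with_data(player, videoData) {
  var source = {
    title: videoData.title,
    dash: videoData.dashUrl,  // URL of the DASH manifest (.mpd)
    hls: videoData.hlsUrl     // URL of the HLS manifest (.m3u8)
  };

  // player.load() returns a Promise that resolves once the source is loaded
  return player.load(source);
}
```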

Integrating Analytics and Tagging Your Experiment

While the video player above works fine, we still need to connect it to our Bitmovin Analytics account to get the data flowing. This can all simply be done by modifying the `playerConfig` variable from earlier.

Connecting Bitmovin’s Video Player and Video Analytics
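With the analytics collector bundled in the web player, this typically boils down to adding an `analytics` block with your Analytics license key to the player configuration; a minimal sketch:

```javascript
var playerConfig = {
  key: 'YOUR-PLAYER-LICENSE-KEY',
  analytics: {
    key: 'YOUR-ANALYTICS-LICENSE-KEY'
  }
};
```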

We then set some of the asset metadata in the `source`, allowing us to later easily identify videos in the Dashboard or analytics data. This is also where we configure our A/B experiment by simply defining an `experimentName`, and setting its value to “static” or “per-title” based on the type of ladder selected.

Asset Metadata for Encoding Types
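A sketch of that per-source metadata, reusing the illustrative field names from above and assuming a `usePerTitle` flag that reflects which ladder was selected for this playback session:

```javascript
var source = {
  title: videoData.title,
  dash: videoData.dashUrl,
  hls: videoData.hlsUrl,
  analytics: {
    videoId: videoData.assetId,
    title: videoData.title,
    // Tag the session so the A/B comparison can be made in the dashboard
    experimentName: usePerTitle ? 'per-title' : 'static'
  }
};

player.load(source);
```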

We can now let our users loose on our video player page and collect some data. Some will get a per-title ladder, others a static ladder. Let the data flow!

Looking at the data

After running a few experiments, we can look at the results in the Bitmovin dashboard, by going to Analytics > Quality of Service > Comparison > Experiments.
This multi-column view allows you to compare key metrics side by side for your named experiments. In our simple scenario, we only have 2 columns. Other metrics that are not displayed in this view can still be accessed in their respective dashboard view and can be broken down by experiment. 

Bitmovin’s Bitrate Ladder

Although we only got a very limited sample of data from this hackathon, we could already see some important trends:

  • Video startup time actually seems lower with the per-title ladder. This may not be statistically significant, but it is certainly good news.
  • Buffering is also slightly reduced with our per-title ladder.
  • The average bitrate across playback sessions is very significantly lower with per-title, with a reduction of 65% in bandwidth. (Note that the colors for that metric are incorrectly set at the time of writing: a lower value is evidently better, assuming that the level of visual quality delivered is at least similar.)
  • Data downloaded is naturally significantly lower too.
  • Scale factor was not significantly different, however; that can be attributed to the fact that our playback sessions were probably all in favorable conditions that saw all players able to quickly jump to the top rendition, which has the same resolution in both ladders.

Another interesting comparison between the two ladders, assuming both had had a good number of views, would have been to use the video bitrate heatmap to compare how the two ladders used bitrate over the duration of the videos; it would have shown a much more efficient use of the reduced number of renditions.
There was unfortunately not enough time during our 2020 hackathon to conduct a more realistic experiment. If we had, the data would have been more accurate, and would likely have shown a slightly smaller bitrate saving, but without calling into question the clear advantages of using Per-Title.

Conclusion

In part one, you saw an example of a high-level AWS architecture you can use to perform Bitmovin’s Per-Title Encodings in your own AWS infrastructure. In part two, you learned how to implement and deploy the main parts of the workflow with AWS Lambda code and DynamoDB. Finally, in this part, you saw how to add the Bitmovin Player and Analytics to run experiments comparing Per-Title and Standard encoding ladders.
Analytics are an important part of the video streaming pipeline. Whether you use Cloud Connect or Bitmovin’s managed API, you’ve seen in this post how all three of Bitmovin’s products work together to improve the video delivery and streaming experience.
Now that you have a complete picture of what’s possible using Bitmovin, let us know how we can help. Reach out to Bitmovin’s team so we can help you get started with an efficient, scalable video streaming pipeline today.

Cloud-based Per-Title Encoding Workflows (with AWS) – Part 2: Implementing the Encoding Workflow
https://bitmovin.com/blog/cloud-based-per-title-encoding-aws-p2/ | Tue, 02 Mar 2021


Implementing Cloud-Based Per-Title Encoding in the Real World

Bitmovin’s encoding service gives developers access to industry-leading codecs and advanced encoding algorithms such as Per-Title. As you saw in the first part of this series, Per-Title Encoding allows you to save money on storage and streaming costs while giving viewers the best perceivable quality for their streaming session.
In part one of this three-part series, I outlined a demo application and discussed a practical high-level architecture to deploy Bitmovin’s encoding service into your own AWS account’s infrastructure using Cloud Connect.
In the application we’re discussing in this blog, users are able to upload videos to an S3 bucket, which triggers a Lambda function. The Lambda, in turn, calls the Bitmovin Encoding API to configure and start encoding. The Bitmovin platform then spins up instances in your AWS account to perform the encoding. On completion, playback information and metadata about the encoding are retrieved and stored, ready to be passed through to a front-end application that lets you watch the encoded asset.
In this second part, I’ll talk more about the implementation details for the encoding workflow. I’ll show you first how to enable and configure Cloud Connect, and then how to set up this AWS-based workflow to trigger Bitmovin encodings. If you want to skip ahead, check out part three of this series: Cloud-Based Workflows with AWS: Plugging in the Video Player and Video Analytics

Enabling Cloud Connect for AWS

Per-Title Encoding Workflow on AWS

The Bitmovin Cloud Connect feature essentially allows you to perform encodings on virtual infrastructure inside your own AWS account, without any restrictions on the encoding features, codecs, and algorithms that Bitmovin offers.  
Let’s first look at how you need to configure things to use Cloud Connect. You will need an AWS account, and a Bitmovin account with Cloud Connect enabled (contact us if you don’t have one). The configuration process is documented in detail here, but I’ll walk you through the major steps below.

Configuring AWS

Bitmovin will be creating resources within your AWS account, so you’ll need to create an IAM user for it. This user needs to have access to EC2 with applicable permissions. For a prototype, using the `AmazonEC2FullAccess` managed policy is sufficient.
Next, you’ll need a Virtual Private Cloud (VPC) and Security Group in your AWS account. You likely already have a default VPC, but if you deleted it, recreate it before proceeding. The security group will make sure that the EC2 instances can communicate with the Bitmovin API and with each other. 
Finally, you will likely need to request quota increases from AWS, depending on your expected workloads and concurrency requirements. That’s due to the nature of the Bitmovin encoding process, which splits the video asset into chunks and encodes them in parallel on multiple worker instances.

Configuring Bitmovin

Now that you have a new IAM user and your AWS account configured, you need to create an infrastructure resource on the Bitmovin platform. From the Bitmovin dashboard, go to Encoding > Infrastructure > Add new Infrastructure.

Creating a new AWS Encoding Infrastructure in the Bitmovin Dashboard

Select AWS at the top and enter your credentials and a name for this infrastructure. Click Create. You now need to specify and configure what AWS region you want to work in. Click Add new Region Settings.

Adding new AWS Region in the Bitmovin Dashboard

Enter your `security_group_id` and a low number of Max parallel Encodings to start with. You won’t need to set the other settings for simple use cases.
Finally, you need to request access to Bitmovin’s Amazon Machine Images (AMIs). These will be used to create EC2 instances on your AWS account, so let your Bitmovin technical contact know your AWS account number to get access to the AMIs.
This is all you need to do to enable Bitmovin Cloud Connect on your AWS account. Once you start using Cloud Connect, you’ll be able to take advantage of volume pricing discounts, security rules, and configuration options that aren’t available to customers using Bitmovin’s managed encoding service. 
Make a note of the ID of the infrastructure. You will need to use it in your encoding configuration later on to instruct Bitmovin to route encodings to your AWS account.

Bitmovin Encoding workflow on AWS

We are now ready to look at the details of the implementation of the workflow. In the first part of this series, I discussed in some detail the architectural choices we made for this demo application. In the remainder of this post, we will focus on the components that interact directly with the Bitmovin encoding platform.
You can also find more details on using the Bitmovin Encoding API in the documentation. Otherwise, read on to see some code samples and more details for setting this up yourself.

Encoding with the Bitmovin API and SDK

I’ll focus first on the AWS Lambda code that calls the Bitmovin API when a new file arrives on S3.

Lambda workflow for triggering an encoding

After you’ve set up S3 event notifications to call a Lambda function, you need to write some code that will handle the notification. Lambda supports several common web programming languages and Bitmovin provides SDKs for most of the same ones, but I’ll use the Python SDK for this example. You should be able to adapt these samples to your language of choice.
A Lambda function is configured to invoke a single handler method in the code. That method is passed an event that contains contextual information, in this case the essential details about the S3 bucket and file that triggered the function. We need to pass both into the encoding configuration.


Lambda handler function
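A minimal sketch of that handler, assuming a standard S3 event notification and the `encode` helper described below:

```python
def handler(event, context):
    # The S3 event notification identifies the bucket and object that triggered us
    s3_record = event['Records'][0]['s3']
    bucket_name = s3_record['bucket']['name']
    file_key = s3_record['object']['key']

    encode(bucket_name, file_key)
```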

You will previously have configured an S3 Input object on the Bitmovin platform that allows the encoder to grab files from that bucket. For simple workflows with a single watch folder, you would not even need to retrieve the bucket name from the event data, but here we do it to keep the code generic and allow multiple watch folders in different buckets.
The `encode` function initialises the Bitmovin API with your credentials, which are passed to the Lambda function via environment variables, then retrieves the S3 Input resource that corresponds to the bucket that triggered the event. It then creates our encoding object.

Creating an Encoding Object in Python
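A sketch of that step; the class and field names (`BitmovinApi`, `Encoding`, `InfrastructureSettings`, `CloudRegion`) reflect my reading of the Bitmovin Python SDK and should be checked against the version you use, the region is only an example, and `get_s3_input_for_bucket` is a hypothetical helper that looks up the pre-configured S3 Input:

```python
import os

from bitmovin_api_sdk import BitmovinApi, CloudRegion, Encoding, InfrastructureSettings

bitmovin_api = BitmovinApi(api_key=os.environ['BITMOVIN_API_KEY'])

def encode(bucket_name, file_key):
    # Hypothetical helper wrapping the SDK to find the S3 Input for this bucket
    s3_input = get_s3_input_for_bucket(bitmovin_api, bucket_name)

    # Cloud Connect: pass the infrastructure ID and the AWS region it was set up in
    infrastructure = InfrastructureSettings(
        infrastructure_id=os.environ['INFRASTRUCTURE_ID'],
        cloud_region=CloudRegion.AWS_EU_WEST_1)

    encoding = bitmovin_api.encoding.encodings.create(
        encoding=Encoding(
            name=f'Encoding of {file_key}',
            infrastructure=infrastructure))
    # ...continues below with the stream, manifest and start configuration
```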

Compared to a “standard” encoding, there is really only one difference when using Cloud Connect: note how we pass the infrastructure ID and AWS region when creating the encoding object. That’s all! From here on, it is a standard Per-Title configuration, which will generate an ABR ladder optimised for that video asset.

Encoding configuration for video
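Continuing inside `encode`, a sketch of the video configuration; the key point for Per-Title is a single H264 configuration used as a template stream (muxings and the S3 output are omitted here, and class names again assume the Python SDK):

```python
from bitmovin_api_sdk import (H264VideoConfiguration, PresetConfiguration, Stream,
                              StreamInput, StreamMode, StreamSelectionMode)

# One H264 codec configuration acts as the template from which the
# Per-Title algorithm derives the actual renditions of the ladder
video_config = bitmovin_api.encoding.configurations.video.h264.create(
    h264_video_configuration=H264VideoConfiguration(
        name='H264 Per-Title template',
        preset_configuration=PresetConfiguration.VOD_STANDARD))

video_stream = bitmovin_api.encoding.encodings.streams.create(
    encoding_id=encoding.id,
    stream=Stream(
        input_streams=[StreamInput(
            input_id=s3_input.id,
            input_path=file_key,
            selection_mode=StreamSelectionMode.VIDEO_RELATIVE,
            position=0)],
        codec_config_id=video_config.id,
        mode=StreamMode.PER_TITLE_TEMPLATE))
```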

In my code I make use of helper functions that wrap the Bitmovin SDK, to improve the readability and reusability of the high-level functions. I’ve left a lot of detail out because the Bitmovin Python SDK is already well documented. In particular, you can get full details on how to configure Per-Title from our tutorial.
When it comes to audio, I want to make sure that the encoding only attempts to create an audio stream if it’s present in the source. This is easily accomplished with stream conditions:

Encoding configuration for audio
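A sketch of the audio stream guarded by such a condition; the `Condition` model and the `AUDIOSTREAMCOUNT` attribute are my reading of the stream-conditions feature, so treat them as assumptions to verify against the documentation:

```python
from bitmovin_api_sdk import (AacAudioConfiguration, Condition, Stream,
                              StreamInput, StreamSelectionMode)

aac_config = bitmovin_api.encoding.configurations.audio.aac.create(
    aac_audio_configuration=AacAudioConfiguration(name='AAC 128 kbps', bitrate=128000))

audio_stream = bitmovin_api.encoding.encodings.streams.create(
    encoding_id=encoding.id,
    stream=Stream(
        input_streams=[StreamInput(
            input_id=s3_input.id,
            input_path=file_key,
            selection_mode=StreamSelectionMode.AUDIO_RELATIVE,
            position=0)],
        codec_config_id=aac_config.id,
        # Only create this stream if the source file actually contains audio
        conditions=Condition(attribute='AUDIOSTREAMCOUNT', operator='>', value='0')))
```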

We want to be able to play our video through standard web players, and we therefore also need to create HLS and DASH manifests for them. Since our use case is quite simple, I will make use of Bitmovin’s Default Manifest functionality that will create a standard manifest with very little configuration needed. So the `encode` function continues…

Encoding configuration for default manifests
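Continuing the sketch with Default Manifests; `s3_output` is assumed to be a pre-configured S3 Output resource, and the exact class and enum names should again be verified against the SDK version you use:

```python
from bitmovin_api_sdk import (DashManifestDefault, DashManifestDefaultVersion,
                              EncodingOutput, HlsManifestDefault, HlsManifestDefaultVersion)

manifest_output = EncodingOutput(
    output_id=s3_output.id,
    output_path=f'encodings/{file_key}/')

dash_manifest = bitmovin_api.encoding.manifests.dash.default.create(
    dash_manifest_default=DashManifestDefault(
        encoding_id=encoding.id,
        manifest_name='stream.mpd',
        version=DashManifestDefaultVersion.V2,
        outputs=[manifest_output]))

hls_manifest = bitmovin_api.encoding.manifests.hls.default.create(
    hls_manifest_default=HlsManifestDefault(
        encoding_id=encoding.id,
        manifest_name='stream.m3u8',
        version=HlsManifestDefaultVersion.V1,
        outputs=[manifest_output]))
```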

With this done, we are now ready to start the encoding. The start call needs a request payload that instructs the encoder to use the Per-Title algorithm and gives it complete freedom to choose the number, bitrate, and resolution of the renditions to generate for that input asset. 
We also let the encoder generate the manifests automatically when the encoding process has completed.

Encoding configuration – start request
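A sketch of the start call; a `PerTitle` object with an `AutoRepresentation` leaves the number, bitrate and resolution of renditions entirely to the algorithm, and the `vod_dash_manifests` / `vod_hls_manifests` fields ask the encoder to generate the manifests once the encoding has finished (field names assume a recent SDK version):

```python
from bitmovin_api_sdk import (AutoRepresentation, H264PerTitleConfiguration,
                              ManifestGenerator, ManifestResource, PerTitle,
                              StartEncodingRequest)

start_request = StartEncodingRequest(
    per_title=PerTitle(
        h264_configuration=H264PerTitleConfiguration(
            auto_representations=AutoRepresentation())),
    manifest_generator=ManifestGenerator.V2,
    vod_dash_manifests=[ManifestResource(manifest_id=dash_manifest.id)],
    vod_hls_manifests=[ManifestResource(manifest_id=hls_manifest.id)])

bitmovin_api.encoding.encodings.start(
    encoding_id=encoding.id,
    start_encoding_request=start_request)
```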

The code is complete; we are now ready to deploy it to our Lambda function. You will need to package it with the Bitmovin Python SDK as a dependency. Check the AWS documentation for the various methods that you can use for this deployment. We also need to set the environment variables as appropriate.

Lambda function deployed

At this point, your Lambda will be triggered every time a new video file is added to your S3 bucket. When encoding starts, Bitmovin will split the file, spin up Spot Instances in your AWS account, and begin the encoding process. When the video is finished, it will be saved in the Output S3 bucket you configured. You’ll be able to monitor the encoding process in the Bitmovin dashboard.

Completed encoding in the Bitmovin Dashboard

Reacting on Completion of the Encoding

In our workflow, we want to gather some information when the encoding is complete to feed to the front-end application. At a minimum, we want to know whether the encoding succeeded, the name of the asset, and the URLs of the manifests. In essence, this performs the same function as an online video platform or content management system would in a more traditional setup.
All of this should be automatic, so we will use Lambda again. To trigger it, the Bitmovin platform will notify a webhook endpoint when the encoding is finished. The Lambda function will then retrieve the encoding information and store it in DynamoDB.

Lambda workflow for retrieving encoding information

The AWS console makes it easy to create an AWS Lambda function triggered with an HTTP call, through an API Gateway endpoint. 

Triggering AWS Lambda Function with an HTTP Call

On the Bitmovin side of things, the simplest way to configure a webhook is through the Dashboard. I will create a single “catch-all” webhook that gets triggered for all finished encodings in my Bitmovin account. If I wanted to do it on a per-encoding basis, I could just add a webhook in the encoding configuration instead, in my `encode` function.

Configuring Webhooks in Bitmovin’s Dashboard

Amazon has documentation for setting up DynamoDB, so I won’t cover that here. Instead, I’ll show you how to save data coming in from the completed encoding job. I chose DynamoDB to store the data because it allows you to quickly store unstructured data like this. Great for prototyping during a hackathon!
This second Lambda function has its own code, with its own handler. This time the event data contains the payload from the Bitmovin notification, which contains the encoding ID.

Lambda handler function
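A sketch of this second handler, assuming an API Gateway proxy integration and that the notification body carries the encoding ID under `encodingId` (check the field names against the actual webhook payload); `save_to_dynamodb` is a small helper shown in the next snippet:

```python
import json

def handler(event, context):
    # API Gateway (proxy integration) passes the webhook body as a JSON string
    notification = json.loads(event['body'])
    encoding_id = notification['encodingId']

    info = summarize_encoding_info(encoding_id)
    save_to_dynamodb(info)

    return {'statusCode': 200}
```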

The `summarize_encoding_info` function uses the Bitmovin SDK to gather the asset name, path, manifest URLs, status, and other useful metadata into a JSON object. I won’t go through the details here, but you will find tips on how to do this in my tutorial on retrieving encoding information.
This function also translates S3 URLs into CloudFront CDN URLs, which the player will use for streaming.
Having extracted that metadata, we save the `info` object to our DynamoDB table (using AWS’ excellent boto3 Python library):

Saving the Metadata to DynamoDB
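For example, a minimal `save_to_dynamodb` helper using boto3 might look like this (the `videos` table name is an assumption):

```python
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('videos')

def save_to_dynamodb(info):
    # info is the JSON-serialisable dict built by summarize_encoding_info
    table.put_item(Item=info)
```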

From now on, the data will flow in automatically, and you can use the AWS DynamoDB interface to look up metadata about your encodings.

Encoding metadata in the AWS DynamoDB

Multiple encodings

You may remember from part one of our blog series that for the 2020 AWS + Bitmovin hackathon project, we wanted to compare Per-Title and static ladders for our content. The eagle-eyed among you will also have noticed from the previous screenshot that my DynamoDB table does indeed contain 2 sets of info data for the asset.
The code I presented in this post is indeed a simplification, which only creates a single Per-Title ladder, to make it more of a real-life use case. If you wanted to match what we did, the differences are actually quite small:

  • The handler for the first Lambda function triggers 2 encodings in parallel. 
  • A single parameter on the `encode` function offers a switch to allow it to handle the small differences between a static and a Per-Title configuration.
  • Each encoding is independent and triggers a completion notification individually. The `summarize_encoding_info` function in the second Lambda determines whether Per-Title was used (which I do through the use of `labels` on the encoding) and updates the corresponding fields in the DynamoDB table.

What’s Next?

In this post, you saw how to configure Bitmovin’s Cloud Connect for Amazon Web Services and call Lambda functions each time a video is uploaded and encoded. Using the encoding complete webhook, you can save metadata about each video into DynamoDB.
Going back to the high-level architecture from part one, there’s just one more piece of the application to cover. In the last part of this three-part series, I’ll show you how to implement the Bitmovin Player and gather data about how users are interacting with your content using Bitmovin Analytics.
Finally, if you need help setting up a scalable video encoding pipeline on AWS, reach out to Bitmovin’s team or read more in the encoding API’s documentation.

Cloud-based Per-Title Encoding Workflows (with AWS) – Part 1: Establishing the Architecture
https://bitmovin.com/blog/cloud-based-per-title-encoding-aws-p1/ | Mon, 08 Feb 2021

If you work with video on the internet, you know how resource-intensive encoding can be. While moving from in-house to cloud-hosted servers can save you a lot of money, that doesn’t change the fact that processing large videos with modern codecs takes significant computing power.

 “Video transcoding is one of the most computationally challenging things you can do right now. As we’re moving toward more advanced codecs, those challenges become even bigger… [Bitmovin is] trying to deliver the best quality per bit so we can reach users on their mobile device or lower quality connections, while saving on their CDN spend.” – Paul MacDougall, Principal Sales Engineer, Bitmovin

Bitmovin’s video-encoding service and its unique parallelized architecture gives your developers access to the best codecs and encoding algorithms in the industry, allowing them to efficiently transcode videos in the cloud without maintaining their own custom software or hardware. This is how you can set up your very own cloud-based encoding workflow using Bitmovin Encoding with AWS.

Why Per-Title Encoding?

One of the biggest advantages of using Bitmovin is per-title encoding. Unlike standard encoding ladders, Per-Title Encoding offers you the best perceivable quality at the lowest possible bitrate. This can lead to fewer encoded files to store and lower bandwidth usage when your video is streamed.
When you compare Per-Title and standard encoding, you’ll notice that the video quality is essentially the same, but the bitrate savings are significant.

Legend: The two ladders compared, with the top rendition streamable for a bandwidth connection limited to 5 Mbps. The Per-Title top rendition has a higher quality (as measured by PSNR and VMAF) at a higher resolution, yet with a 50% reduction of bitrate compared to the highest rendition within the static ladder that can be streamed at that available bandwidth.

Serving video at a lower bitrate means you will stream less data, which in turn means lower hosting costs. As you can see from some of the data obtained during Bitmovin’s AWS Hackathon in 2020, Per-Title Encoding can lead to a large cost saving compared with a standard ladder (up to 69% for some of the assets used in this particular application).

Legend: Running a streaming simulation with the highest rendition possible under the bandwidth restriction of 5 Mbps shows a 49% saving in streaming costs for per-title encoding. And as for storage, the whole per-title ladder comes with a 78% reduction.

Finally, Per-Title Encoding typically means fewer encoded files to store, which also reduces your hosting costs. While results will vary depending on the complexity of your video, Per-Title Encoding is almost always the right choice, as it optimizes the ladder for every asset individually.

Bitmovin Cloud Connect on AWS

With Bitmovin’s new Cloud Connect encoding option, you can now deploy Bitmovin’s software to your own public cloud account, including Amazon Web Services.
Bitmovin’s Cloud Connect option can help you further reduce your costs, allowing you to take advantage of bulk pricing deals by letting you run Bitmovin’s encoding process on your own AWS infrastructure, including with the Per-Title algorithm. Cloud Connect also gives you more control over how your infrastructure is deployed and lets you apply your own security policies while getting complete access to Bitmovin’s robust software and auto-scaling for maximum performance.

Per-Title Encoding in the Real World using Cloud-Based Workflows (with AWS)

In this three-part series, you’ll see how to deploy a real-world application that uses Bitmovin’s Per-Title Encoding in a standard cloud-based AWS workflow and with Cloud Connect. This series is based on a workflow that we built in a 2-day winter 2020 hackathon between Bitmovin and AWS.
This first part will give you an overview of the high-level architecture and AWS resources you need to run Bitmovin Per-Title Encoding. In the second part, we’ll dive into the code so you can see some of the important details your engineers will need to run encoding on your AWS architecture with the Per-Title algorithm, and in the third part, you’ll learn how to use the Bitmovin Player to gather analytics and examine how users are consuming your videos in real-time.

What We’re Building

This demo application allows users to upload a video, and then view that video. Behind the scenes, we’ll use both our classic Bitmovin Per-Title as well as our new Cloud Connect options to process and encode the video using a Per-Title ladder.
Before starting, it’s important to understand Bitmovin’s services. We offer three services that will be used throughout this application:
  • Encoder: Breaks the uploaded video into chunks, transcodes each piece, and stitches them back together when complete.
  • Player: Allows users to view the transcoded video on any device and any browser.
  • Video Analytics: Gives you insight into how users interact with your video, their bitrate, and the amount of data streamed.

Bitmovin’s products in a video workflow

When using Cloud Connect, you still use the Bitmovin platform via its APIs to orchestrate your encoding workflow, but the encoding tasks are performed in your own AWS account. While Cloud Connect runs on your infrastructure, Bitmovin handles most of the hard work of scaling up the number of instances for encoding, so you’ll just need a few Amazon services to glue the pieces together.

Key AWS Services

AWS has several great products that we’ll use together to handle authentication, file uploads, calling the Bitmovin API, and storing transcoded files and URLs. Let’s look at the key pieces required for this application and what their roles are.

Cognito

AWS Cognito handles user authentication so that only authenticated users can upload a video to your portal. While you could build your own authentication, Cognito saves you a lot of time by integrating with your existing SSO solution and Amazon’s other services.

Amplify

AWS Amplify will power your upload page. It integrates with Cognito and AWS’s various data storage options, so you can focus on the core parts of your application logic rather than the glue that moves data into and out of your backend.

AWS Amplify and Cognito for login

Simple Storage Service (S3)

AWS S3 offers fast, affordable file hosting. We’ll use it to store the raw video files uploaded by users and the encoded files processed by Bitmovin.

Lambda

AWS Lambda is Amazon’s serverless hosting option, which allows you to write a few lines of code and instantly make them available to run in the cloud rather than having to build and deploy a robust application from scratch. Our Lambda functions will trigger the Bitmovin encoding cluster and handle the results when the encoding is finished.

EC2

Bitmovin’s encoder runs on Amazon EC2 and takes advantage of the pricing discounts for Spot Instances to help you keep your costs low. While you won’t need to manually spin up any EC2 instances to process videos (Bitmovin handles this for you), it may be helpful to know that this is what’s running under the hood.

Bitmovin Cloud Connect uses EC2 to process videos

API Gateway

Amazon API Gateway will provide a webhook URL that Bitmovin will notify when encoding is complete. API Gateway can then pass data to other Amazon services, so we’ll use it to post encoding information when the processing of a new video is done, to trigger our post-processing Lambda.

DynamoDB

Amazon’s proprietary database, DynamoDB, will store the encoded videos’ URLs. The Bitmovin Player will use these URLs to stream video to the user at the appropriate bitrate and resolution.

CloudFront

Finally, we’ll use AWS CloudFront to cache the encoded video files. A CDN is essential for streaming video because it ensures that viewers around the world will be able to watch your videos with minimal latency.

Building a Cloud-based Encoding Architecture with AWS

Now that we’ve reviewed how each Amazon service will be used, let’s walk through the flow of data through your application.

Bitmovin Products within AWS Service Workflow

There’s a lot going on in the diagram above, so let’s break it down:

  1. A user logs in using Cognito and is directed to a simple upload page hosted on Amplify.
  2. The user can upload a file, which is then stored in an S3 bucket.
  3. An S3 event notification triggers a first Lambda function.
  4. The Lambda function configures and triggers a new Bitmovin encoding using the API, taking as a parameter the new file’s location on S3.
  5. Bitmovin spins up Spot Instances as needed to encode the file, and saves the outputs of the encoding to another S3 bucket.
  6. On completion (or failure) of the encoding, Bitmovin calls a webhook hosted on Amazon API Gateway.
  7. The API Gateway endpoint triggers a second Lambda function, which retrieves the manifest URLs from the finished encoding job.
  8. The Lambda saves these URLs to DynamoDB along with some video metadata.
  9. The Demo Page with the embedded Bitmovin Player retrieves the data from DynamoDB and triggers playback.
  10. The video files are served from S3 through CloudFront to the Bitmovin Player.
  11. The Player sends data to Bitmovin Analytics.

The application built for the Bitmovin-AWS hackathon actually triggered 2 encodings, one for a static ladder and the other one with Per-Title. This allowed us to compare and contrast the two types of encodings. It goes without saying that you are unlikely to need to do this in a real-life application, but the principles remain the same. We will highlight in the remainder of this 3-part blog when a step is taken specifically to enable this demo use case.

Other Options

While the architecture discussed here takes advantage of many of AWS’s latest services and features, you might want to integrate the Bitmovin Encoder into an existing application that’s set up very differently. Fortunately, the encoder can be called from any codebase and Bitmovin offers SDKs and getting started guides for using the API.
For example, you don’t need to use Lambdas to call the Bitmovin API or API gateway to listen for the completed encoding jobs. You can easily call the encoder’s API from an application deployed to EC2 or ECS. Similarly, there’s no reason you have to use DynamoDB (or a NoSQL data store at all) if you prefer MySQL or Postgres.

What’s Next?

Now that you have a high-level view of how you can deploy Bitmovin Per-Title Encoding with Cloud Connect to your AWS account, you’re probably ready to see some real API calls so you can replicate this yourself. In the next part of this series, I’ll get more tactical, showing some of the code you can use to run this application. Finally, in the third part of this series, you can learn how to use Bitmovin Analytics to understand how your users interact with your content delivered through the player.
In the meantime, you can check out our API documentation or our step-by-step tutorial for deploying Cloud Connect to learn more. When you’re ready to implement Bitmovin, contact us so we can help you get started.
