Marco Sussitz – Bitmovin (https://bitmovin.com)

Creating a Live Streaming Setup with Android Devices and Bitmovin
https://bitmovin.com/blog/creating-live-stream-android-bitmovin/ – Tue, 20 Dec 2022

It is that time of the quarter again at Bitmovin: Hackathon time. Our Hackathon consists of two days of hacking solutions together before presenting the results to the wider business. Every programmer loves it, even more so when their team wins. During this Hackathon, we decided to step out of our backend comfort zone and tinker a bit with UI.

To be more specific, we tried to write an Android app that would serve as an RTMP input source for a live stream.

Setting up an RTMP input can be lengthy, especially when there is no computer with FFmpeg at hand, and synthetic input can be boring at times. So why not use a live feed from your mobile device instead? My go-to tool for creating an RTMP stream is FFmpeg, so our first idea was to use it on Android. We quickly realized that this was a futile endeavour, especially within two days. So we did what any software developer would do and looked for a library written by somebody smarter than us. We tried a few options, like the Larix Broadcaster SDK and some small GitHub repositories such as LiveVideoBroadcaster, but weren't able to get any of them working in the limited time we had. Finally, a bit more searching revealed the grail we had been looking for: the com.pedro.rtplibrary project used in the snippets below, and we managed to get something running.

How to use it

The library provides a very handy OpenGlView that lets you show the camera feed in your activity. If you'd like to follow along with the steps we took, start by adding it to your layout XML like this:

<com.pedro.rtplibrary.view.OpenGlView
    android:layout_width="0dp"
    android:layout_height="0dp"
    app:layout_constraintBottom_toBottomOf="parent"
    app:layout_constraintEnd_toEndOf="parent"
    app:layout_constraintStart_toStartOf="parent"
    app:layout_constraintTop_toTopOf="parent"
    android:id="@+id/surfaceView"
    app:keepAspectRatio="true"
    app:aspectRatioMode="fill"
    app:AAEnabled="false"
    app:numFilters="2"
    app:isFlipHorizontal="false"
    app:isFlipVertical="false"
    />

The corresponding code will need an OpenGlView object as well as an RtmpCamera2 object. Let's go ahead and declare those:

private RtmpCamera2 rtmpCamera;
private OpenGlView openGlView;

Those fields also need values assigned, and we used the onCreate method for that:

this.openGlView = findViewById(R.id.surfaceView);
this.rtmpCamera = new RtmpCamera2(openGlView, this);
this.openGlView.getHolder().addCallback(this);

Starting an RTMP stream

With all of the necessary configuration complete, we just have to start the RTMP stream. To do that, the only thing we need to add is the RTMP ingest URL in the code below (how to obtain it will be shown later).

@Override
public void startStreaming() {
    runOnUiThread(() -> {
        if (!rtmpCamera.isStreaming()) {
            if (rtmpCamera.isRecording()
                    || (rtmpCamera.prepareAudio() && rtmpCamera.prepareVideo())) {
                rtmpCamera.startStream("URL to your live stream");
                setStateStreamIsRunning();
            } else {
                this.toastError("Error preparing stream, this device can't do it");
            }
        } else {
            rtmpCamera.stopStream();
        }
    });
}

We used a separate thread and a callback for that, which is why there is a runOnUiThread; if you start the RTMP stream directly from your activity, you will not need it.

The RTMP URL is set here: rtmpCamera.startStream("URL to your live stream");

Setting up the live stream encoder

For our encoder, we used the Simple Encoding API (https://bitmovin.com/docs/encoding/articles/simple-encoding-api-live), so we needed minimal configuration to get the live stream up and running. We also used the CDN output (https://bitmovin.com/docs/encoding/articles/bitmovin-cdn-output) because it works out of the box.

To get this going, the first thing to set up is the input:

private SimpleEncodingLiveJobResponse setUpLiveStream() {
   SimpleEncodingLiveJobInput input = new SimpleEncodingLiveJobInput();
   input.setInputType(SimpleEncodingLiveJobInputType.RTMP);
   SimpleEncodingLiveJobRequest job = new SimpleEncodingLiveJobRequest();
   SimpleEncodingLiveJobCdnOutput outputUrl = givenCdnOutputWithFullHDResolution();

Next is the output. In the spirit of the hackathon, we used a CDN output because of its minimal configuration, which you can set up like this:

private SimpleEncodingLiveJobCdnOutput givenCdnOutputWithFullHDResolution() {
   SimpleEncodingLiveJobCdnOutput output = new SimpleEncodingLiveJobCdnOutput();
   output.setMaxResolution(SimpleEncodingLiveMaxResolution.FULL_HD);
   return output;
}

Now only some minor configurations are left.

   job.setInput(input);
   job.addOutputsItem(outputUrl);
   job.setCloudRegion(SimpleEncodingLiveCloudRegion.EUROPE);
   job.setName("Android app live stream");
   return bitmovinApi.encoding.simple.jobs.live.create(job);
}

Finishing touches

A live stream takes a bit of time to be ready. In the interest of time, we swiftly wrote a very ugly busy-waiting loop that polls until the live stream is done with its setup. Improving this is left as an exercise for the reader.

public boolean setupLiveStreamAndWaitForRunningState(Callback cb) {
   this.stopped = false;
   this.encodingId = null;
   SimpleEncodingLiveJobResponse jobResponse = setUpLiveStream();
   try {
       while (!readyForIngestOrFailed(jobResponse) && !stopped) {

           Thread.sleep(300);
           jobResponse = bitmovinApi.encoding.simple.jobs.live.get(jobResponse.getId());
           if (jobResponse.getEncodingId() != null && this.encodingId == null) {
               this.encodingId = jobResponse.getEncodingId();
           }
           cb.reportStatus(jobResponse.getStatus().toString());
       }
   } catch (Exception e) {
       cb.toastError(e.getMessage());
   }
   if (this.stopped || this.encodingId == null || this.encodingId.isEmpty()) {
       return false;
   }
   cb.startStreaming();
   this.rtmpUrl = String.format("rtmp://%s/live/%s", jobResponse.getEncoderIp(), jobResponse.getStreamKey());
   return true;
}
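The busy-waiting loop above polls the API every 300 ms regardless of how long setup takes. One way to improve it (part of the exercise we left for the reader) is to add a timeout and exponential backoff. Below is a minimal sketch in Python rather than Java, where fetch_status stands in for the jobs.live.get call and the status names are illustrative assumptions:

```python
import time

def wait_for_ingest_ready(fetch_status, timeout_s=300.0, initial_delay_s=0.3, max_delay_s=5.0):
    """Poll fetch_status() until it reports a terminal state, backing off exponentially.

    fetch_status is any callable returning a status string such as
    "CREATED", "RUNNING", or "FAILED" (names here are illustrative).
    Returns the terminal status, or raises TimeoutError.
    """
    deadline = time.monotonic() + timeout_s
    delay = initial_delay_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in ("RUNNING", "FAILED"):
            return status
        time.sleep(delay)
        delay = min(delay * 2, max_delay_s)  # back off instead of hammering the API
    raise TimeoutError("live stream was not ready in time")
```

The same structure carries over to the Java loop: grow the sleep interval on each iteration and bail out after a deadline instead of spinning forever.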

The RTMP ingest URL that needs to be inserted into the startStream call is built from the job response:

this.rtmpUrl = String.format("rtmp://%s/live/%s", jobResponse.getEncoderIp(), jobResponse.getStreamKey());

We had to run this in a separate thread because you cannot make HTTP requests on the UI thread in Android, and it would not be a good user experience anyway.

Once the hard part was done, we spent the second day of the hackathon refining the app and making the UI better and even got some much-appreciated help from one of the Bitmovin UX designers.

Wrapping up

As the hackathon drew to a close, the competition was fierce, with great projects submitted by other teams. However, in the end, all of the efforts paid off, as our team won first place! 

Here are a couple of images of how the live stream worked on an Android phone:

[Image: Visual of stream interface pre-live]

[Image: Visual of active live stream]

Additionally, if this project interested you and you're currently looking to test your streaming application across Android, iOS, or any other platform/device, check out our 30-day free trial, where you can test the Bitmovin Live and VOD Encoder, Player, or any of our other SaaS solutions completely free with no commitment.

Introducing Bitmovin’s Simple Encoding API
https://bitmovin.com/blog/simple-encoding-api/ – Mon, 17 Jan 2022

If one looks at video and audio codecs like H.264, H.265, VP9, and AV1, and all of the things associated with them, it can be very easy to get lost. There are millions of possible configurations, and figuring out the right one can be a difficult task.
The Bitmovin API makes it possible to tinker granularly with your video settings and cover a vast array of use cases. Covering all of that granularity yourself can become daunting, as setting up even a simple encoding can take a lot of time. That is why we've set out to make your first encoding step easier.
Bitmovin's new Simple Encoding API uses our Per-Title technology to deliver the best video experience with little to no configuration necessary. The Simple Encoding API sets up a complete (and automated) encoding job for you by making use of our base API. The encoding uses the H.264 codec for video inputs and AAC stereo for audio.
*UPDATE – Feb 8, 2022* With the release of Encoder version 2.109.0, the Bitmovin Simple Encoding API now supports AV1 encoding in addition to the default H.264! Click below to try it out in Postman and check the documentation tab there for more details.

Run in Postman

How to use the Simple Encoding API

The Simple Encoding API will help you create an encoding that fits a vast array of use cases with only a single endpoint. You can use any of Bitmovin's SDKs or call the endpoints directly with tools like Postman. Additional information on how to add the API key to the request can be found here: https://bitmovin.com/docs/encoding/tutorials/get-started-with-the-bitmovin-api.
With a single endpoint in mind, the only things that you need to set up are the inputs and the outputs. 
The first thing you will need is an input. The most common input types are supported: S3, GCS, Azure Blob Storage, Akamai NetStorage, HTTP(S), and (S)FTP.
You can choose between four different input types: audio, video, subtitles, and closed captions. If you don't specifically state an input type for a file, it is assumed to contain a video track and an optional audio track.
Multiple audio and subtitle/closed-caption inputs can be used, but only one video input is possible. Specifying a language is mandatory for subtitles and closed captions; as a best practice, we also recommend specifying the language for audio inputs.
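Putting the input rules above together, here is a hedged Python sketch that assembles an inputs array with one video, optional extra audio tracks, and subtitles. The type and language field names follow the Simple Encoding API documentation, and all URLs are made up:

```python
def build_inputs(video_url, audio=None, subtitles=None):
    """Assemble the 'inputs' array: one video plus optional typed audio/subtitle entries."""
    # An untyped entry is assumed to carry a video track and an optional audio track.
    inputs = [{"url": video_url}]
    for lang, url in (audio or {}).items():
        # Language is optional for audio but recommended as a best practice.
        inputs.append({"url": url, "type": "AUDIO", "language": lang})
    for lang, url in (subtitles or {}).items():
        # Language is mandatory for subtitles and closed captions.
        inputs.append({"url": url, "type": "SUBTITLES", "language": lang})
    return inputs

inputs = build_inputs(
    "https://example.com/movie.mov",
    audio={"en": "https://example.com/audio-en.aac"},
    subtitles={"de": "https://example.com/subs-de.srt"},
)
```

Only one video entry is allowed, so the helper deliberately takes a single video_url while accepting any number of audio and subtitle tracks.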

Simple Encoding Input and Output

Let’s define an input. 

"inputs":[
   {
    "url": "https://ftp.halifax.rwth-aachen.de/blender/demo/movies/ToS/ToS-4k-1920.mov"
   }
]

Next, we are going to define an output. The Simple Encoding API supports S3, GCS, Azure Blob Storage, and Akamai NetStorage as outputs; further details can be found in our documentation. One thing to consider is whether the created files need to be private or public: if the files are to be public, the output storage has to support that.
Let's set up a private S3 output.

"outputs": [
        {
            "url": "s3://your/output/path",
            "credentials": {
                "accessKey": "accessKey",
                "secretKey": "secretKey"
            }
        }
    ]

Now you only need to choose a name for your encoding job and put the inputs and the outputs into one JSON. It will look somewhat like this:

{
    "name": "simple_encoding_name",
    "inputs": [
        {
            "url": "https://ftp.halifax.rwth-aachen.de/blender/demo/movies/ToS/ToS-4k-1920.mov"
        }
    ],
    "outputs": [
        {
            "url": "s3://your/output/path",
            "credentials": {
                "accessKey": "accessKey",
                "secretKey": "secretKey"
            }
        }
    ]
}

With the JSON completed, the last thing to do is send it to the API via this endpoint:

POST https://api.bitmovin.com/v1/encoding/simple/jobs/vod
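In Python, for example, the POST above can be issued with the standard library alone. This is a sketch: the X-Api-Key header is the authentication mechanism described in the troubleshooting section, and the helper only builds the request, so nothing is actually sent here:

```python
import json
import urllib.request

API_URL = "https://api.bitmovin.com/v1/encoding/simple/jobs/vod"

def build_job_request(api_key, name, inputs, outputs):
    """Build (but do not send) the POST request for a simple encoding job."""
    body = json.dumps({"name": name, "inputs": inputs, "outputs": outputs}).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

# Actually sending it is one call away (requires a valid API key):
# response = urllib.request.urlopen(build_job_request(key, "simple_encoding_name", inputs, outputs))
```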

The response will look like this:

{
    "id": "854a2a86-9028-4e33-863b-d602a6bac24b",
    "status": "CREATED",
    "encodingId": null,
    "inputs": [
        {
            "url": "https://ftp.halifax.rwth-aachen.de/blender/demo/movies/ToS/ToS-4k-1920.mov"
        }
    ],
    "outputs": [
        {
            "url": "s3://your/output/path"
        }
    ],
    "createdAt": "2022-01-17T14:26:54Z",
    "modifiedAt": "2022-01-17T14:26:54Z",
    "name": "simple_encoding_name"
}

Now that the start call is fully set up, the last thing to do is wait for the encoding to finish.
This endpoint shows the current status of your encoding:

GET /encoding/simple/jobs/vod/{simple_encoding_job_id}
{
    "id": "854a2a86-9028-4e33-863b-d602a6bac24b",
    "status": "FINISHED",
    "encodingId": "5b71dcf1-5f92-4534-b530-47706003e7a4",
    "inputs": [
        {
            "url": "https://ftp.halifax.rwth-aachen.de/blender/demo/movies/ToS/ToS-4k-1920.mov"
        }
    ],
    "outputs": [
        {
            "url": "s3://your/output/path",
            "makePublic": true
        }
    ],
    "name": "simple_encoding_name"
}

Once the status is set to FINISHED, the encoding is done.
The Simple Encoding API can also be accessed through our SDKs.
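To make the status handling concrete, a small helper can interpret the job responses shown above. This is a sketch: CREATED and FINISHED appear in the example responses, while the ERROR state is an assumption about how a failed job would be reported:

```python
def job_outcome(response):
    """Interpret a simple encoding job response dict.

    Returns the encodingId when the job is FINISHED, None while it is still
    in progress, and raises on ERROR (a status name assumed here).
    """
    status = response["status"]
    if status == "FINISHED":
        return response["encodingId"]
    if status == "ERROR":
        raise RuntimeError("encoding job %s failed" % response["id"])
    return None  # e.g. CREATED: keep polling
```

A polling loop would call the GET endpoint, feed the parsed JSON into job_outcome, and stop as soon as it returns a non-None value or raises.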

Simple Encoding API output

Once the simple encoding has finished, it will write the output to your specified location. The encoding creates segmented video files as well as HLS and DASH manifests. It will also create thumbnails, sprites, subtitles, closed captions, and audio renditions if your input contains them.
We know that getting started with video encoding can be hard, so we aimed to keep this API as simple as possible while still providing everything you might need for your workflow. We will continue to improve our Simple Encoding API, so if you have any suggestions for improvements or how we can make it more suitable for your use case, please let us know!

How to Troubleshoot the Simple Encoding API & FAQs

Error Messages

“No API key found in request”
API requests must be authenticated using your API key. This is found in the dashboard under your name (top right / Account settings). It is placed as the X-Api-Key header of your POST request. We suggest using a tool such as Postman to create the POST request during testing.
“Could not determine the scheme”
Output URLs follow the standard URL structure provided by the storage providers. These are written above as “your/output/path”, but they must also include the scheme identifier: for Google this is gcs://<bucket-name>/folder, for Amazon this is s3://<bucket-name>/folder, for FTP it is ftp://__, and so on. Failure to include the <something>:// prefix will result in this error message.
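This scheme check is easy to replicate client-side before submitting a job. A hypothetical pre-flight validator (the helper name and logic are ours, not part of the API):

```python
def has_scheme(url):
    """Return True when the output URL starts with a '<scheme>://' identifier."""
    scheme, sep, rest = url.partition("://")
    # All three parts must be non-empty: a scheme, the separator, and a path.
    return bool(sep) and bool(scheme) and bool(rest)
```

Running this over each output URL before the POST catches the "Could not determine the scheme" error locally instead of waiting for the API to reject the job.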

Questions

I want to use the Simple Encoding API, but I don’t have an API key:
The easiest way is to sign up for a trial via the Bitmovin homepage, bitmovin.com / Try for free (top right).
I have submitted the POST request and the response says “SUCCESS”, but nothing is appearing in the dashboard:
There may be something wrong with the credentials or file locations.

  1. Copy the “id”  found in the first line of the response
  2. Query it with GET /encoding/simple/jobs/vod/{simple_encoding_job_id}  (where simple_encoding_job_id is the id you just copied)
  3. This will provide more thorough details of the simple-encoding-job.

What’s the difference between the simple_encoding_job_id (referenced above) and the Encoding ID (in the dashboard)?
Once a simple encoding job is successfully submitted, an encodingId will be assigned. You can find the encodingId for a particular simple_encoding_job_id via GET /encoding/simple/jobs/vod/{simple_encoding_job_id} and looking for encodingId in the response.
How do I add different input types to the Simple Encoding request?
You can view all input types and how to use them in our documentation here: https://bitmovin.com/docs/encoding/articles/simple-encoding-api
Try out our new Simple Encoding API for yourself by signing up for a trial today.


Bitmovin’s Intern Series: Analyzing Docker images for optimal sizes
https://bitmovin.com/blog/docker-images-layers/ – Mon, 21 Sep 2020


On a mission to slim down Docker images

As an engineer, one thing you learn quickly when working with Docker images is that they can grow very large. Docker has an established set of best practices you can apply to reduce the size of your images, and there are additional steps you can take to shrink them further. Lightweight Docker images have many advantages over larger ones, ranging from bandwidth and storage savings to performance and security improvements. Unfortunately, optimizing size without compromising functionality is much easier said than done. Today, we'll review the steps necessary to achieve this goal.

How to reduce Docker image sizes

The first step is to follow Docker's best practices; most importantly, only include in your image what you need to run your application. Multi-stage builds help keep dependencies that are only needed during the build stage out of the final image. In many cases, switching to a smaller base image, like Alpine, will reduce the overall image size dramatically. However, depending on your application's needs, it might be necessary to install additional dependencies so everything works as expected. The more complex your Dockerfile is and the more dependencies you add, the harder it gets to distinguish between what your image contains and what's essential for your application to work properly.
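To illustrate the multi-stage approach, here is a generic sketch (not one of our actual images): the build toolchain lives only in the first stage, and the final image is just Alpine plus the compiled binary.

```dockerfile
# Stage 1: build with the full toolchain (discarded afterwards)
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: minimal runtime image, only the artifact is copied over
FROM alpine:3.19
COPY --from=build /app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```

Everything installed in the first stage, compilers, caches, and source code, never reaches the final image, which is often the single biggest size win.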
Therefore, the most logical step is to review the contents of your images to find out which dependencies take up the most space. A nice tool that helps simplify this task is Dive, a UI-based analysis tool that lists all of the files in a Docker container and the layer they belong to. Given that manually scrolling through thousands of files is still too much labour, we built a tool that extracts the file paths, sizes, and layer for any Docker image. However, before we jump into the details of this tool, we'll take a look at Docker image internals first.

How do Docker images work?

Docker uses a union mount filesystem to store images efficiently. A union mount allows you to merge multiple filesystems into one. In Docker, a layer translates roughly into a single filesystem. There are various union mount filesystems available, for example 'aufs' and 'overlay', each with their own advantages and disadvantages.
The diagram below indicates a typical layout of an image. The various layers are mounted on top of each other and are read-only. The topmost thin read/write layer collects any changes made to the running image.

[Image: Typical layout of a Docker image file]

But where are these layers located on your machine? The docker info command answers this question. Among many other things, it shows which storage driver your system uses and where the files are located.
[Image: Docker storage info output 1]

[Image: Docker storage info output 2]

Before the release of Docker v1.10, finding the contents of layers on disk was easy because the directory names corresponded directly to the ones that the docker history command returned. For our use case, it would have been enough to parse the output of the history command and run an ls for each layer. Unfortunately, we are no longer on v1.9.

What changed after Docker v1.10

The original and somewhat simple method had its own set of problems, the biggest one being that it was impossible to detect if an image's contents had been tampered with. Therefore, with v1.10, a digest derived from the content was added to identify layers. This digest changes if and when the layer content is modified, making it possible to detect any changes.
Another update in v1.10 is that layers can now be shared between images built on different machines, further increasing storage efficiency. A consequence of these updates, however, is that the directory names are now disconnected from the image IDs displayed by the docker history command, making our analysis (and a potential implementation) much harder.
For additional information about Docker images please view the following blog post: Explaining Docker Image IDs

Modern problems require modern solutions

Docker maintains manifest files for each image so that it knows which directories need to be mounted in which order to produce the desired image. With these handy files, we can connect the content of a directory to a layer. However, directly accessing the root directory would be terribly inconvenient, because we would need special handling for each of the various union mount systems. Fortunately, a Docker command exists that does all of the heavy lifting for you:

> docker save <image_id>

This command will generate a tar file that contains all of the layers of an image and two manifest files that carry all of the information necessary to connect the various directories to their respective layers.

Tar Files

The first manifest file, manifest.json, contains the location of the various layers and the name of the other manifest file (<Image_id>.json).

[Image: manifest.json code snippet]

The second file carries additional information like the architecture, the entrypoint, the layers included, and whether each one is an empty_layer or not.

[Image: <Image_id>.json layers code snippet]

If we ignore empty layers in the <Image_id>.json file, both files list the same number of layers. We can then join them by iterating over the <Image_id>.json file and selecting the created_by field, while concurrently iterating over manifest.json to pick the location of each layer. This method combines the content of a layer with the command that created it.
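The joining step can be sketched in a few lines of Python. This is a simplified version of what our script does, assuming both files have already been parsed out of the docker save tarball; the Layers, history, created_by, and empty_layer fields are part of the real manifest format:

```python
def pair_layers(manifest_entry, image_config):
    """Pair each layer tar path from manifest.json with the command that created it.

    manifest_entry: one element of the parsed manifest.json list (has a "Layers" list).
    image_config:   the parsed <Image_id>.json (has a "history" list).
    """
    # Drop history entries that did not produce a filesystem layer.
    creating_cmds = [
        h.get("created_by", "")
        for h in image_config["history"]
        if not h.get("empty_layer", False)
    ]
    # Non-empty history entries and layer archives appear in the same order.
    return list(zip(creating_cmds, manifest_entry["Layers"]))
```

From each pair, listing the files inside the layer tar and summing their sizes gives exactly the per-layer breakdown our CSV contains.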
You can find the Python script we created that performs all of these steps and generates a CSV with all of the collected data here.

Summary

We explained why it's important to reduce the image size of your applications and showed which best practices exist. The CSV file created by our tool lets you filter and sort the contents of your images easily, and will (hopefully) help you find the excessive dependencies in your workflow that can safely be removed.
