The Importance of Observability in Live Video Streaming
https://bitmovin.com/blog/live-streaming-observability/
Sun, 04 Aug 2024 23:49:40 +0000
  • Why Monitoring is Crucial
  • The Role of Alerts
  • The Live Heartbeat
  • Example Use Cases for Monitoring and Alerts
    In today’s digital age, live video streaming has become an essential medium for communication, entertainment, and information dissemination. Whether it’s broadcasting live sports, conducting virtual conferences, or streaming a gaming session, the demand for seamless, high-quality live video has never been higher. However, ensuring a smooth streaming experience is no small feat. This is where the importance of observability comes into play. In this blog post, we’ll look in more detail at two key pillars of system observability, monitoring and alerts, and introduce our newest feature, the Live Heartbeat.


    Why Monitoring is Crucial

    Monitoring in live video streaming involves continuously checking various parameters to ensure that everything is functioning correctly. These can include video quality, stream latency, buffer health, and server performance. Effective monitoring helps identify issues before they impact the viewer’s experience.

    1. Ensuring error-free delivery: Whatever the screen size, device, or location, today’s viewers expect to see video and hear audio in the highest quality possible. Monitoring helps maintain video quality by detecting issues such as bitrate fluctuations, frame drops, and resolution problems. By keeping an eye on these metrics, streamers can take corrective action to ensure a consistently high-quality viewing experience.
    2. Staying “On Air”: Whether it’s a live event or a live linear 24/7 service, keeping the output available and the audience experience uninterrupted has been a vital part of video delivery since the very beginning. Broadcasters go to great lengths to build resilience into their systems, with backup systems and disaster recovery processes to maintain business continuity. All of those backups are only effective if monitoring is in place, so that any issue with the current delivery path is identified and corrective action can be taken, automatically or by a human operator, as soon as possible.
    3. Buffer management: Buffering is one of the most common issues in live streaming. Effective monitoring can help in managing buffer health, ensuring that the stream is pre-loaded sufficiently to avoid interruptions. By tracking buffer levels, streamers can adjust the streaming settings or improve the content delivery network (CDN) performance.
    4. Technical service performance: The technical performance of any streaming service is critical for delivering live video. Typically a live signal is processed by a host of interconnected products, which together form the service. Monitoring each component to ensure proper behaviour within a business’s tolerance for error is crucial for effective root cause analysis, and for holding suppliers to account with data if they breach their SLAs. Selecting the correct quality and measurement tools for each component is vital.
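    As a rough sketch of the kind of automated checks described above, the following compares a handful of stream metrics against thresholds. The metric names and threshold values are hypothetical, not taken from any particular product:

```python
from dataclasses import dataclass

@dataclass
class StreamMetrics:
    bitrate_kbps: float      # current output bitrate
    dropped_frames: int      # frames dropped in the last interval
    buffer_seconds: float    # client-side buffer health

def check_stream(m: StreamMetrics) -> list[str]:
    """Return a list of issues found; an empty list means healthy."""
    issues = []
    if m.bitrate_kbps < 2000:          # hypothetical floor for HD delivery
        issues.append("bitrate below threshold")
    if m.dropped_frames > 0:
        issues.append("frame drops detected")
    if m.buffer_seconds < 2.0:         # less than roughly one segment pre-loaded
        issues.append("buffer running low")
    return issues
```

    In a real deployment these checks would run on every reporting interval and feed the alerting layer discussed below.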

    The Role of Alerts

    While monitoring is essential, it’s impractical for human operators to watch these metrics 24/7. For large service providers monitoring hundreds or thousands of linear channels, monitoring is often restricted to one or two sections of the delivery chain displayed on large video walls or multiviewers, and sometimes only displayed by exception. Even when a single event is being monitored, there can be so many components involved that relying on an “eyes on glass” approach might not be practical.

    This is where automated alerts come in. Alerts are notifications triggered by specific events or thresholds, enabling rapid response to potential issues.

    1. Proactive issue resolution: Alerts enable proactive issue resolution by notifying operators of potential problems before they escalate. For example, if the stream bitrate drops below a certain threshold, an alert can notify the technical team to investigate and fix the issue before it affects the viewers.
    2. Minimising downtime: Automated alerts can significantly reduce downtime by ensuring that issues are addressed promptly. For instance, if a product or entire service goes down or experiences high load, alerts can notify the support team to take immediate action, ensuring minimal disruption to the live stream and reducing the mean time to repair.
    3. Improving viewer experience: By addressing issues quickly through alerts, streamers can maintain a high-quality viewing experience. This leads to higher viewer satisfaction and engagement, which is crucial for retaining an audience and building a loyal following.
    4. Resource optimization: Alerts can also help in optimising resources by providing insights into usage patterns and potential bottlenecks. For example, if alerts indicate that a particular server is consistently under high load, it may be time to scale up the infrastructure or redistribute the load more efficiently.

    The Live Heartbeat


    Over the past few months, the engineering team at Bitmovin has been looking at how to improve the observability our Live Encoder product offers by enhancing our alert notifications. We wanted to make key improvements to the platform in four areas.

    1. Frequency: Customers often need to be aware of issues in a service as soon as they arise, and typically issues arise during a change of state. For Live Encoding, this can happen at packet level on the input, which then affects the segments written to the output. By offering lower intervals between alerts, we aim to let customers get updates at the frequency at which segments are written.
    2. Scalability: As our customer base increases in both size and account usage, the number of concurrent live encodings reporting alerts and notifications also increases. Because we offer a SaaS platform, where the infrastructure and platform are managed by Bitmovin, we initially aggregated our alerts as well. For some alerts this will remain true, but the Live Heartbeat comes directly from the Live Encoder, improving the confidence users can have in the service and removing any bottlenecks to scaling.
    3. Reliability: As mentioned earlier, it only takes one false positive to undermine trust in any mission-critical system, and for alerts that is particularly true. By making the Live Encoder responsible for the Live Heartbeat, it becomes the single source of truth for the health of the product and of the section of the live transmission path that Bitmovin provides.
    4. Flexibility: There is a raft of data points we can report on in a notification from the Live Encoder, so making sure the payload structure is flexible and easy to extend is also essential. If a customer needs to know something about a function the software is performing and we can report it, adding it to the payload should be swift.

    This is an example payload of the first version of the Live Heartbeat, reporting the status of input video and audio streams.
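    Purely as an illustration of the shape such a notification might take, a heartbeat could look like the following. All field names and values here are hypothetical, not Bitmovin’s actual schema:

```python
import json

# Hypothetical heartbeat payload; the field names and values are
# illustrative only, not Bitmovin's actual schema.
heartbeat = {
    "encodingId": "abc123",
    "timestamp": "2024-08-04T23:49:40Z",
    "status": "RUNNING",
    "streams": [
        {"type": "VIDEO", "state": "CONNECTED", "bitrate": 5000000},
        {"type": "AUDIO", "state": "CONNECTED", "bitrate": 128000},
    ],
}

# A subscriber would receive this serialised as JSON and decode it.
decoded = json.loads(json.dumps(heartbeat))
```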


    Who should operate the observability?

    By now, hopefully, the benefits of having a good observability system in place are clear. Monitoring component health, response times, and error rates helps maintain optimal system performance. There is a monetary benefit as well, of course: by identifying and addressing system issues promptly, companies can prevent potential downtime and ensure uninterrupted streaming.

    Just before looking at how to implement it, it’s also important to ask: who will this observability system be for? Typically, in broadcast stations producing a few core services, the multiple products responsible for maintaining the station output would be monitored by a Master Control Room (MCR) or Transmission control room (TX), supported by a dedicated team of engineers. Service providers or telcos might have an enormous number of services to monitor while only being responsible for a certain section of the transmission path; such large organisations will already have dedicated staff monitoring multiple services in large Network Operation Centers (NOCs). These large rooms resemble air traffic control centres, with video walls surrounding the staff, showing feeds at different points in the signal paths, along with diagnostic information.


    Some companies staff these control rooms themselves and simply need the tools in place to perform the job; others might be looking for someone to provide this for them. Sometimes customers assume that Bitmovin provides this, but we’re a product company, not a managed service provider. We do, however, have partners that offer this, and we are always willing to introduce customers to them.

    Implementing Effective Monitoring and Alerts

    When implementing monitoring and alerting solutions, it’s sometimes useful to start on paper and, if you have users, gather their requirements and define their user stories. In most cases, if something should be monitored and trigger an alert, a solution can be engineered to do it. Consider which links in the chain are critical, which can offer tools to aid fault finding, and where alerting can give an early warning to aid preventative maintenance.

    Once you have an idea about what needs to be monitored you can consider more of the details such as:

    1. Define key metrics: Identify and define key metrics that are critical for your streaming service. This can include video quality indicators, audio quality indicators, metadata integrity, latency, and server performance metrics.
    2. Set thresholds: Establish appropriate thresholds for these metrics. Thresholds should be set so that they trigger alerts for potential issues without causing unnecessary alarms for minor fluctuations. Typically, every company has a level of fault tolerance it is willing to accept; the lower the tolerance, the higher the cost to achieve that Service Level Agreement (SLA).
    3. Use the right tools: Utilise reliable monitoring and alerting tools that can integrate with your streaming infrastructure. There are various tools available that offer real-time monitoring, analytics, and alerting capabilities tailored for live video streaming.
    4. Regularly review and adjust: Regularly review the performance data and adjust thresholds and monitoring strategies as needed. Continuous improvement is key to maintaining an effective monitoring and alert system.
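    Avoiding alarms on minor fluctuations, as point 2 above recommends, is often handled by requiring a breach to persist for several consecutive samples before alerting. A minimal sketch, where the window size and threshold are arbitrary examples:

```python
from collections import deque

class WindowedThreshold:
    """Only report a breach when a metric stays below its threshold for a
    sustained number of consecutive samples, filtering out brief dips."""
    def __init__(self, threshold: float, window: int):
        self.threshold = threshold
        self.samples = deque(maxlen=window)  # keeps only the last `window` values

    def breached(self, value: float) -> bool:
        self.samples.append(value)
        return (len(self.samples) == self.samples.maxlen
                and all(v < self.threshold for v in self.samples))
```

    Tuning the window length is exactly the fault-tolerance trade-off described above: a longer window means fewer false alarms but a slower first alert.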

    Example Use Cases for Monitoring and Alerts

    To better understand the significance of monitoring and alerts in live video streaming, let’s explore some example use cases:

    1. Live sports broadcasting:
    • Scenario: During a live sports event, maintaining high availability and error-free delivery is crucial for an engaging viewer experience.
    • Monitoring: Continuously track the health of main and backup transmission paths, typically at demarcation points from key equipment suppliers that are the responsibility of the team monitoring the equipment. Often an “off-air” confidence monitor shows what the “viewer at home” is seeing.
    • Alerts: Set up alerts for core supplier demarcation points, increased error rates, or system downgrades to immediately address any issues. Measure system enhancements such as graphics systems separately so they can be bypassed if required.
    2. Virtual conferences and webinars:
    • Scenario: Hosting a virtual conference with multiple speakers and interactive sessions requires smooth transitions and minimal disruptions.
    • Monitoring: Typically far fewer equipment suppliers and components will be involved, so aggregation can be leveraged to streamline the number of monitoring points. Monitor stream health, website load, and participant connectivity.
    • Alerts: Trigger alerts for server overloads, participant dropouts, or stream interruptions to quickly deploy backup resources or troubleshoot connectivity problems.
    3. Gaming streams:
    • Scenario: Streaming a live gaming session where real-time interaction with viewers is key to maintaining engagement.
    • Monitoring: Keep an eye on frame rates, latency, and viewer engagement metrics. Larger events have become similar to live sporting events and will have similar requirements to those listed above.
    • Alerts: Set alerts for frame rate drops, increased latency, or significant drops in viewer engagement, allowing for immediate corrective actions.
    4. News broadcasting:
    • Scenario: Broadcasting live news where timeliness and reliability are critical.
    • Monitoring: Continuously track the health of main and backup transmission paths, typically at demarcation points from key equipment suppliers that are the responsibility of the team monitoring the equipment. Often an “off-air” confidence monitor shows what the “viewer at home” is seeing. Check latency against your rivals: in news, if you’re not first, you’re last.
    • Alerts: Generate alerts similar to those for live sports events, with additional attention paid to the multiple platforms being delivered to and their confidence monitors; typically news needs to be on as many screens as possible.
    5. 24/7 live linear channels:
    • Scenario: Broadcasting 24/7 linear channel services that are always serving content to users.
    • Monitoring: Monitor multiple outputs from key infrastructure components in the chain, typically at demarcation points from key equipment suppliers that are the responsibility of the team monitoring the equipment. Often an “off-air” confidence monitor shows what the “viewer at home” is seeing.
    • Alerts: Set up alerts for core supplier demarcation points, increased error rates, or system downgrades to immediately address any issues. Every service should have its main (most popular/most viewed) off-air platform monitored. If there is an issue there, you’ll want to resolve it and direct engineering support teams as efficiently as possible.

    Recommended Monitoring and Alerting Tools

    At an extremely simplified and high level, here are some of the demarcation points in a signal chain and segments of similar equipment in a transmission path. For each category we provide a list of products that can be used to implement a robust monitoring and alerting workflow for live video streaming; the lists are by no means exhaustive, nor an endorsement of any particular solution:


    A. Aggregated Mass Notification Systems
    These solutions would typically be used as endpoints for pub/sub or push notifications from multiple systems, aggregating alerts from multiple manufacturers to display the health of a single service, or providing a holistic view of a technology platform. Here we have split the tools into two categories, Data and Media, because you would want to aggregate alarms and health monitoring into a single interface, and separately you would also want to see and hear media on a single large display.

    Data

    1. New Relic
      • Features: Offers real-time performance monitoring, error tracking, and alerting for server and application performance.
      • Use Case: Ideal for monitoring server load, response times, and application health during live streaming.
    2. Datadog
      • Features: Provides end-to-end monitoring with detailed analytics, real-time alerts, and integrations with various streaming platforms.
      • Use Case: Suitable for comprehensive monitoring of video quality, latency, and server performance.
    3. DataMiner by Skyline Communications
      • Features: Offers end-to-end monitoring, fault management, and performance analytics specifically designed for media and broadcasting industries.
      • Use Case: Best for comprehensive monitoring of entire broadcast chains, optimising resource management, and ensuring high-quality content delivery.
    4. Prometheus and Grafana
      • Features: Prometheus offers powerful time-series monitoring, and Grafana provides flexible and interactive visualisations.
      • Use Case: Effective for creating customised dashboards to monitor various metrics such as server performance, video bitrate, and latency.
    5. Databricks
      • Features: Offers a data aggregation platform to collect metrics from multiple data sources, across a software stack. Uses AI models to provide insights and elevated reporting. 
      • Use Case: Offering a great overview of entire plant operations, supporting troubleshooting by technical teams, observability for operations and data insights in terms of performance for executive stakeholders. 
    6. Nagios
      • Features: An open-source platform that can be used to build live dashboards monitoring systems with multiple components taking alerts via API calls, SNMP or pub/sub webhooks. Also has a great log collector function for root cause analysis. 
      • Use Case: For anyone looking to invest significant time to build a comprehensive solution, this is a great tool that can be customised and useful for operations and engineering teams. 

    Media

    1. Grass Valley
      • Features: Grass Valley Kaleido multiviewers are configurable multi-input systems available with a range of input interfaces and models. They can display multiple video, audio, and data sources on a single video wall and issue alerts.
      • Use Case: Suitable for a modular approach where future scalability is key. Can monitor signals at each step of the chain.
    2. Imagine Communications
      • Features: Selenio and Platinum products are ideal for production environments where high-bitrate video input sources need to be monitored.
      • Use Case: Live production studios and control rooms, or playout centres distributing content up to transmission.
    3. TAG Video
      • Features: TAG Video is dedicated to building multiviewer, monitoring, and data analysis products. The platform supports a wide range of input interfaces and models, and can display multiple video, audio, and data sources on a single video wall and issue alerts.
      • Use Case: Suitable for monitoring a holistic overview of each step of the chain.  

    B. Acquisition
    Products and components at this part of the chain are responsible for capturing the video, audio and data sources. Monitoring tools here for a production workflow would normally be test and measurement devices to ensure the equipment is properly calibrated and that the output from the devices meets certain specifications. Typically this is the most critical part of the chain, where it’s much harder to have back-up devices ready to take over.

    1. Leader
      • Features: Waveform monitors and rasterizers display a range of scopes for measuring uncompressed video signals over SDI or IP. A great range of products from high to mid-end. 
      • Use Case: Measuring signal health and error rates, and line-up to ensure correct calibration.
    2. Telestream
      • Features: The company’s waveform monitors and rasterizers display a range of scopes for measuring uncompressed video signals over SDI or IP. 
      • Use Case: Measuring signal health and error rates, and line-up to ensure correct calibration. Useful in camera control rooms, post-production facilities, and quality control.
    3. TSL Systems
      • Features: Provides audio metering products to measure level, loudness, signal presence and phasing. 
      • Use Case: Audio monitoring for signal levels and integrity in any customer acquisition environment. 
    4. Leader/PHABRIX
      • Features: Also from Leader, but the popular portable handheld devices are well known as a standalone brand to any engineer working with baseband video. The devices can generate signals and analyse them using a multitude of scopes, in robust cases with a long battery life and high quality screen and simple controls. 
      • Use Case: An essential tool for analysing a host of different components in a chain during installation, routine maintenance or during fault finding. 

    C. Processing & Routing

    1. Bridge Technologies
      • Features: Specialists in monitoring probes for signals at different stages of the production chain, able to measure uncompressed signals, compressed contribution (Transmission), compressed domain (Distribution), and off-air platforms.
      • Use Case: Provides signal quality and health in a holistic manner, measuring the muxed video, audio and data streams in a consolidated signal. 
    2. Interra Systems
      • Features: Provide quality control software for measuring the signal quality, in terms of artefacts and content quality (perceptual visual and audio quality). 
      • Use Case: Can be used to measure content according to a set of business rules and allow teams to manage bulk content and alert operators on request. 

    D. Transmission

    1. Bridge Technologies
      • Features: Specialists in monitoring probes for signals at different stages of the production chain, able to measure uncompressed signals, compressed contribution (Transmission), compressed domain (Distribution), and off-air platforms.
      • Use Case: Provides signal quality and health in a holistic manner, measuring the muxed video, audio and data streams in a consolidated signal. 
    2. IMAX
      • Features: StreamSmart and StreamAware provide monitoring solutions such as quality-probing software that measures quality at multiple points along a transmission path using SSIM-based quality metrics.
      • Use Case: To ensure that a benchmark of audio and video quality is met and maintained throughout the transmission path. 
    3. Interra Systems
      • Features: Provide quality control software for measuring the signal quality, in terms of artefacts and content quality (perceptual visual and audio quality). 
      • Use Case: Can be used to measure content according to a set of business rules and allow teams to manage bulk content and alert operators on request. 

    E. Distribution

    1. Hydrolix
      • Features: Offering a data lake platform that can capture vast quantities of logging information across a distribution platform and make that queryable via an indexed search. 
      • Use Case: A perfect tool for teams responsible for monitoring multiple content delivery networks and security platforms.  
    2. PROMAX ELECTRONICS
      • Features: A range of tooling for monitoring MPEG encoders and POPs for distribution of content over DTTV, Satellite and Cable Optical Delivery Networks. 
      • Use Case: Companies managing distribution over multiple traditional broadcast networks.
    3. Touchstream
      • Features: Observability tools for monitoring media distribution over CDNs, monitoring performance and health of network distribution. Additionally providing a virtual NOC for monitoring the health of key components in the OTT transmission path from Encoder to OTT devices. 
      • Use Case: Tools are crafted for teams looking for greater observability over OTT distribution paths. 

    F. Off-Air Platforms

    1. Bridge Technologies
      • Features: Specialists in monitoring probes for signals at different stages of the production chain, able to measure uncompressed signals, compressed contribution (Transmission), compressed domain (Distribution), and off-air platforms.
      • Use Case: Provides signal quality and health in a holistic manner, measuring the muxed video, audio and data streams in a consolidated signal. 
    2. IMAX
      • Features: StreamSmart and StreamAware provide monitoring solutions such as quality-probing software that measures quality at multiple points along a transmission path using SSIM-based quality metrics.
      • Use Case: To ensure that a benchmark of audio and video quality is met and maintained throughout the transmission path. 
    3. Interra Systems
      • Features: Provide quality control software for measuring the signal quality, in terms of artefacts and content quality (perceptual visual and audio quality). 
      • Use Case: Can be used to measure content according to a set of business rules and allow teams to manage bulk content and alert operators on request. 
    4. Bitmovin Analytics
      • Features: Focuses specifically on video streaming with detailed insights into video performance, viewer engagement, and quality of experience.
      • Use Case: Excellent for monitoring video quality metrics, buffer health, and viewer engagement in real time.

    G. Managed Service Providers

    1. Stream AMG
      • Features: Leading sports OTT platform provider that allows clubs, leagues, rights holders and more to build online video services to monetize their content.
      • Use Case: Integrate with their “Headless OTT” or use their full end-to-end solution for live video delivery, monetization, engagement, analytics and content protection.
    2. M2A Media
      • Features: Automation and orchestration of AWS Media Services for premier live events. 
      • Use Case: Operations teams can use M2A interfaces to build, monitor and capture live streaming video content running on AWS, without any cloud or dev skills necessary. 
    3. LTN Global
      • Features: High-quality video transport and distribution services. Ultra-low latency video delivery, cloud-based media workflows, live production tools, and comprehensive monitoring.
      • Use Case: Real-time news coverage and remote guest contributions. Live sports events and cloud-based, remote media production. 
    4. Telstra Broadcast Services
      • Features: Comprehensive media and broadcast solutions provider with global low-latency media network. Specialists in live events and media workflow solutions.
      • Use Case: Live sports broadcasting and remote production; Festival, concert and event streaming. 
    5. Irdeto
      • Features: Managed broadcast and online content distribution infrastructure; Design, build and optimise new video platforms. 
      • Use Case: Video compression and delivery network management, high-profile event management.

    Conclusion

    In the dynamic world of live video streaming, maintaining a seamless and high-quality viewer experience is paramount. Monitoring and alerts play a crucial role in achieving this by ensuring that potential issues are identified and addressed promptly. By implementing robust monitoring and alerting systems, streamers can enhance their service reliability, optimise resources, and ultimately deliver an outstanding experience to their audience.

    The 20 Best Live Streaming Encoders: Software & Hardware [2023]
    https://bitmovin.com/blog/live-streaming-encoder/
    Thu, 30 Mar 2023 14:47:05 +0000
    Consumers expect quality video experiences at the touch of a button. Without live streaming encoders, though, this wouldn’t be possible.

    Streaming encoders are an essential tool for transporting live video across the internet. Their utility is two-fold: content distributors use encoders to digitise video (changing from analog to digital) while simultaneously shrinking gigabytes of data down to megabytes.

    In today’s competitive online video market — where quality of experience is table stakes — ensuring your team has the right encoder for your unique needs is key. Streaming data must be compressed for efficient delivery without sacrificing quality. While most encoders deliver on this requirement, they vary in terms of performance and feature set.

    But where do you start when selecting an encoder?

    Look no further. Our team of video engineering experts has put together a comprehensive comparison of the best live contribution encoders available in 2023. From free software options to 4K live streaming encoder hardware, we cover it all. Keep reading for the what, where, when, and why of video encoding.

    What is a live streaming encoder? 

    A live streaming encoder is a solution used to convert raw video data and compress it for distribution across the internet. Sometimes encoders are built into the camera, as with IP surveillance systems. But more often, broadcasters rely on software and hardware live streaming encoders to get the job done.

    Milliseconds after a stream is captured, an encoder uses video compression algorithms called codecs to condense the data. Live encoders employ lossy compression, tossing out unnecessary data to ensure the greatest reduction in file size possible without degrading perceptual video quality.

    The encoder then packages the stream for delivery across the internet. This involves putting the components of the stream into a commonly accepted contribution format such as Real-Time Messaging Protocol (RTMP) or Secure Reliable Transport (SRT). RTMP and SRT describe streaming protocols that transport content between the encoder and the online video host. 
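    As a sketch of that contribution step, here is how an RTMP push might be assembled with ffmpeg. The source file, bitrates, and ingest URL are placeholders, and a production workflow would tune these settings:

```python
def rtmp_push_command(source: str, ingest_url: str) -> list[str]:
    """Assemble an ffmpeg command that encodes a source with H.264/AAC
    and pushes it to an RTMP ingest point. The URL is a placeholder."""
    return [
        "ffmpeg",
        "-re", "-i", source,   # read the input at its native frame rate
        "-c:v", "libx264",     # H.264 video encode
        "-b:v", "4500k",
        "-c:a", "aac",         # AAC audio encode
        "-b:a", "128k",
        "-f", "flv",           # RTMP carries an FLV container
        ingest_url,
    ]

cmd = rtmp_push_command("camera_feed.mp4",
                        "rtmp://ingest.example.com/live/streamkey")
```

    The assembled list would be handed to a process runner such as `subprocess.run(cmd)` on a machine with ffmpeg installed.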

    In most cases, these streams are repackaged at the next step of the workflow for delivery to the end user. Protocols like HTTP Live Streaming (HLS) and Dynamic Adaptive Streaming over HTTP (DASH) come into play here. These protocols make the content more scalable and adaptable for delivery to viewers with varying internet speeds.

    Once the stream reaches viewers, a video decoder built into the player software or set-top box will decompress the data for playback. At this point, the video content has often been encoded, transcoded, delivered globally, and decompressed. Thanks to the efficiency afforded by the encoding solution used, viewers are none the wiser. All they know is that the video content is streaming live and in high quality.

    Transcoding vs. encoding: What’s the difference?

    The terms transcoding and encoding are often used interchangeably. We’ve even been known to combine the two here at Bitmovin. For the sake of clarity, let’s define each term:

    What is encoding?

    Encoding describes the process of converting raw video into a compressed digital format directly after the video source is captured. Video encoding always occurs early in the streaming workflow. It’s also a must for every broadcast scenario, because video content can’t be transmitted across the internet without being shrunk to a more manageable size.

    Sometimes the encoder is built into the capture device itself. Other times, it requires a secondary software or hardware encoder for live streaming. With contribution encoding, content distributors generally convert the stream for delivery via RTMP, RTSP, SRT, or another ingest protocol.

    What is transcoding? 

    Transcoding involves taking an encoded stream, decompressing and altering the content in some way, and then compressing it for delivery to end users. Transcoding isn’t always required, but when it is, it occurs after the video source has been encoded. 

    Transcoding can be done using a live video streaming solution like Bitmovin, a live stream platform like Facebook Live that has transcoding technology built into its infrastructure, or an on-premises streaming server. In common streaming workflows, RTMP-encoded streams are ingested by the transcoder and then repackaged for adaptive bitrate delivery via HLS and DASH. This ensures that the content reaches more users, plays back on more devices, and adapts to viewers’ connectivity constraints.
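
    The repackaging step typically produces an adaptive bitrate (ABR) ladder: several renditions of the same stream at different resolutions and bitrates, so the player can switch rungs based on the viewer’s bandwidth. Here’s a minimal sketch; the specific rungs below are illustrative, not a recommended ladder.

```python
# Sketch of an ABR ladder a transcoder might produce from a single
# RTMP ingest. Resolutions and bitrates are illustrative only.

LADDER = [
    {"name": "1080p", "width": 1920, "height": 1080, "bitrate_kbps": 5000},
    {"name": "720p",  "width": 1280, "height": 720,  "bitrate_kbps": 3000},
    {"name": "480p",  "width": 854,  "height": 480,  "bitrate_kbps": 1200},
    {"name": "360p",  "width": 640,  "height": 360,  "bitrate_kbps": 700},
]

def pick_rendition(available_kbps, ladder=LADDER):
    """Choose the highest-bitrate rung that fits the viewer's bandwidth."""
    fitting = [r for r in ladder if r["bitrate_kbps"] <= available_kbps]
    # Fall back to the lowest rung if even that exceeds the connection.
    return max(fitting, key=lambda r: r["bitrate_kbps"]) if fitting else ladder[-1]

print(pick_rendition(3500)["name"])  # a 3.5 Mbps connection gets the 720p rung
```

    In a real HLS or DASH deployment, the player performs this selection continuously, segment by segment, as conditions change.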

    Find out more about transcoding in our ultimate guide: Video Encoding: The Big Streaming Technology Guide [2023] 

    A simple analogy for transcoding and encoding 

    Let’s use the gasoline supply chain to better demonstrate the difference between these two live streaming processes. 

    1. First, crude oil is extracted from underground reservoirs. This crude oil can be thought of as the RAW video source itself.
    2. Next, the crude oil is refined into gasoline for bulk transport via pipelines and barges. This is the encoding stage, where the video source is distilled to its essence for efficient transmission.
    3. Finally, the gasoline is blended with ethanol and distributed to multiple destinations via tanker trucks. This represents the transcoding step, where the content is altered and packaged for end-user delivery.
    Live Streaming Encoder Workflow

    Why do I need a live streaming encoder? 

    The ability to fit more data into less space has changed the way video is stored and distributed. What once required renting VHS tapes or purchasing DVDs can now be accomplished by simply streaming video content over the top (OTT) or storing it in the cloud. Video encoders make this possible by compressing streaming data into a manageable size.

    No matter the industry or use case, encoding is a key step in the video delivery chain. Looking to build immersive online fitness experiences like ClassPass? You’ll need an encoder. Hoping to distribute breaking news online by swapping out expensive satellite trucks for a remote streaming setup? Your live encoder will play a vital role. Simulcasting to multiple online video platforms (OVP) like YouTube and Facebook? You’ll need to connect an encoder to each platform.

    What about simple broadcasts that don’t require additional software or hardware?

    Encoding may seem like an unnecessary step given that anyone can go live using their smartphone. But it’s always taking place in the background. And even when you have the option to stream directly to a site without using an encoder, doing so sacrifices quality and control. That’s why most social media sites offer live encoding software integrations like Instagram Live Producer.

    Above anything else, implementing one of the recommended encoders below paves the way for more professional live broadcasts. Most encoders allow you to manage complex productions by switching between cameras, microphones, and media assets. What’s more, advanced solutions allow you to add special effects and graphics for a more polished end-user experience.

    Luckily, free software encoders and low-cost hardware encoders exist. That means there’s no need to break the bank when designing your live streaming setup. It’s up to you to decide whether you need all the bells and whistles or if free software encoding does the trick. 

    Let’s look at some of the considerations that might sway you in either direction.

    How important is low latency?

    It depends on what you’re streaming. For standard live streams like online news, your viewers won’t likely notice a 10-second lag. On the other hand, if you’re building interactive video experiences for online gaming or e-commerce, even five seconds of latency could ruin the entire event.

    In our 2022/2023 Video Developer Report, live low latency ranked as the second-biggest challenge that content distributors are experiencing with video technology. It’s also the area where survey participants see the most opportunity for innovation in their service. Despite this, 47% of those surveyed indicated that they weren’t using low-latency streaming technology.

    Video Developer Report 2022/23: Which technology do you use for low-latency streaming?

    So what gives? Why would latency rank as a top concern when almost half of the developers participating in our report aren’t leveraging technologies designed to reduce video lag?

    As it turns out, ‘low latency’ is a relative term. For some, sub-five seconds is the ultimate goal. But for truly interactive video applications (like online betting, live auctions, and multi-player quizzes) playback delay often needs to be in the hundreds of milliseconds.

    There’s also the issue of perceived need. In conversations with customers, we’ve found that video distributors fall into one of three camps:

    1. They absolutely need low-latency or real-time streaming to ensure that the viewer experience is high quality and competitive in their market.
    2. They think they need to decrease latency because there is so much buzz surrounding the topic, but in reality, reducing the lag has minimal impact on how well the content is received.
    3. They’re well aware that quality and scalability are more important factors for their audience, and as a result, aren’t investing resources in driving down latency.

    Take it from our Chief Architect Igor Oreper:


    “A better question may not be ‘how do I minimize my live stream’s delay?’ but ‘what is the target latency I want my audience to have?’ Target latency can make a world of difference to the playback experience you’re offering your viewers.”

    Igor Oreper (Chief Architect, Bitmovin)

    Source: Low Latency vs. Target Latency: Why there isn’t always a need for speed

    How does live encoding impact latency?

    For use cases where reducing latency is a must, there are multiple opportunities to decrease the broadcast delay across the video supply chain. The live encoder, packager, CDN, and player must all be optimised accordingly. 

    Things that can impact the speed of video encoding include the encoder itself, which codec and protocols you use, and configurations like the bitrate and resolution. 

    Broadcasters committed to lightning-speed delivery should look for contribution encoders that support:

    1. Low-latency protocols like SRT and Zixi
    2. Ethernet connectivity 

    Additionally, you might have to compromise on quality by decreasing frame rate and resolution if low latency is essential.
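
    Since end-to-end delay is simply the sum of the per-stage delays across that supply chain, a latency budget is a useful planning tool. Every figure below is an illustrative assumption for the sketch; real numbers vary widely with the encoder, protocols, and player configuration.

```python
# Illustrative glass-to-glass latency budget for a live stream.
# Each per-stage figure is an assumption for this sketch, not a benchmark.

budget_ms = {
    "capture_and_encode": 500,      # contribution encoder buffering
    "first_mile_transport": 300,    # RTMP/SRT to the transcoder
    "transcode_and_package": 1500,  # ABR ladder + segmenting
    "cdn_delivery": 700,            # edge propagation
    "player_buffer": 3000,          # segments held before playback starts
}

total_s = sum(budget_ms.values()) / 1000
print(f"Glass-to-glass: ~{total_s:.1f} s")  # shows where delay accumulates
```

    Budgets like this make it obvious that shaving the encoder alone rarely gets you to low latency; the player buffer and segmenting stages usually dominate.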

    When would I need a 4K live streaming encoder?

    4K streaming (and even 8K) comes into play on the other end of the spectrum. 

    Content distributors prioritising ultra-high-definition video will need an HD live streaming encoder capable of producing source streams with a resolution of 3840 x 2160 pixels. 

    While sharper than 1080p, these high-bitrate streams are resource-intensive and costly to distribute. For that reason, you’ll want to be sure that 4K UHD is a business need and not a nice-to-have that lacks any real ROI.

    Beyond your 4K live streaming encoder, you’ll also need a 4K camera or higher, a transcoding service capable of ingesting and egressing 4K video, and an HTML5 player that supports 4K playback. Your viewers will also need 4K playback devices to benefit from these efforts. Finally, both broadcasters and end-users will require high-speed internet for these types of streams.
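
    The resource cost of UHD follows directly from the pixel count: a 3840 × 2160 frame carries exactly four times the pixels of a 1080p frame, so encoder load, bandwidth, and storage all scale accordingly.

```python
# 4K UHD vs. 1080p: the pixel count alone explains the resource cost.
uhd_pixels = 3840 * 2160  # 8,294,400 pixels per frame
fhd_pixels = 1920 * 1080  # 2,073,600 pixels per frame

print(uhd_pixels // fhd_pixels)  # → 4: each UHD frame holds four 1080p frames' worth
```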

    But back to the question at hand: What types of broadcasts warrant 4K UHD video? We can’t provide clear-cut criteria. In general, though, the following video streams are best suited for 4K encoding:

    • High-production value content like live sports events
    • Immersive experiences like virtual reality (VR) and gaming
    • Cinematic content for over-the-top (OTT) distribution

    Check out our 4K streaming customer spotlight on the Brazilian broadcaster Globo.

    Software vs. hardware encoders: Which is right for me?

    Once upon a time, dedicated hardware was the only choice for live video encoding. Computers are now powerful enough to handle such a strenuous task — but just because you can use software doesn’t mean you should. 

    Hardware encoders have the dedicated power to encode high-quality streams quickly. Software encoders, on the other hand, must make concessions to encode in real time. As a result, you’ll sacrifice quality for efficiency — or vice versa — when going with a software encoder. 

    That’s not to say that software encoding isn’t a viable option for professional broadcasting. Live streaming software like OBS, Wirecast, and vMix is cost-effective and easy to use. For that reason, we’d recommend starting with one of these solutions if you’re new to broadcasting. Audio, video, and graphics are often stored on a computer anyway, so software encoding can streamline the process. One caveat, though: Make sure your computer is up to the task if you’re going this route.

    With hardware encoding, alternatively, you’re able to free up resources and support more advanced configurations. Hardware can get pricey, though. In our list below, the best hardware encoders for live streaming run the gamut from just over $200 to just over $12,000.

    There’s also a third route to take. Encoding expert Jan Ozer advises using a hybrid workflow:

    “Many producers who use software programs like Wirecast and vMix (and TriCaster for that matter) use an external hardware-based encoder for producing their live output streams, which totally removes the encoding load from your mixing station. In very high profile engagements, you should always consider this option as well.”

    Jan Ozer, Founder of the Streaming Learning Center

    TL;DR:

    Check out our chart below for a quick breakdown of the software vs. hardware encoding debate.

    Software vs. hardware at a glance:

    • Cost: Software is cost-effective and sometimes free; hardware can get pricey
    • Form factor: Software runs on your computer; hardware is a physical appliance
    • Flexibility: Software is accessible and versatile; hardware is more robust and reliable
    • Speed: Software has slower encoding times; hardware can encode quickly and in high quality
    • Resources: Software power depends on your computing resources; hardware acts as a dedicated resource for the encoding workload
    • Equipment: Software eliminates the need for additional equipment; hardware frees up computing resources
    • Best for: Software suits simple broadcasts and user-generated content (UGC); hardware suits complex productions and live television or cable studio setups

    11 considerations when choosing a live streaming encoder

    Aside from the considerations above (whether or not your workflow will include encoding and transcoding, software vs. hardware solutions, 4K resolution, and low-latency encoding), here are 11 factors to mull over before selecting a live streaming encoder.

    1. Cost and/or ability to trial

    Price point will always be the deciding factor. If your budget is nonexistent, that makes things easy: Go with a free software option like Open Broadcaster Software (OBS). Alternatively, you might be able to gain internal buy-in on a pricier option by creating a proof of concept first. In those cases, Telestream’s Wirecast production studio software and the vMix live video production software both offer free trials to get started.

    Hardware will always be the most expensive avenue. Even so, software options have hidden costs because they must be deployed on a reasonably powerful computer. If you don’t have an adequate computer to start, hardware encoding might be right for you. Anyone dead-set on hardware but lacking in budget should go with a low-cost option like the Videon EdgeCaster.

    2. Support for your ingest protocol(s)

    All encoders covered below support RTMP output. This is the de facto standard for first-mile contribution. Most media servers can receive RTMP, and all major social media platforms like Facebook, YouTube, and Twitch accept it. That said, there’s a growing list of RTMP alternatives today. These include SRT, Zixi, QUIC, Reliable Internet Stream Transport (RIST), and Web Real-Time Communications (WebRTC).

    Often, these new technologies are open source and more advanced. SRT and RIST, for example, promise better resilience to network issues like packet loss while ensuring low-latency delivery over public networks. If these protocols play a role in your workflow, you’ll want to find an encoder that can output them. OBS and vMix both support SRT on the software front; Epiphan’s Pearl Nano and Haivision’s Makito X are great SRT options in the hardware world.

    3. Integration with existing equipment and capture devices

    Today’s encoders range from specialised component tools to out-of-the-box studio production kits. While hardware encoders help integrate all of your equipment into a full-functioning studio, they might not be compatible with your current gear. Confirm that your encoder supports the input types (HDMI vs. SDI), resolution (1080p vs. 4K), and frame rate (30 vs. 60 fps) of your camera or video source. 

    4. Compatibility and/or integration with your destination

    Is your encoder compatible with the platform to which you’re streaming? Whether the next step in your workflow is a transcoding solution, social media service, or something else entirely, you’ll want to ensure that it connects with your destination(s) prior to settling on a live streaming encoder. Some encoders even offer custom integrations with common video workflow tools. 

    OBS, for instance, integrates with a variety of video sources and transcoding solutions. These include integrations with Zoom and Bitmovin’s Streams product. Similarly, the Matrox suite of hardware encoders integrates with Facebook Live and YouTube. 

    To rehash the last three considerations in this list: It’s vital that you look at your entire streaming ecosystem and make sure the encoder you’re leaning toward fits with your tech stack. 

    5. Internet connection

    In a perfect world, all live stream encoding would utilise a wired Ethernet connection to high-speed internet. That’s not always the case though. Remote encoding has become increasingly common, which is why many encoders today offer the flexibility to use Wi-Fi, Ethernet, or both. If neither Wi-Fi nor Ethernet is available at your production location, you’ll need an encoder like LiveU that can connect via mobile networks. 

    Regardless, we always suggest testing your internet strength to verify the stability of your broadcast signal. High-speed internet is also crucial for producing 4K streams, so try to go with an Ethernet-connected encoder when UHD resolution is the goal.

    6. Use case

    The perfect encoder for your application will be ill-suited for another’s. That’s why the specifics of your scenario should help determine which encoder makes the most sense. 

    Are you streaming a high-action football match that switches between multiple cameras or a talking head commentary with a single video and audio source? Do you need to encode from remote locations or are you always broadcasting from the same studio? All of these specifics will dictate which option’s best.

    7. Feature set

    Encoders vary drastically in terms of the feature set. Some broadcasters require preset configurations for different productions, while others are vlogging from their desk with minimal requirements. There’s a lot to think about in terms of encoding features — including recording, compositing, audio mixing, lower-thirds graphics, subtitles, analytics, and monitoring.

    8. Simulcasting

    It’s also worth pondering whether you require multi-encoding functionality, simulcasting capabilities, or both. Anyone simultaneously streaming to multiple destinations should prioritise these capabilities or use a streaming solution like Bitmovin to build custom video experiences — including distribution to any device or social media platform.

    9. Redundancy

    Depending on the criticality of your streaming content, you might require encoder and/or output redundancy. This helps ensure that your stream is resilient enough to survive a cable failure, loss of internet connectivity, or hardware (computer or encoder) failure. For anyone hosting live shopping experiences or news streams, redundancy is an important consideration.

    10. How much noise can you handle (hardware) 

    Powerful hardware encoders often come with noisy built-in fans. If your encoder is stored away in a closet, this won’t impact your decision. But if your entire studio setup is constrained to the same closed space from which your stream is being broadcast, you’ll want to find a hardware encoder that keeps the sound to a minimum.

    11. Operating system (software)

    Streaming software like vMix only runs on Windows. Likewise, Wirecast isn’t available for Linux operating systems. Make sure to verify that your live software encoder is supported by your operating system before making a purchase.

    The best encoder software for live streaming is always going to be one that runs on your OS!

    Live streaming encoder glossary

    Here’s some encoding terminology to brush up on as you shop around. 

    • AV1: An open-source, royalty-free, next-generation video codec developed by the Alliance for Open Media. AV1 delivers 30% bandwidth savings compared to alternatives like VP9 and HEVC while also improving visual quality. Learn more in our AV1 datasheet.
    • Bitrate: The amount of video data transferred across a connection in a set amount of time. Video bitrate is measured in bits per second (bps). When you have a high-bitrate live stream, you’ll need a network connection capable of meeting that bps demand to ensure quality and reliability.
    • Codecs: A portmanteau of ‘coder-decoder’ or ‘compressor-decompressor’, codecs are the two-part compression algorithms that allow content distributors to condense their video and audio content for transmission across the internet. Popular video codecs include H.264/AVC, H.265/HEVC, and AV1. Popular audio codecs include AAC and MP3.
    • Frame rate: Because video is actually a series of still pictures shown in rapid succession (like a flipbook), frame rate measures how many frames appear within a second. Thus, the acronym fps (frames per second) is used synonymously. A high frame rate translates to a smooth viewer experience, whereas a low frame rate makes things a bit choppy. Frame rates can range anywhere from 10-120fps, depending on the broadcast type.
    • HLS: Short for HTTP Live Streaming, Apple HLS is the most common streaming protocol today for last-mile delivery. The adaptive HTTP-based format scales easily across content delivery networks (CDNs) and can be played back on Apple, Android, Linux, and Windows devices.
    • H.264/AVC: Also referred to as Advanced Video Coding, H.264/AVC is a widely supported codec with significant penetration into streaming, cable broadcasting, and even Blu-ray disks. It plays on virtually any device and delivers quality video streams, but is gradually declining in usage due to more advanced alternatives like H.265/HEVC and AV1. We cover these trends in more detail in our annual Video Developer Report.
    • H.265/HEVC: H.265 was developed by the ISO/IEC Moving Picture Experts Group as the successor to H.264. Also called High Efficiency Video Coding (HEVC), it generates smaller files than H.264 and supports 8K resolution. This codec is universally supported on Smart TVs and Google also recently added support, so we anticipate a major uptick in use.
    • Keyframe interval: Also called an i-frame interval, this encoding setting determines how often a whole picture is transmitted. When streaming, the complete image is only included in an initial keyframe, while subsequent delta frames depict changes from that image. This reduces bandwidth by only transmitting new data in each frame. A keyframe interval of two seconds is sufficient when streaming static scenes like a news desk, but action-packed content like sporting events often requires a shorter keyframe interval of approximately one second.
    • Mixing: Video mixing is the process of combining different audio and video files to create a singular feed. This is a primary function of most encoders and includes the ability to transition via smooth dissolves, wipes, and other special effects. 
    • MPEG-DASH: Dynamic Adaptive Streaming over HTTP, a.k.a. MPEG-DASH, is the industry-standard alternative to Apple’s HLS protocol. As such, it facilitates the last-mile delivery of streaming content to end users. Because DASH doesn’t enjoy the same playback support as HLS (iOS and Apple TV don’t support it), it lags behind in adoption.
    • RTMP: The Real-Time Messaging Protocol is an ingest protocol used to transport video and audio data from encoders to social media platforms and video streaming solutions like Bitmovin. Originally designed for the now-defunct Adobe Flash Player, RTMP runs into compatibility issues when it comes to playback on popular devices. For that reason, RTMP streams are almost always repackaged into a format like HLS or DASH.
    • Resolution: Resolution describes the number of pixels in a video frame, which determines how realistic the video appears. The higher the resolution, the crisper the picture. Resolution is measured in pixels, with today’s displays weighing in at 480p, 720p, 1080p, 4K, and 8K.
    • Simulcasting: Also called multi-streaming, simulcasting describes the ability to broadcast a live or recorded stream to multiple destinations at once. These destinations can include websites or social media platforms like YouTube and Facebook.
    • SRT: Secure Reliable Transport is an RTMP alternative designed by Haivision to ensure reliable, low-latency video contribution regardless of network quality. This emerging technology doesn’t boast the same support as RTMP but it’s quickly being adopted as a newer and better solution. SRT is especially valuable for remote video production and has powered broadcasts for ESPN, Microsoft, and Al Jazeera. Other advantages of SRT include its support for multiple audio streams, its ability to transport closed captions, and its error-correction qualities when transporting high-bitrate content.
    • VP9: A predecessor to AV1, VP9 is another royalty-free, open-source video codec. It enjoys greater compatibility than many of its alternatives and works well for high-quality compression like 4K streaming requires. Even so, the industry’s prioritisation of AV1 has stifled adoption.
    • WebRTC: Web Real-Time Communication is a technology designed for peer-to-peer streaming between browsers. The framework supports sub-500 ms latency but struggles to scale beyond small chat-based applications. Today, organisations are experimenting with using it as a contribution protocol, a delivery protocol, and an end-to-end solution. Whether or not it has a future in large-scale broadcasting has yet to be seen.
    • Zixi: Zixi is a content- and network-aware contribution protocol that dynamically adjusts to varying network conditions and employs error correction techniques for streaming over IP networks globally. This resolves the inherent limitations of low-latency live video delivery, regardless of network conditions. Capable of ultra-low latency, Zixi is a leading technology for sending professional-grade content over the internet with protection and error correction built in. Zixi is technically similar to SRT. That said, while SRT is open source, Zixi is a proprietary and licensed protocol developed by the company of the same name.
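
    To make the keyframe-interval entry above concrete: the interval is usually configured in seconds, but the encoder applies it as a GOP (group of pictures) length in frames. A small sketch, reusing the two-second news-desk and one-second sports examples from the glossary:

```python
# Keyframe (GOP) interval: seconds → frames, as described in the glossary.

def gop_frames(keyframe_interval_s, fps):
    """Number of frames between keyframes for a given interval and frame rate."""
    return int(keyframe_interval_s * fps)

print(gop_frames(2, 30))  # static news desk at 30 fps: a keyframe every 60 frames
print(gop_frames(1, 60))  # fast sports at 60 fps: also a keyframe every 60 frames
```

    Shorter intervals improve seek and stream-switch responsiveness at the cost of compression efficiency, since full pictures are transmitted more often.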

    The top 20 live streaming encoders: software and hardware

    We’ve covered the terminology and key considerations. So, without further ado, here’s our list of the top 20 live streaming encoders in 2023 — broken up to detail the software options first and hardware options next.

    Software

    1. OBS
    2. Wirecast
    3. vMix

    Hardware

    1. Videon EdgeCaster EZ Encoder
    2. AJA HELO Plus
    3. Matrox Monarch HD
    4. Osprey Talon 4K
    5. VCS NSCaster-X1
    6. Haivision Makito X and X4
    7. TASCAM VS-R264
    8. Datavideo NVS-40
    9. Magewell Ultra Encode
    10. Blackmagic ATEM Mini
    11. Black Box
    12. Orivision
    13. Axis
    14. LiveU Solo
    15. YoloLive
    16. Pearl Nano
    17. Kiloview Encoders

    1. OBS (Software)

    OBS is the live encoding software that everyone should start with. The open-source solution is free, proven, and available on multiple systems (Windows, Mac, Linux). Its dedicated community of developers works to keep the tool relevant, which means its feature set is constantly growing. Useful plug-ins and integrations are always being added for that reason.

    OBS provides everything needed to get a live streaming studio running on a laptop — including transitioning between cameras, mixing audio, and integrating additional material into the production. The software can input multiple sources, and output can take the form of a live stream, recording, or virtual camera.

    Support can be found in the community forums, Discord, and Facebook groups. OBS also offers developer docs and a knowledge base of guides curated by its volunteer support team. When you first fire up the program, OBS Studio even offers a wizard to optimise setup.

    We’ve used OBS internally at Bitmovin to generate RTMP streams for many years. We’ve also developed a new plugin for the OBS project that makes it simple to connect OBS to Bitmovin Streams. This helps streamline the process and reduce the likelihood of typos when connecting OBS with Bitmovin.

    obs live streaming encoder
    obsproject.com

    Best use case: Simple live broadcasts that require a broad range of functionality.

    Key features:

    • Real-time audio and video mixing with transitions and filters
    • Picture-in-picture shots, personalised watermarks, lower-thirds animation, and more
    • File management
    • Screen recording and multi-screen recording
    • Video conferencing and collaboration tools
    • Can be configured for low-latency and 4K streaming
    • Codecs: H.264, MP3, AAC
    • Protocols: RTMP, RTSP, SRT, RIST

    Pros:

    • Highly customisable
    • The scenes functionality allows you to configure stream settings and jump back and forth between them
    • Frequent updates and new plugins from dedicated community
    • Free

    Cons:

    • The user interface isn’t always intuitive
    • Quickly switching between sources and manipulating the broadcast in real time can be challenging
    • OBS comes with a learning curve to configure the encoding settings
    • There’s no dedicated customer support if you need to troubleshoot an issue
    • Minor bugs and glitches are reported
    • You’ll need a laptop to run OBS

    Price: OBS is and likely always will be free. This makes it a no-brainer for projects where the budget is constrained.

    Compatibility: OBS is the most widely compatible software on this list, suitable for macOS Catalina 10.15 and newer, Windows 10 release 1809 and newer or Windows 11, and Linux/Unix with the X Window System or Wayland.

    2. Wirecast (Software)

    Telestream’s Wirecast comes in two tiers: Wirecast Studio and Wirecast Pro. Both provide extensive customisation for professional productions while also enabling broadcasters to go live in a snap with built-in presets for YouTube, Facebook, and more. The intuitive interface pre-populates fields with recommended settings, and you don’t have to add new features using plug-ins as you do with OBS.

    Affordability is not Wirecast’s claim to fame, and even if you want to test it with a free trial you’ll be stuck with watermarked audio and video.

    wirecast live streaming encoder
    telestream.net/wirecast

    Best use case: Live sports events and similar live events.

    Key features:

    • Audio mixer and pan-tilt-zoom (PTZ) controllers
    • Options for live streaming, recording, and streaming to external sources
    • Automated production workflows
    • Video conferencing and remote productions
    • Integrated text, transitions, chroma key, and clocks
    • Social media comment moderation 
    • Stock media library and built-in lower-thirds title library
    • Can be configured for low latency and 4K streaming
    • Codecs: H.264, MP3, AAC
    • Protocols: RTMP, RTSP, SRT

    Pros:

    • Quick learning curve
    • Pre-populates with best encoding settings
    • Unlimited sources and destinations
    • Replays, scoreboards, clocks, and timers for sports producers

    Cons:

    • Costly software option
    • Requires a lot of computing resources
    • You’ll need a laptop to run Wirecast

    Price: Wirecast Studio costs $599 and Wirecast Pro costs $799.

    Compatibility: Wirecast is compatible with macOS Catalina 10.15 and newer, Windows 10 release 1809 and newer or Windows 11.

    3. VMix (Software)

    vMix bridges the gap between software encoding and professional-quality video productions. Designed to run on a laptop but robust enough to run alongside purpose-built hardware, vMix offers a strong feature set even at its minimum $60 price point. Plus, the 60-day trial comes with all of the Pro features.

    vMix is a great option for Windows users. Unfortunately, it isn’t supported for Mac users, and we don’t recommend running it in a virtual environment on macOS when you could go with an alternative like Wirecast or OBS.

    vmix live streaming encoder
    vmix.com

    Best use case: Windows users requiring a turnkey studio and live production system.

    Key features:

    • Audio mixing and ability to combine multiple video files
    • Simultaneous streaming, recording, and output
    • Offers extensive transition effects including cut, fade, zoom, wipe, slide, etc.
    • 100+ built-in animated titles, scoreboards, and tickers
    • HD virtual sets with high-quality chroma key
    • Can add up to 8 remote guests via vMix Call
    • Can be configured for low latency and 4K streaming
    • Codecs: H.264, H.265, AAC
    • Protocols: RTMP, RTSP, SRT, TS

    Pros:

    • vMix social allows integration with social media comments 
    • The lowest tier comes at a very affordable price, with the flexibility to pay more for additional features
    • Stable and easy-to-use software
    • Consistently adding new technologies and capabilities to the platform 
    • Generous 60-day trial period

    Cons:

    • Only a viable option for Windows users
    • You’ll need a laptop to run vMix

    Price: vMix offers a wide range of pricing options.

    • vMix basic — $60
    • vMix HD — $350
    • vMix 4K — $700
    • vMix Pro — $1,200

    Compatibility: Windows 10 or newer.

    4. Videon EdgeCaster EZ Encoder (Hardware)

    The Videon EdgeCaster EZ Encoder is a portable appliance that brings cloud functionality on premises with LiveEdge. In this way, it combines the flexibility of software encoders with the power and reliability of hardware solutions. Regular software updates ensure support for the most advanced features and the latest industry standards. What’s more, the Videon Compute Platform is integrated with Bitmovin, ensuring that live productions can be quickly configured and sent to the Bitmovin Live Event Encoder for distribution.

    videon live streaming encoder
    videonlabs.com/edgecaster-ez-encoder

    Best use case: Broadcasters looking to combine a dedicated out-of-the-box solution with cloud management.

    Key features:

    • Small portable appliance
    • Simultaneous streaming to three platforms including Facebook, YouTube, and Twitch
    • Support for ultra-low latency and 4K streaming
    • Codecs: H.264, H.265, AAC
    • Protocols: RTMP, RTSP, SRT, HLS, DASH, Low-Latency HLS, and Low-Latency CMAF for DASH
    video live streaming encoder workflow

    Pros:

    • Powerful contribution encoder designed for point-of-production
    • Ultra-low latency: Can achieve less than three seconds of latency worldwide
    • Combines benefits of on-premises hardware with cloud-based software
    • Broad protocol support
    • Easy-to-understand web interface
    • Support for HLS and DASH using CMAF

    Cons:

    Price: Starts at $1,300

    Compatibility: The Videon EdgeCaster supports HDMI and SDI input.

    5. AJA HELO Plus (Hardware)

    AJA’s HELO Plus is a reliable and compact H.264 live streaming encoder. The appliance tops out at 1080p, so anyone looking for a 4K live streaming encoder should keep on reading. But for broadcasters requiring something portable and quiet to get the job done, this is a great option. 

    AJA also offers a handful of other encoders, including the U-Tap HDMI, U-Tap SDI, and Io 4K Plus.

    aja helo plus live streaming encoder
    www.aja.com/products/helo-plus

    Best use case: On-the-go streaming in tight quarters

    Key features:

    • Portable appliance
    • Can be controlled remotely via a web browser or locally
    • Powerful multi-input video processing
    • Simultaneous streaming and recording
    • Up to two streaming outputs and destinations
    • Picture-in-picture and other graphics functionality 
    • Codecs: H.264, AAC
    • Protocols: RTMP, RTSP, RTP/UDP, SRT, HLS

    Pros:

    • Fanless and silent
    • Calendar integration for live events
    • Supports SRT streaming for a cost-effective alternative to satellite contribution
    • Comes with a three-year warranty

    Cons:

    • Doesn’t support 4K streaming

    Price: The AJA HELO Plus costs $1,869

    Compatibility: The AJA Helo Plus offers both HDMI and SDI input and output. Additional compatibility details can be found here.

    6. Matrox Monarch HD (Hardware)

    The most notable encoder of the Matrox suite of hardware is the Monarch HD. It’s small, rack-mountable, and fanless.

    matrox monarch hd live streaming encoder
    matrox.com/monarch-hd

    Best use case: Broadcasting live content that will be repurposed for additional distribution after the event.

    Key features:

    • Simultaneous streaming and recording
    • Integrates with Facebook Live and YouTube
    • One-touch stream and recording buttons
    • Remotely controlled
    • Ability to record to an SD card, USB drive, or system drive on computer
    • Codecs: H.264, AAC
    • Protocols: RTMP, RTSP

    Pros:

    • Recording is done in a higher quality to allow post-event editing with a master file
    • Presets and profiles for simple configuration
    • Accessible via web UI or API
    • Two-year warranty with live phone support

    Cons:

    • Doesn’t support 4K streaming
    • Doesn’t support protocols like SRT

    Price: The Matrox Monarch HD starts at $945

    Compatibility: The Matrox uses an HDMI input/output.

    7. Osprey Talon 4K (Hardware)

    As the first hardware encoder on our list that’s purpose-built for 4K encoding, the Osprey Talon 4K can stream up to 10-bit 4:2:2 4K at 60 fps. What’s more, it supports emerging protocols for real-time video delivery.

    The Osprey Talon 4K can be controlled over a web interface and is suitable for use as a wall-mounted appliance or on a desktop. This appliance is also integrated with Bitmovin.

    osprey talon 4k live streaming encoder
    ospreyvideo.com/talon-encoders

    Best use case: Ultra low-latency streaming for interactive experiences like live auctions

    Key features:

    • Designed for 4K encoding
    • Simultaneous streaming and recording
    • Two-year warranty
    • Codecs: H.264, H.265, AAC, OPUS
    • Protocols: RTMP, RTP, UDP, SRT, Zixi, WHIP

    Pros:

    • WHIP integration for low-latency streaming
    • Unique in offering support for a wide range of protocols, including WHIP for WebRTC, SRT, and Zixi

    Cons:

    • At almost $3K, the Osprey Talon 4K is our priciest option yet

    Price: The Osprey Talon will set you back $2,690

    Compatibility: The Osprey Talon supports HDMI and SDI input.

    8. VCS NSCaster (Hardware)

    Rather than taking the form of a black box, the NAGASOFT VCS NSCaster-X1 is a touchscreen tablet for broadcasting, switching, mixing, recording, special effects, and monitoring. This complete live production system provides the flexibility to input an encoded stream from multiple cameras and devices and produce a highly professional live stream with graphic overlays, audio mixing, recording, and distribution. Alternatively, it can also be used like a contribution encoder.

    Designed to make live streaming easier to operate, the touchscreen allows broadcasters to quickly switch between channels and start broadcasts. The NSCaster-X1 also offers Ethernet, Wi-Fi, and 4G connectivity to meet the needs of remote encoding.

    vcs nscaster-x1 live streaming encoder
    wp.vcs.ch/product/nscaster-x1/

    Best use case: Live sports production with professional broadcast features.

    Key features:

    • Touchscreen with simple user interface
    • Lightweight and portable
    • Picture-in-picture, scoreboard templates, and other graphic overlays
    • Streaming and recording
    • Multi-platform streaming to Facebook, YouTube, and more
    • Live+ connection for smartphone control
    • Can be used to operate up to four PTZ cameras with zoom, focus, aperture operation, and camera movement
    • Codecs: H.264, AAC
    • Protocols: RTMP, RTP, UDP

    Pros:

    • Equivalent of a video production truck in a tablet with more capabilities than most hardware encoders out there
    • Uniquely designed as a navigation tablet
    • Flexibility to connect to the internet via mobile networks if needed
    • Supports 2 3G-SDI and dual HDMI inputs
    • 1 3G-SDI, 1 HDMI PGM, 1 HDMI display output, gigabit ethernet, and cellular (4G) outputs
    • 802.11 b/g/n

    Cons:

    • Limited protocol and codec support
    • Doesn’t support 4K streaming
    • Not designed for low latency
    • Not cheap

    Price: The VCS NSCaster-X1 costs $4,175.

    Compatibility: The NSCaster-X1 includes two HDMI and SDI inputs, as well as two HDMI and one SDI output.

    9. Haivision Makito X and X4 (Hardware)

    The award-winning Haivision Makito X and X4 series of encoders and decoders support low-latency, high-quality transport over unpredictable networks. Specifically, Makito encoders were designed for use cases where low latency matters, including broadcast, government, enterprise, and more. 

    The Haivision Makito is your best bet when you need it all — low latency, reliability, and broadcast quality video streaming over IP. One reason for this is that Haivision themselves created the SRT protocol for low-latency, reliable video streaming.

    - Bitmovin
    haivision.com/makito-x4-series/

    Best use case: Remote field production requiring high-quality, low-latency video contribution over IP.

    Key features:

    • Multi-bitrate streaming of up to four 1080p 60 fps feeds
    • Broadcast quality video up to 4K UHD and in HDR
    • Portable design, also available as a blade for modular installation
    • Native SRT support for reliable ultra-low latency streaming over IP
    • 4:2:2 chroma subsampling for pristine color
    • Codecs: H.264, H.265, AAC
    • Protocols: RTMP, RTP, UDP, SRT

    Pros:

    • Supports SRT streaming for a cost-effective alternative to satellite contribution
    • Up to 8 encoding cores for synchronised multi-camera video streaming

    Cons:

    • Not designed with affordability in mind.

    Price: Haivision encourages interested buyers to request pricing, but we’re estimating that this live video and SRT streaming encoder falls somewhere in the $6,000-$12,000 range depending on the number of SDI inputs.

    Compatibility: The Makito X and X4 series provides 12G-SDI, 6G-SDI, 3G-SDI, HD-SDI and ST 2110 inputs and outputs.

    10. TASCAM VS-R264 (Hardware)

    Designed to address the growing demand for standalone YouTube encoders in live streaming environments, the TASCAM VS-R264 is a no-frills solution for transporting video across public networks.

    tascam vs-r264 live streaming encoder
    tascam.com/us/product/vs-r264/

    Best use case: Live streaming presentations for enterprise, house of worship, and education AV environments.

    Key features:

    • Simultaneous recording and streaming, as well as encoding and decoding
    • Simultaneous distribution to multiple streaming platforms
    • RESTful API integration
    • Supports Power over Ethernet (PoE)
    • Codecs: H.264, AAC
    • Protocols: RTMP, RTSP, and HLS
    tascam live encoding workflow
    Source: TASCAM

    Pros:

    • Can be used for both encoding and decoding

    Cons:

    • No support for 4K
    • Not designed for low latency

    Price: The TASCAM VS-R264 can be purchased for $1,499

    Compatibility: The TASCAM VS-R264 offers HDMI input and output.

    11. Datavideo NVS-40 (Hardware)

    Like many others on this list, the Datavideo NVS-40 enables video encoding and recording. The multi-channel streaming encoder has an easy-to-use web admin menu for controlling the appliance via your computer, tablet, or phone.

    datavideo nvs-40 live streaming encoder
    datavideo.com/NVS-40

    Best use case: Multi-channel live broadcasting.

    Key features:

    • Four channel capture and streaming
    • Each incoming signal is dual encoded
    • Dynamic parameter settings adjustment
    • Picture-in-picture and picture-by-picture mode
    • Codecs: H.264, AAC
    • Protocols: RTMP, RTSP, SRT, TS, HLS

    Pros:

    • Allows you to record directly to hard disk in master quality with fully customisable frame rate, GOP, and more
    • Can be used for end-user delivery via HLS

    Cons:

    • Input resolution maxes out at 1080p 60 fps

    Price: The Datavideo NVS-40 costs $1,999

    Compatibility: The Datavideo NVS-40 provides four HDMI inputs

    12. Magewell Ultra Encode (Hardware)

    This affordable universal encoder for live streaming offers configurable presets for streaming to Facebook, Twitch, and YouTube — including streaming to multiple destinations simultaneously. It’s a complete appliance for video production, contribution, and monitoring.

    magewell ultra encode live streaming encoder
    magewell.com/ultra-encode

    Best use case: Simple remote productions on a budget.

    Key features:

    • Simultaneous streaming to multiple destinations
    • Native support for Facebook, Twitch, and YouTube streaming
    • Can be controlled via web UI or APIs
    • Suitable for various network environments
    • Internet connectivity via wired Ethernet and wireless networks
    • Codecs: H.264, H.265, AAC
    • Protocols: RTMP, RTSP, HLS, RTP

    Pros:

    • Can be camera-mounted for easy remote contribution
    • Affordable

    Cons:

    • Doesn’t support high bitrate encoding or 4K streaming
    • Recording is bound to the same quality as the encoding settings.

    Price: Both the SDI and HDMI models are affordable at $469

    Compatibility: The Magewell Ultra Encode comes in two varieties: the Magewell Ultra Encode HDMI and the Magewell Ultra Encode SDI

    13. Blackmagic ATEM Mini Pro (Hardware)

    Another affordable option, the Blackmagic ATEM Mini makes it simple to switch up to eight high-quality video inputs live. Easy to use, fast to learn, and portable, this is a great encoding hardware that also offers a wealth of production features.

    blackmagic atem mini live streaming encoder
    blackmagicdesign.com/products/atemmini

    Best use case: On-the-go encoding for multi-camera setups.

    Key features:

    • Mini switcher for multi-camera environments
    • Ability to connect 4G and 5G phones to use mobile data when no Ethernet connection is available
    • Advanced chroma key for green screen keying effects
    • Built-in graphics with Photoshop plug-in and ability to connect PowerPoint slideshows or gaming consoles
    • Ability to create virtual sets
    • Source monitoring 
    • Codecs: H.264, AAC
    • Protocols: RTMP

    Pros:

    • Includes the free ATEM Software Control Panel
    • Compact design
    • Super affordable

    Cons:

    • Limited protocol and codec support
    • Not designed for novices

    Price: At $295, it’s one of the cheapest options out there.

    Compatibility: The Blackmagic ATEM mini has 4 HDMI inputs.

    14. Black Box HDMI-over-IP H.264 Encoder (Hardware)

    As the name suggests, this Black Box encoder is a straightforward H.264 live streaming encoder for delivering media over IP networks. The encoder comes in two versions: two or four ports. It can also be paired with the Black Box VS-2001 DEC Decoder for streaming across LAN or WAN.

    - Bitmovin
    black-box.de/HDMI-over-IP-H264-Encoder

    Best use case: Enterprise collaboration and corporate communications.

    Key features:

    • Supports resolutions from standard definition up to 1920x1200
    • Can be controlled via the web interface or Telnet API
    • Power over Ethernet
    • Comes with standard three-year warranty
    • Codecs: H.264, H.265, AAC, and MP2
    • Protocols: RTMP, RTP, TS through UDP, and HLS

    Pros:

    • Interfaces with HDMI signals to deliver media as far as your network reaches
    • Flexibility to choose how many HDMI ports you’d like

    Cons:

    • An expensive option considering its bare bones feature set
    • Tops out at 1080p
    • Limited codec support

    Price: Pricing isn’t public on Black Box’s website, but online retailers sell the two-port version for approximately $1,782 and the four-port version for approximately $2,273.

    Compatibility: Buyers have the option to select between one, two, or four HDMI inputs.

    15. Orivision H.265 1080p HDMI Encoder (Hardware)

    The Orivision H.265 1080p HDMI Encoder touts itself as a stable live streaming solution for remote video transmission or transporting content over WAN. It’s simple and affordable, with a solid feature set.

    orivision live streaming encoder
    orivisiontech.com/h265-1080p60hz-hdmi-video-encoder-with-lcd/

    Best use case: IPTV systems, online courses, and meeting broadcasts are all good uses for this encoder.

    Key features:

    • Ability to monitor IP status and encoding parameters in real time via built-in LCD screen.
    • Logo, image, text, and mosaic overlay with support for adjusting font sizes or scrolling subtitles
    • Local and remote transmission
    • Output 4 channels simultaneously
    • Power over Ethernet
    • Codecs: H.264, H.265, AAC
    • Protocols: RTMP, RTSP, SRT, HLS, TS over UDP, and ONVIF

    Pros:

    • Affordable
    • Web interface offers Chinese and English language selections
    • Image rotation and cropping make it well suited to social media streaming, with support for vertical screen view
    • Three-year warranty includes remote technical service and free firmware upgrade

    Cons:

    • Resolution maxes out at 1080p

    Price: The Orivision H.265 1080p HDMI Encoder costs around $233, making it a very affordable option.

    Compatibility: The Orivision H.265 allows HDMI input.

    16. Axis M71 Video Encoder (Hardware)

    Axis encoders were designed for IP-based video surveillance systems that could benefit from improved image quality, better scalability, video analytics, and a lower cost of ownership than relying on analog CCTV systems. For this reason, PTZ controls and built-in intelligent analytics are key features of the Axis M71 Video Encoder.

    axis m71 live streaming encoder
    axis.com/axis-m71-series

    Best use case: Axis encoders are best suited for CCTV across campuses, IP-based video surveillance in retail environments, and other surveillance streaming use cases.

    Key features:

    • Full frame rate in all resolutions 
    • PTZ control
    • Power over Ethernet (PoE)
    • Intelligent analytics such as motion detection and active tampering alarm
    • Built-in cybersecurity features
    • Supports all types of standard-resolution analog cameras
    • Codecs: H.264, H.265, AAC
    • Protocols: RTSP

    Pros:

    • Supports up to 16 channels
    • Purpose-built for surveillance and organisations making the migration from legacy systems to IP surveillance
    • Zipstream technology to analyse video streams in real time

    Cons:

    • Only supports 720p resolution (which is more than enough for surveillance)
    • Minimal production capabilities
    • Limited protocol support 

    Price: The AXIS M7104 Video Encoder will set you back about $335, whereas the AXIS M7116 Video Encoder costs around $875.

    17. LiveU Solo PRO HDMI/SDI (Hardware)

    The LiveU Solo encoder is well known in the industry for its ability to deliver reliable 4K video via bonded 4G and 5G. It lets content distributors go live at the touch of a button and is ideal for remote locations or congested network environments. It’s also integrated with destinations like Facebook, Amazon Live, and Microsoft Teams, and will soon be integrated with Bitmovin.

    liveu solo live streaming encoder
    liveu.tv/liveu-solo

    Best use case: As a portable encoder that supports 4K, the LiveU Solo is great for remote streaming from sports events and conferences with congested networks.

    Key features:

    • Portable bonding encoder for 4K streaming
    • Compact and mobile with a built-in battery
    • Solo Stream Tools for personal branding, stream protection, and multi-destination publishing
    • Web-based remote control from smartphones, laptops, tablets, or web browsers
    • Codecs: H.264, H.265, AAC
    • Protocols: RTMP, SRT, LRT (see below)

    Pros:

    • Offers LiveU Reliable Transport (LRT), a proprietary patented protocol allowing broadcasters to combine multiple IP connections (including cellular, WiFi, and Ethernet) to ensure bandwidth consistency
    • Lithium Ion battery supports three hours of wireless streaming

    Cons:

    • You pay for the expansive feature set that LiveU offers

    Price: Pricing for the LiveU Solo Pro starts at $1,495.

    Compatibility: The LiveU Solo PRO HDMI/SDI offers HDMI and SDI video interfaces.

    18. YoloLiv YoloBox Pro (Hardware)

    YoloLiv positions their encoders as “the industry’s first REALLY all-in-one live production system that doesn’t require anything external.” Thus, the YoloBox Pro is a one-stop encoder, video switcher, recorder, and monitor in one appliance that’s portable and reliable. This system combines the touchscreen control of the NAGASOFT VCS NSCaster-X1 with the portability and mobile connectivity of the LiveU Solo.

    yolobox pro live streaming encoder
    yololiv.com/yoloboxPro

    Best use case: YoloLiv is perfect for broadcasters looking for an all-in-one solution that eliminates the need for any additional equipment. 

    Key features:

    • All-in-one encoder, switcher, and monitor
    • Ability to live switch up to six video sources
    • Compact and mobile with a built-in battery
    • Simultaneous streaming to multiple destinations
    • Built-in chroma key for adding different backgrounds 
    • Ability to add video sources and PDF from SD card
    • Audio mixing and switching
    • Overlays for branding, picture-in-picture, scoreboards, comments, and more
    • Codecs: H.264, AAC
    • Protocols: RTMP

    Pros:

    • An all-in one solution with robust functionality
    • Portable and eliminates need for computer or workstation
    • All premium features are free with purchase and continuously being added

    Cons:

    • The compact monitor has no built-in speakers

    Price: Pricing must be requested on their website but retailers sell it for approximately $1,298 

    Compatibility: The YoloBox Pro has 3 HDMI inputs.

    19. Epiphan Pearl Nano (Hardware)

    Another portable and versatile encoder, the Epiphan Pearl Nano is a live video production hardware designed for small-scale events. It offers a range of capabilities in a compact package and includes a built-in screen for monitoring quality during live events.

    epiphan pearl live streaming encoder
    epiphan.com/compare-pearl-systems/

    Best use case: The Epiphan Pearl Nano is ideal for small-scale live events utilising SRT contribution. 

    Key features:

    • Flexible streaming, recording, and storage
    • Cloud-based configuration and monitoring
    • Built-in front screen for basic control and peace of mind
    • Production tools and custom layout designer for picture-in-picture, dynamic backgrounds, and other custom graphics
    • HDMI pass-through
    • Power over Ethernet (PoE)
    • Codecs: H.264, H.265, AAC
    • Protocols: RTMP, SRT, HLS, DASH

    Pros:

    • Portable and powerful solution
    • Broad protocol support

    Cons:

    • 4K is only available with paid upgrade
    • No chroma key capabilities

    Price: The Epiphan Pearl Nano is priced at $1,695.

    Compatibility: Both HDMI and SDI video inputs can be connected to the Pearl Nano.

    20. Kiloview H.264 HD SDI/HDMI Encoder (Hardware)

    As the final entry on our list, the Kiloview H.264 HD encoder compresses live video for streaming over the internet just as all the others do. It doesn’t offer anything fancy like a touchscreen or 5G bonding, but it also won’t break the bank if all you’re looking for is H.264 encoding.

    - Bitmovin
    kiloview.com/h264-wired/

    Best use case: The Kiloview H.264 encoder is ideal for lectures, online courses, web training, remote learning, and course recording. 

    Key features:

    • Simultaneous recording and streaming
    • Streaming to multiple destinations like YouTube and Facebook
    • Logo, text, and image overlays
    • Codecs: H.264, AAC
    • Protocols: RTMP, RTSP, RTP, SRT, HLS, ONVIF

    Pros:

    • Broad protocol support

    Cons:

    • Doesn’t support 4K streaming
    • Basic feature set

    Price: This affordable encoder costs about $350, depending on where you’re located.

    Compatibility: The Kiloview H.264 encoder comes in two models: one with HDMI and one with SDI video inputs.

    Conclusion

    So, what’s the best live streaming encoder? It all depends on your needs. For big production content and complex studio setups, a hardware encoder or even hybrid hardware + software solution is often the best route. Simple live broadcasts are well-suited for software encoders — some of which are open-source and free to use (OBS) — or affordable hardware encoders like the Blackmagic ATEM Mini Pro. 

    You’ll want to weigh all the considerations detailed above and make sure it fits with your tech stack.

    Once you’ve settled on an encoder, your next task is to find the best platform for live streaming. At Bitmovin, we deliver video infrastructure to live streaming service providers building world-class video platforms. Our live and VOD platforms can ingest streams from any of the encoders detailed above and output HLS and DASH for delivery to streaming services.  

    Find out how you can achieve the highest quality of experience on the market and deliver unbreakable streams. Get started with a free trial today.

    Alternatively, if you have any questions or require further support, don’t hesitate to reach out. We’re always eager to help you navigate the complex world of streaming.

    The post The 20 Best Live Streaming Encoders: Software & Hardware [2023] appeared first on Bitmovin.

    OBS & Bitmovin: Creating a Rockstar Streaming Experience
    https://bitmovin.com/blog/create-rockstar-streaming-experience-with-obs/
    Fri, 10 Feb 2023

    At Bitmovin, we are fortunate to have a hugely exciting roster of customers – from some of the biggest media and entertainment brands in the world to dynamic startups looking to utilize video to help them achieve their business goals.

    We spend a great deal of time talking to our customers to better understand their needs, pain points and how our products can help them. There’s no one-size-fits all when it comes to products. Not every feature or update is suitable for every customer, but it’s a real joy when you find something that can benefit as many people as possible. 

    With this in mind, we think now is a good time to present our contribution to the Open Broadcaster Software project (OBS). OBS is very much a rockstar in the world of software! It comes from humble beginnings but has become a renowned and much-used tool within the video streaming industry. 

    Our contribution addresses the need to quickly and easily generate RTMP streams and we have made that possible with our easy-to-use product Streams, which now includes a Live feature. We know it’s not always easy to set up live streams, and typing out URLs is not anyone’s idea of a good time, so we hope this first iteration will make life a little easier for our customers using OBS and Bitmovin today. 

    Who uses OBS and Why?

    First of all, OBS is not the only option for contribution encoding. Most of our customers will have deployed dedicated on-premises hardware for this purpose: a box that your equipment can be plugged into, producing the signal that can be sent to another platform. However, OBS has a major advantage, one that is of interest to any of our customers: OBS is free and runs on Linux, Mac and Windows, i.e. most laptop or desktop computers.

    OBS has a huge list of users: entrepreneurs looking for a tool to let them stream live events and generate another revenue stream alongside ticket sales; developers that need a way to send a signal to a new digital platform looking to disrupt an established market; single-user content creators vlogging to their followers; and yes, even video experts at large media companies that just need to run some quick tests. OBS is easier to monitor and use than FFmpeg, and it has a wealth of features continually added to by the project community. It’s a well-known application amongst almost anyone that works with video.

    From humble beginnings 

    OBS started back in 2013. Developer Hugh “Jim” Bailey wanted to stream his StarCraft games over the internet. He began building out the product because he had an interest in technology, a desire to build his own tools, a need to stream and the skills to code. He explains, in this wonderful podcast, the beginnings of OBS, some fundamentals around the architecture, the importance of managing latency and the drive to keep it open source.

    The development from a passion project into the product that is used by so many worldwide is a bit of a software phenomenon. It is software with a rock and roll story. OBS is the equivalent of a musician writing a song in a bedroom that goes on to become a global anthem. Companies invest millions of dollars to try and develop successful products but, just like in music, people value authenticity. Sometimes, it’s the product (or song!) that was overlooked by executives that becomes the most popular.

    Part of the reason for its meteoric rise is that it solved a common problem at a time when more and more people were looking for a similar solution on a low-to-no budget. It allowed anyone with the software to record their screen and stream the output over the internet. Its inception in 2013 coincided with the rise of YouTube, PlayStation Live and Xbox Live, while services such as Twitch fueled the demand.

    Being an open-source project, like-minded developers found they could add to the application to incorporate graphics, other video sources or video files and produce a simulated TV production using their laptop. It’s not only live streamers that benefited from OBS. Content creators for platforms like Instagram, TikTok, and YouTube, also see the value because it allows them to stand out in a very crowded online space with still screens and jump cuts.

    Where are we now?

    Historically we used OBS internally at Bitmovin, the same way most of our larger customers used the software: to quickly generate RTMP streams as a source to test our Live Encoder. The workflow would be:

    1. Log in to the Bitmovin Dashboard or use our API
    2. Create a new Live Encoding
    3. Select RTMP Push as an input
    4. While it starts, open OBS and open preferences
    5. Go to Stream – then select Custom
    6. Retrieve the RTMP push endpoint (the IP address, port and stream key)
    7. Enter into OBS
    8. Save and start streaming 
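For anyone who prefers the command line, the equivalent RTMP push can be scripted with FFmpeg instead of OBS. The sketch below only assembles and prints the FFmpeg command rather than executing it; the host, port, and stream key are hypothetical placeholders for the RTMP push details you would retrieve in step 6:

```shell
#!/bin/sh
# Hypothetical placeholders — substitute the RTMP push endpoint details
# (IP address, port, stream key) from your Bitmovin live encoding.
RTMP_HOST="203.0.113.10"
RTMP_PORT="1935"
STREAM_KEY="myStreamKey"

RTMP_URL="rtmp://${RTMP_HOST}:${RTMP_PORT}/live/${STREAM_KEY}"

# Read a local file in real time (-re), transcode to H.264/AAC,
# and push it over RTMP; swap input.mp4 for a capture device as needed.
CMD="ffmpeg -re -i input.mp4 -c:v libx264 -preset veryfast -b:v 3000k \
-c:a aac -b:a 128k -f flv ${RTMP_URL}"

echo "${CMD}"
```

Printing rather than executing keeps the sketch safe to run as-is; once the real endpoint details are filled in, run the echoed command directly to start pushing.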

    Initially, this was used for testing features and functionality in the Bitmovin encoder: quickly changing resolution, adding and removing audio streams, and so forth. This is very quick to do in OBS if you don’t have pre-written FFmpeg scripts to hand, which most customers won’t.

    In recent years though, from engaging with customers from broader backgrounds than media and entertainment, it’s become clear that OBS is used as the production tool for their live events. In some instances, we hear of customers having multiple staff members at numerous sites, all using OBS to broadcast to Bitmovin, which then delivers the high-quality HLS and DASH outputs to their digital platforms. 

    Using the existing workflow, we recognise it’s awkward to have to type in your RTMP details or just copy and paste from Bitmovin to OBS. We know this because we do it ourselves and think, “it would be so much easier if there were a Bitmovin plugin in the dropdown list instead of having to use Custom”. 

    The new workflow – connecting to Streams Live

    For users of Streams, they want to get going with the least amount of fuss, not just in Bitmovin but also in OBS, and we have contributed our own plugin to the OBS Studio project. Using the new Bitmovin plugin, users can now create as many live encodings as they need to in Bitmovin Streams. Simply follow these steps:

    Pre-requisites:

    Download version 29.0.0 or later of OBS from the official website

    If you don’t already have a Bitmovin account, sign up for a free trial on our website.

    Connect

    1: We suggest getting OBS ready first, so open up the application and open the Settings, either by navigating via the top menu

    OBS : Open Preferences

    Or pressing settings on the main control UI.

    OBS : Open Settings

    2: This opens up the Settings window, then navigate to Stream:
    (Hint: the default view here is the Custom settings – use these to connect to the RTMP endpoint with our customisable Live Encoder – great if you want to connect with SRT or RTMP)

    OBS : Settings Window

    3: Click on Service and, from the dropdown, select “Bitmovin”

    OBS : Service From Dropdown

    4: Now open your Bitmovin Dashboard – https://bitmovin.com/dashboard/streams/home
    Navigate to Streams and Create new Live

    OBS : Bitmovin Dashboard

    5: This takes you to the Streams Live creation flow – just press Start Stream to get started.

    OBS : Streams Live Creation Flow

    6: Once it has started, you’ll just need to copy the Streamkey

    OBS : Bitmovin Streamkey

    Or if the Stream was already running, copy it from the Stream status page

    OBS : Stream Status

    7: Then paste this key into OBS Settings and press OK:

    OBS : OBS Settings

    8: Press Start Streaming in OBS

    OBS : Start Streaming

    That’s it! You’re now streaming live.

    Not only does this reduce the number of steps that need to be taken, but it also reduces the likelihood of typos. The user only needs to know the API key, which generally remains static, and the name of the stream they want to connect to. 

    For customers with multiple live events running simultaneously, streamed by users at multiple locations, this will allow them to connect to streams with names that match their event, for example. 

    We have been using this version internally for testing, which is a much more efficient way to use the power of OBS and Bitmovin. Because we use it ourselves, you can bet we’ll be improving it further in the future! 

    To try it out, download the latest version of OBS from the official website

    If you don’t already have a Bitmovin account, sign up for a free trial on our website.

    Then start streaming, and let us know what you think or find answers for any questions in our Community.

    If you use OBS for profit, consider contributing to their Patreon account and give something back to the great folks that keep this project running. 

    The post OBS & Bitmovin: Creating a Rockstar Streaming Experience appeared first on Bitmovin.

    ATHENA Labs: Improving the Quality and Efficiency of Live Video Streaming with Optimizing Resource Utilization in Live Video Streaming (OSCAR) https://bitmovin.com/blog/multicast-live-video-streaming-oscar/ Thu, 08 Apr 2021 16:00:33 +0000 https://bitmovin.com/?p=164377 Live video streaming is a specific type of streaming that a video is broadcasted in real-time. The actual source of the video can be pre-recorded or simultaneously recorded. Live streaming is suitable for live venues, conferences, and gaming. In recent years the demands for watching live venues such as news, concerts, and sports have increased,...

    The post ATHENA Labs: Improving the Quality and Efficiency of Live Video Streaming with Optimizing Resource Utilization in Live Video Streaming (OSCAR) appeared first on Bitmovin.

    Live video streaming is a specific type of streaming in which a video is broadcast in real time. The actual source of the video can be pre-recorded or simultaneously recorded. Live streaming is suitable for live venues, conferences, and gaming. In recent years the demand for watching live venues such as news, concerts, and sports has increased, with an additional boost due to the COVID-19 pandemic. Moreover, new applications like e-learning, online gaming, worship, e-commerce, and social networks like Facebook and Instagram further increase the demand for live streaming support. On the client side, a large number of devices and applications with different capabilities, such as display resolution, have emerged, resulting in an increasing demand for video streaming with various characteristics such as higher resolution, high perceptual visual quality, and frame rate. To satisfy clients’ needs, it is crucial to offer multiple customized services, such as different quality levels/resolutions of the various video representations.

    How to Improve Quality of Live Video Streams

    Separate representation transfers

    The first and naive solution is to transfer each requested representation separately. Since the number of representations is limited and a large number of users can watch a live video, the same quality level would be transferred from the origin server to the corresponding clients many times. This approach generates redundant traffic and wastes a significant amount of limited network bandwidth, degrading the quality experienced by other users and services.
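The redundancy can be made concrete with a small back-of-the-envelope calculation (the bitrate ladder and the seven clients are hypothetical, chosen only for illustration):

```python
# Hypothetical quality ladder: one requested bitrate (Mbps) per client.
requested = [8, 8, 4, 4, 2, 1, 1]  # seven clients

def unicast_origin_mbps(requests):
    """Naive delivery: the origin sends a separate copy to every client."""
    return sum(requests)

def multicast_origin_mbps(requests):
    """Multicast delivery: the origin sends each distinct quality level once;
    network elements duplicate packets where the tree branches."""
    return sum(set(requests))

print(unicast_origin_mbps(requested))    # 28 Mbps leave the origin
print(multicast_origin_mbps(requested))  # 15 Mbps leave the origin
```

Even in this tiny example, per-client delivery nearly doubles the traffic leaving the origin compared to sending each distinct quality level once.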

    Transferring representation with multicast trees

    The alternative solution is employing multicast. To employ this methodology, a service provider needs to create a multicast “tree”, rooted at the origin server, for each desired quality level, connecting it to the corresponding clients. The origin server sends a single copy of each requested representation to clients through the multicast tree. The video packets are automatically duplicated in the network elements, such as routers, switches, and cellular network base stations, wherever the multicast tree branches. Fig. 1 below shows a simple scenario where a video is delivered in different qualities to seven unique clients connected through different base stations. Each delivered quality level is depicted using a different color, is independent, and must be delivered separately. As depicted below, duplicated quality levels are sent only once along a common path; for example, see QId-4 (blue) between “S1” and “P3”.

    Multicast Approach_Live Video Streaming_Workflow
    Fig.1 Multicast example

    The multicasting approach results in a considerable reduction in bandwidth utilization, especially in the internet’s core where the origin server is located. However, this approach still faces several challenges. First, each router has to maintain the state of a multicast group, which requires complicated operations in routers. Second, IP multicast routers do not have a global view of the network status and can hardly determine optimal multicast trees to ensure end-to-end quality of service (QoS) requirements. Finally, the multicast topology for video streaming is usually dynamic, i.e., clients can join and leave on the fly. However, current IP networks are not able to re-configure routing paths dynamically and adaptively. 

    Introducing the OSCAR approach

    To alleviate the current issues of classic multicasting mentioned above, the Christian Doppler Laboratory ATHENA at Alpen-Adria-Universität Klagenfurt proposes OSCAR (On Optimizing Resource Utilization in Live Video Streaming) as a new live video streaming approach. OSCAR employs two types of Virtual Network Functions (VNFs): 

    1. A set of virtual reverse proxy servers (VRPs) that are applied at the edge of the network to aggregate the clients’ requests and send them to a Software-Defined Networking (SDN) controller.
    2. A set of virtual transcoder functions (VTFs) to serve clients’ requested quality levels by transcoding them from the highest quality level. 

    After gathering requests from VRPs, the controller executes an optimization model to determine a multicast tree from the origin server to an appropriate subset of VTFs. As illustrated in Fig.2, using VTF(s) enables bandwidth usage reduction by sending only the highest requested quality level (here QId-4) from the origin server to VTF(s) over a multicast tree. Since VTFs are responsible for satisfying VRPs’ requests, they produce the lower quality levels from the highest quality level and then transmit them to the VRPs in a multicast fashion. For example, as depicted in Fig. 2, QId-4 is delivered from S1 to a VTF on P6 and then transcoded to the client’s requested quality levels in P6.

    Replacing Multicast Live Streaming_OSCAR Approach_Worflow
    Fig.2 OSCAR approach

    The OSCAR approach can be summarized into four overarching steps:

    1. In the first step, VRPs gather clients’ requests, such as join, leave, and quality changes, and then update the SDN controller accordingly.
    2. The SDN controller runs an optimization model to determine a multicast tree from the origin server to VRPs that pass through the VTFs.
    3. The VTFs produce the VRPs’ requested quality levels by transcoding from the highest quality level.
    4. The last step comprises applying outputs of the SDN controller to the network (e.g., setting up datapaths), running the VTFs,  and then transmitting data from the origin server to the requesting VRPs.

    The new OSCAR approach ensures that more viewers can view higher quality content at lower overall bandwidth expenditure (measured at the server level) at significantly faster speeds by only delivering the specified quality representations.
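The origin-side saving that the four-step flow targets can be sketched numerically (the bitrate ladder is hypothetical, for illustration only):

```python
ladder_mbps = [1, 2, 4, 8]  # hypothetical quality levels requested by VRPs

def classic_multicast_origin_mbps(requested):
    """Classic multicast: the origin sources every distinct requested quality."""
    return sum(set(requested))

def oscar_origin_mbps(requested):
    """OSCAR: the origin multicasts only the highest requested quality to the
    VTFs, which transcode the lower renditions closer to the edge."""
    return max(requested)

print(classic_multicast_origin_mbps(ladder_mbps))  # 15
print(oscar_origin_mbps(ladder_mbps))              # 8
```

The trade-off is extra transcoding work at the VTFs in exchange for sending only one representation across the network core.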

    Conclusion

    Throughout our testing of the OSCAR approach and its algorithms, we found that using VTFs resulted in substantial savings in network bandwidth usage due to transcoding to other requested quality levels. We evaluated the performance of the OSCAR approach by comparing bandwidth usage and network path selection effort of the open-source Tears of Steel video sequence using a “superfast” encoding preset with AVC codec. In the end, our most recent OSCAR test showed a 65% and 75% reduction in bandwidth usage and path selection overhead in comparison with state-of-the-art approaches, respectively.
    To view the full research, study, and analysis, download our paper on the official IEEE website published within the IEEE Transactions on Network and Service Management.
    Citation: A. Erfanian, F. Tashtarian, A. Zabrovskiy, C. Timmerer, and H. Hellwagner, “OSCAR: On Optimizing Resource Utilization in Live Video Streaming,” in IEEE Transactions on Network and Service Management, vol. 18, no. 1, pp. 552-569, March 2021, DOI: 10.1109/TNSM.2021.3051950.
    Learn more about the ATHENA Christian Doppler (CD) Laboratory here.
    Acknowledgment: The financial support of the Austrian Federal Ministry for Digital and Economic Affairs, the National Foundation for Research, Technology and Development, and the Christian Doppler Research Association, is gratefully acknowledged.
    About the Authors:
    This project is a collaboration between the ATHENA Lab, the Klagenfurt Alpen-Adria University, and Bitmovin

    • Christian Timmerer is an Associate Professor at the Alpen-Adria University, Klagenfurt, lead researcher at ATHENA, and CIO & Head of Research at Bitmovin
    • Hermann Hellwagner is a full professor at the Alpen-Adria University, Klagenfurt and lead researcher at ATHENA
    • Alireza Erfanian is a researcher and Ph.D. student from the Christian Doppler Laboratory ATHENA, Alpen-Adria-Universität, Klagenfurt
    • Farzad Tashtarian is a post-doctoral researcher for ATHENA
    • Anatoliy Zabrovskiy is a researcher and lecturer at the Alpen-Adria University, Klagenfurt


    Video Tech Deep-Dive: Live Low Latency Streaming Part 3 – Low-Latency HLS https://bitmovin.com/blog/live-low-latency-hls/ Mon, 10 Aug 2020 09:50:09 +0000 https://bitmovin.com/?p=122639 This blog post is the final piece of our Live Low-Latency Streaming series, where we previously covered the basic principles of low-latency streaming in OTT and LL-DASH. This final post focuses on latency when using Apple’s HTTP Live Streaming (HLS) protocol and how the latency time can be reduced. This article assumes that you are...

    The post Video Tech Deep-Dive: Live Low Latency Streaming Part 3 – Low-Latency HLS appeared first on Bitmovin.

    This blog post is the final piece of our Live Low-Latency Streaming series, where we previously covered the basic principles of low-latency streaming in OTT and LL-DASH. This final post focuses on latency when using Apple’s HTTP Live Streaming (HLS) protocol and how the latency time can be reduced. This article assumes that you are already familiar with the basics of HLS and its manifest/playlist mechanics. You can view the first two posts below:

    Why is latency high in HLS?

    HLS in its current specification favors stream reliability over latency: higher latency is accepted in exchange for stable playback without interruptions. In section 6.3.3, “Playing the Media Playlist File,” the HLS specification states that a playback client

    SHOULD NOT choose a segment that starts less than three target durations from the end of the playlist file

    Low Latency HLS _Earliest stream segment to join_linear visual
    Honoring this requirement results in having a latency of at least 3 target durations. Given typical target durations for current HLS deployments of 10 or 6 seconds, we would end up with a latency of at least 30 or 18 seconds, which is far from low. Even if we choose to ignore the above requirement, the fact that segments are typically produced, transferred, and consumed in their entirety poses a high risk of buffer underruns and subsequent playback interruptions, as described in more detail in the first part of this blog series.
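The spec requirement above puts a hard floor on the achievable latency, which can be sketched as:

```python
def min_hls_join_latency(target_duration_s, hold_back_segments=3):
    """A client SHOULD NOT start closer than three target durations to the
    live edge, so latency is at least 3 * targetDuration."""
    return hold_back_segments * target_duration_s

print(min_hls_join_latency(10))  # 30 s for 10 s target durations
print(min_hls_join_latency(6))   # 18 s for 6 s target durations
```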
    The HLS media playlist for the live stream depicted above would look something like this:

    Low-Latency HLS _HLS media playlist call request_code screenshot


    Road to Low-Latency HLS

    In 2017, Periscope, the most popular platform for live streaming of user-generated content at the time, investigated streaming solutions to replace their RTMP- and HLS-based hybrid approach with a more scalable one. The requirement was to offer end-to-end latency similar to RTMP but in a more cost-effective way, considering that their use case was streaming to large audiences. Periscope presented their solution to the high-latency issue: they took Apple’s HLS protocol, made two fundamental changes, and called it Low-Latency HLS (LHLS):

    1. Segments are delivered using HTTP/1.1 Chunked Transfer Coding
    2. Segments are advertised in the HLS playlist before they are available

    If you read our previous blog posts about low-latency streaming, you might recognize these simple concepts as the key ingredients of today’s OTT-based low-latency streaming approaches, like LL-DASH. Periscope’s work likely sparked and influenced the following developments around low-latency streaming, such as LL-DASH, and a community-driven initiative, started at the end of 2018, for defining modifications to HLS aimed at reducing streaming latency. 
    The core of the community proposal for LHLS was the same as the aforementioned concepts: segments should be loaded in chunks using HTTP CTE, and the early availability of incomplete segments should be signaled using a new #EXT-X-PREFETCH tag in the playlist. In the example below, the client can already load and consume the currently available data of 6.ts and continue to do so as the chunks become available over time. Furthermore, the request for the segment 7.ts can be made early on to save network round-trip time, even though its production has not started yet. It is also worth mentioning that the LHLS proposal preserves full backward compatibility, allowing standard HLS clients to consume such streams. This was the gist of the proposed implementation; you can find the full proposal in the hlsjs-rfcs GitHub repository.
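Following the 6.ts/7.ts example above, a hedged sketch of what such an LHLS playlist could look like per the community proposal (target duration and sequence numbers are illustrative):

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:3
#EXTINF:6.0,
3.ts
#EXTINF:6.0,
4.ts
#EXTINF:6.0,
5.ts
#EXT-X-PREFETCH:6.ts
#EXT-X-PREFETCH:7.ts
```

Here 6.ts is still being produced but can already be loaded via chunked transfer, while the request for 7.ts can be opened ahead of its production.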
    Low-Latency HLS _LHLS modification proposal_code screenshot
    Individuals across several companies in the media industry came together to work on this proposal, hoping that Apple, as the driving force behind HLS, would join in and work the proposal into the official HLS specification. However, things turned out very differently than expected: Apple presented its own, very different preliminary approach at its 2019 Worldwide Developers Conference.
    Despite it being (and staying) a proprietary approach, some companies, like Twitch, are successfully using it in their production systems.

    Apple’s Low-Latency HLS

    In this section we’ll cover the principles of Apple’s preliminary specification for Low-Latency HLS.

    Generation of Partial Media Segments

    While HLS content is split into individual segments, in Low-Latency HLS each segment further consists of parts that are independently addressable by the client. For example, a segment of 6 seconds can consist of 30 parts of 200ms duration each. Depending on the container format, such parts can represent CMAF chunks or a sequence of TS packets. This partitioning of segments decouples the end-to-end latency from the long segment duration and allows the client to load parts of a segment as soon as they become available. In LL-DASH, by comparison, the same effect is achieved using HTTP CTE, and the MPD does not advertise individual parts/chunks of segments.
    Partial media segment generation in Low-Latency HLS _code screenshot
    Partial segments are advertised using a new EXT-X-PART tag. Note that partial segments are only advertised for the most recent segments in the playlist. Furthermore, both the partial segments (filePart272.x.mp4) and the respective full segments (fileSequence272.mp4) are offered.
    Partial segments can also reference the same file but at different byte ranges. Clients can thereby load multiple partial segments with a single request and save round-trips compared to making separate requests for each part (as seen below).
    Low-Latency HLS_byterange variations for partial segment requests_code screenshot
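A hedged sketch of such playlist entries per Apple’s preliminary specification (durations, byte ranges, and file names are illustrative, following the fileSequence272/filePart272 naming above):

```
#EXT-X-PART-INF:PART-TARGET=0.2
#EXT-X-PART:DURATION=0.2,URI="filePart272.a.mp4",INDEPENDENT=YES
#EXT-X-PART:DURATION=0.2,URI="filePart272.b.mp4"
#EXTINF:6.0,
fileSequence272.mp4
# Byte-range form: several parts address ranges of the same file,
# so a client can fetch multiple parts with a single request.
#EXT-X-PART:DURATION=0.2,URI="fileSequence273.mp4",BYTERANGE="25000@0"
#EXT-X-PART:DURATION=0.2,URI="fileSequence273.mp4",BYTERANGE="25000@25000"
```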

    Preload hints and blocking of Media downloads

    Soon-to-be-available partial segments are advertised prior to their actual availability in the playlist by a new EXT-X-PRELOAD-HINT tag. This enables clients to open a request early, and the server will respond once the data becomes available. This way the client can “save” the round-trip time for the request.
    Low-Latency HLS _Preload hints for media segments_code screenshot
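A hedged sketch of the tag per Apple’s preliminary specification (the URI follows the illustrative naming used above):

```
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="filePart273.a.mp4"
```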

    Playlist Delta Updates

    Clients have to refresh HLS playlists more frequently for low-latency HLS. Playlist Delta Updates can be used to reduce the amount of data transferred for each playlist request. A new EXT-X-SKIP tag replaces the content of the playlist that the client already received with a previous request.
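A hedged sketch of how this could look per Apple’s preliminary specification (values are illustrative): the server announces how much history may be skipped, the client opts in with a query parameter, and the delta response replaces already-known segments with the skip tag.

```
# Server advertises in the full playlist:
#EXT-X-SERVER-CONTROL:CAN-SKIP-UNTIL=36.0

# Client requests a delta update:
#   GET /media.m3u8?_HLS_skip=YES

# Delta response replaces older segments with:
#EXT-X-SKIP:SKIPPED-SEGMENTS=20
```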

    Blocking of Playlist reload

    The discovery of new segments becoming available for an HLS live stream is usually handled by the client reloading the playlist file at regular intervals and checking for newly appended segments. In the case of low-latency streaming, it is desirable to avoid any delay from a (partial) segment becoming available in the playlist to the client discovering its availability. With the playlist-reloading approach, such discovery delay can be as high as the reload interval in the worst case.
    With the new feature of blocking playlist reloads, clients can specify which future segment’s availability they are awaiting, and the server will have to hold that playlist request until the specific segment becomes available in the playlist. The segment to be awaited is specified using a query parameter on the playlist request.
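A hedged sketch of a blocking playlist request per Apple’s preliminary specification (query-parameter names as defined there; the media sequence and part numbers are illustrative) — the server holds the response until part 2 of segment 273 is available:

```
GET /media.m3u8?_HLS_msn=273&_HLS_part=2 HTTP/1.1

# The server advertises support for this via:
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES
```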

    Rendition Reports

    When playing at low latencies, fast bitrate adaptation is crucial to avoid playback interruptions due to buffer underruns. To save round-trips during playlist switching, playlists must contain rendition reports via a new EXT-X-RENDITION-REPORT tag that informs about the most recent segment and part in the respective rendition.
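A hedged sketch of a rendition report per Apple’s preliminary specification (the URI and numbers are illustrative):

```
#EXT-X-RENDITION-REPORT:URI="../720p/media.m3u8",LAST-MSN=273,LAST-PART=3
```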

    Conclusion

    For more detailed information on Apple’s low-latency HLS take a look at the Preliminary Specification and the latest IEFT draft containing low-latency extensions for HLS.
    We can conclusively say that low-latency HLS increases complexity quite significantly compared to standard HLS. The server will have its responsibilities expanded, from simply serving segments to supporting several additional mechanisms that clients use to save network round-trips and speed up segment delivery, which ultimately enables lower end-to-end latency. Considering that the specification remains subject to change and is yet to be finalized, it might still take a while until streaming vendors pick it up and we finally see low-latency HLS in the wild. In short, live low-latency streaming using HLS is possible, but at a large cost in server complexity. Measures are being developed to reduce complexity and server load, but it will take widespread adoption by major stream providers for this to happen.

    Video Tech Deep-Dive: Live Low Latency Streaming Part 2 https://bitmovin.com/blog/live-low-latency-streaming-p2/ Thu, 25 Jun 2020 12:42:01 +0000 https://bitmovin.com/?p=118091 This blog post is continuation of an ongoing blog and webinar technical deep series. You can find the first blog post here. The first post covered the fundamentals of live low latency and defined chunked delivery methods with CMAF. This blog post expands on chunked CMAF delivery by explaining it’s application with MPEG-DASH to achieve low...

    The post Video Tech Deep-Dive: Live Low Latency Streaming Part 2 appeared first on Bitmovin.

    This blog post is a continuation of an ongoing blog and webinar technical deep-dive series. You can find the first blog post here. The first post covered the fundamentals of live low latency and defined chunked delivery methods with CMAF.
    This blog post expands on chunked CMAF delivery by explaining its application with MPEG-DASH to achieve low latency. We’ll lay some foundations and cover the basic approaches behind low-latency DASH, then look into what future developments are expected, as low-latency streaming is a heavily researched subject and is quickly becoming a media industry standard.

    Basics of MPEG-DASH Live Streaming

    Before diving into how low-latency streaming works in MPEG-DASH, we first need to understand some basic stream mechanics of DASH live streams, most importantly the concept of segment availability.
    The DASH Media Presentation Description (MPD) is an XML document containing essential metadata of a DASH stream. Among many other things, it describes which segments a stream consists of and how a playback client can obtain them. The main difference between on-demand and live streams in DASH is that for on-demand, all segments are available at all times, whereas for live streams the segments are produced continuously, one after another, as time progresses. Every time a new segment is produced, its availability is signaled to playback clients through the MPD. It is important to note that a segment is only made available once it is fully encoded and written to the origin.

    Live Low Latency-Segment Availability:Time
    Fig. 1 Live stream with template-based addressing scheme (simplified)

    The MPD would specify the start of the stream availability (i.e. the Availability Start Time) and a constant segment duration, e.g. 2 seconds. Using these values the player can calculate how many segments are currently in the availability window and also their individual availability start times. For example, the segment availability start time for the second segment would be AST + segment_duration * 2.
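A minimal sketch of that calculation (the AST value is illustrative):

```python
from datetime import datetime, timedelta, timezone

def segment_availability_time(ast, segment_duration_s, number):
    """Availability start of segment `number` (1-based): a live segment only
    becomes available once it is fully produced, i.e. at
    AST + number * segment_duration."""
    return ast + timedelta(seconds=number * segment_duration_s)

ast = datetime(2019, 8, 20, 5, 0, 3, tzinfo=timezone.utc)  # availabilityStartTime
print(segment_availability_time(ast, 2.0, 2))  # second segment: AST + 4 s
```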

    Low Latency Streaming with MPEG-DASH

    In the first part of this blog post series, we described how chunked encoding and transfer enables partial loads and consumption of segments that are still in the process of being encoded. To make a player aware of this action, the segment availability in the MPD is adjusted to signal an earlier availability, i.e. when the first chunk is complete. This is done using the availabilityTimeOffset in the MPD. As a result, the player will not wait for a segment to be fully available and will load and consume it earlier.
    Consider the example of Fig.1 with a segment duration of 2 seconds and a chunk duration of 0.033 seconds (i.e. one video frame duration with 29.97 fps). To signal the segment availability once the first chunk is completed we would set the availabilityTimeOffset to 1.967 seconds (segment_duration – chunk_duration). This would signal the greyed-out segment in Fig. 1 to become partially available.
    The below MPD represents this example:

    <?xml version="1.0" encoding="utf-8"?>
    <MPD
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
      xmlns="urn:mpeg:dash:schema:mpd:2011"
      xmlns:xlink="http://www.w3.org/1999/xlink"
     xsi:schemaLocation="urn:mpeg:DASH:schema:MPD:2011 http://standards.iso.org/ittf/PubliclyAvailableStandards/MPEG-DASH_schema_files/DASH-MPD.xsd"
      profiles="urn:mpeg:dash:profile:isoff-live:2011"
      type="dynamic"
      minimumUpdatePeriod="PT500S"
      suggestedPresentationDelay="PT2S"
      availabilityStartTime="2019-08-20T05:00:03Z"
      publishTime="2019-08-20T12:42:07Z"
      minBufferTime="PT2.0S">
      <Period start="PT0.0S">
        <AdaptationSet
          contentType="video"
          segmentAlignment="true"
          bitstreamSwitching="true"
          frameRate="30000/1001">
          <Representation
            id="0"
            mimeType="video/mp4"
            codecs="avc1.64001f"
            bandwidth="2000000"
            width="1280"
            height="720">
            <SegmentTemplate
              timescale="1000000"
              duration="2000000"
              availabilityTimeOffset="1.967"
              initialization="1566277203/init-stream$RepresentationID$.m4s"
              media="1566277203/chunk-stream_t_$RepresentationID$-$Number%05d$.m4s"
              startNumber="1">
            </SegmentTemplate>
          </Representation>
        </AdaptationSet>
      </Period>
    </MPD>

    To recap, for low-latency DASH we are mainly doing two things:

    • Chunked encoding and transfer (i.e. chunked CMAF)
    • Signaling early availability of in-progress segments
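The second point can be illustrated numerically (a sketch using the 2 s segment and 0.033 s chunk durations from the example above, with AST taken as time zero):

```python
SEGMENT_DURATION = 2.0  # seconds
CHUNK_DURATION = 0.033  # one frame at 29.97 fps

# availabilityTimeOffset announces a segment once its first chunk is complete:
ato = SEGMENT_DURATION - CHUNK_DURATION
print(round(ato, 3))  # 1.967, as in the MPD above

def availability(number, ato_s=0.0):
    """Availability start (seconds after AST) of segment `number` (1-based),
    optionally shifted earlier by availabilityTimeOffset."""
    return number * SEGMENT_DURATION - ato_s

print(availability(2))                 # 4.0   -> fully produced
print(round(availability(2, ato), 3))  # 2.033 -> first chunk complete
```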

    While the previous approach enables a basic low-latency DASH setup, there are additional considerations to be made to further optimize and stabilize the streaming experience. The DASH Industry Forum is working on guidelines for low-latency DASH to be released in the next version of the DASH-IF Interoperability Points (DASH-IF IOP) – expected in early July 2020. The change request for that can be found here. The following will explain key parts of these guidelines. Please note that some features were not officially finalized and standardized at the time of this post’s publication (June 2020).

    Wallclock Time Mapping

    For the purpose of measuring latency, a mapping between the media’s presentation time and the wall-clock time is needed. This is so that for any given presentation time of the stream the corresponding wall-clock time is known. The latency for a given playback position can then be calculated by determining the corresponding wall-clock time and subtracting it from the current wall-clock time.
    This mapping can be achieved by specifying a so-called Producer Reference Time, either in the segments (i.e. inband as a prft box) or in the MPD. It essentially specifies the wall-clock time at which the respective segment/chunk was produced (as seen below).

    <ProducerReferenceTime
      id="0"
      type="encoder"
      presentationTime="538590000000"
      wallclockTime="2020-05-19T14:57:45Z">
    </ProducerReferenceTime>

    The type attribute specifies whether the reference time was set by the capturing device or the encoder. Allowing for calculation of the End-to-End Latency (EEL) or Encoder-Display Latency (EDL), respectively.
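Using the mapping, the latency calculation can be sketched as follows (the timescale and the client clock value are assumptions for illustration; the wall-clock and presentation times mirror the element above):

```python
from datetime import datetime, timedelta, timezone

def live_latency_s(prft_wallclock, prft_presentation, timescale,
                   current_presentation, now):
    """Latency = client wall-clock time minus the wall-clock time that the
    ProducerReferenceTime maps to the current playback position."""
    elapsed_media_s = (current_presentation - prft_presentation) / timescale
    position_wallclock = prft_wallclock + timedelta(seconds=elapsed_media_s)
    return (now - position_wallclock).total_seconds()

prft_wc = datetime(2020, 5, 19, 14, 57, 45, tzinfo=timezone.utc)
latency = live_latency_s(
    prft_wallclock=prft_wc,
    prft_presentation=538_590_000_000,
    timescale=1_000_000,                    # assumed timescale
    current_presentation=538_600_000_000,   # 10 s of media later
    now=prft_wc + timedelta(seconds=13.5),  # hypothetical client clock
)
print(latency)  # 3.5
```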

    Client Time Synchronization

    A precise time/clock at the playback client is necessary for calculations that involve the client’s wallclock time such as segment availability calculations and latency calculations. It is recommended for the MPD to include a UTCTiming element which specifies a time source that can be used to adjust for any drift of the client clock. (as seen below)

    <UTCTiming
      schemeIdUri="urn:mpeg:dash:utc:http-iso:2014"
      value="https://time.akamai.com/?iso"
    />

    Low Latency Service Description

    A ServiceDescription element should be used to specify the service provider’s desired target latency and minimum/maximum latency boundaries in milliseconds. Furthermore, playback rate boundaries may be specified that define the allowed range for playback acceleration/deceleration by the playout client to fulfill the latency requirements.

    <ServiceDescription id="0">
      <Latency target="3500" min="2000" max="10000" referenceId="0"/>
      <PlaybackRate min="0.9" max="1.1"/>
    </ServiceDescription>

    In most player implementations such parameters are provided externally using configurations and APIs.
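A sketch of how a client could act on those bounds (a simple bang-bang controller; the deadband is an assumption, the other values mirror the ServiceDescription example above):

```python
def playback_rate(latency_ms, target_ms=3500, min_rate=0.9, max_rate=1.1,
                  deadband_ms=100):
    """Steer measured latency toward the target by adjusting playback speed
    within the allowed PlaybackRate range."""
    if latency_ms > target_ms + deadband_ms:
        return max_rate  # behind target: speed up to catch up
    if latency_ms < target_ms - deadband_ms:
        return min_rate  # ahead of target: slow down
    return 1.0           # within the deadband: play at normal speed

print(playback_rate(5000))  # 1.1
print(playback_rate(3500))  # 1.0
print(playback_rate(2000))  # 0.9
```

Real players typically use smoother, proportional rate adjustments, but the principle is the same.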

    Resynchronization Points

    The previous post pointed out that chunked delivery decouples the achievable latency from the segment durations and enables us to choose relatively long segment durations to maintain good video encoding efficiency. In turn, this prevents fast quality adaptation of the player as quality switching can only be done on segment boundaries. In a low-latency scenario with low buffer levels, fast adaptation — especially down-switching — would be desirable to avoid buffer underruns and consequently playback interruptions.
    To that end, Resync elements may be used that specify segment properties like chunk duration and chunk size. Playback clients can utilize them to locate resync points and:

    • Join streams mid-segment, based on latency requirements
    • Switch representations mid-segment
    • Resynchronize at mid-segment position after buffer underruns

    The previous was a glimpse of what to expect in the near future and shows the great effort the media industry has put into kick-starting low-latency streaming with MPEG-DASH and getting it ready for production services. 
    Want to learn more? Check out Part 3: Video Tech Deep-Dive: Live Low Latency Streaming Part 3 – Low-Latency HLS
    … or take a look at some of the supporting documentation below:
    [Tool] DASH-IF Conformance Tool 
    [Blog Post] Video Tech Deep-Dive: Live Low Latency Streaming Part 1 
    [Demo] Low Latency Streaming with Bitmovin’s Player 

    Video Tech Deep-Dive: Live Low Latency Streaming Part 1 https://bitmovin.com/blog/live-low-latency-streaming-p1/ Wed, 22 Apr 2020 13:49:39 +0000 https://bitmovin.com/?p=112080 What is Live Low Latency? Low Latency in live streaming is the time delay between an event’s content being captured at one end of the media delivery chain and played out to a user at the other end. Consider a goal scored at a football game: Live latency is the delay in time between the...

    The post Video Tech Deep-Dive: Live Low Latency Streaming Part 1 appeared first on Bitmovin.

    What is Live Low Latency?

    Low Latency in live streaming is the time delay between an event’s content being captured at one end of the media delivery chain and played out to a user at the other end. Consider a goal scored at a football game: Live latency is the delay in time between the moment a goal is scored and captured by a camera until the moment that a viewer sees the goal on their own device. There are a few different terms that effectively define the same experience: end-to-end latency, hand-waving latency, or glass-to-glass latency.

    End-to-end video encoding workflow illustrated
    End-to-end video encoding workflow (where latency matters)

In our most recent developer report, low latency was identified as one of the biggest challenges for the media industry. This blog series takes an in-depth look at why that's the case. Welcome to our Live Latency Deep Dive series!
    Low-Latency-Dev-Report-Graph

    Why care about Low Latency? 

    Most use cases where live latency is crucial can be categorized into the following:

    Live content delivered across multiple distribution channels

Over-the-top (OTT) streaming has historically exhibited high live latency in comparison to traditional linear broadcast delivery via satellite, terrestrial or cable services. OTT delivery methods like MPEG-DASH and Apple HLS have become the de facto standard for delivering video to audiences using mobile devices such as smartphones, tablets, laptops, and Smart TVs. Live network content, like sports or news, drives the need for low live latency as these networks attempt to deliver content simultaneously over various distribution means (e.g. OTT vs Cable).
Picture a scenario where you are streaming your favorite football team playing in the global final, while your neighbor and equally devoted fan (behind incredibly thin walls) watches traditional linear cable. There is well over a minute left in the game, yet you hear the neighbor cursing loudly. The thrill is spoiled: you know your team lost. The need for lower live latency becomes clear; the gap between broadcast and streaming is unacceptable in today's digital world. But many factors affect how quickly content appears on a viewer's screen. Aside from infrastructural issues (like not being optimized for low latency), modern streaming methods may suffer perceived latency from additional factors like social media feeds, push notifications, and second-screen experiences running in parallel to the live event.

    Interactive live content

    Whenever audience interaction is involved, live latency should be as low as possible to ensure a good quality of experience (QoE). Such use cases include webinars, auctions, user-generated content where the broadcaster interacts with the audience (e.g. Twitch, Periscope, Facebook Live, etc.) and more. Latency is often measured on a spectrum, where high latency is the least sought after delay, and Real-Time is the most sought after. See the Latency Spectrum below (including the latency types, delay time, and streaming formats):

    Live Low Latency Deep Dive_Latency Spectrum-graph
    Latency Spectrum in Video Streaming

    The latency spectrum shows that unoptimized OTT delivery accounts for around 30+ seconds of delay while cable broadcast TV clocks in at around 5 seconds – give or take. Furthermore, sub-second latencies may not be achievable with OTT methods and require other protocols like WebRTC.
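To put the spectrum into code, here is a small, purely illustrative classifier using the approximate thresholds quoted above; real deployments sit on a continuum rather than in discrete classes:

```python
# Purely illustrative buckets based on the approximate figures in the text;
# the thresholds are assumptions, not a formal industry definition.
def classify_latency(latency_s):
    if latency_s >= 30:
        return "high (unoptimized OTT)"
    if latency_s > 5:
        return "reduced latency OTT"
    if latency_s >= 1:
        return "low (broadcast-like)"
    return "sub-second / near real-time (WebRTC territory)"
```

For example, an unoptimized HLS stream at 45 s lands in the "high" bucket, while cable broadcast at around 5 s is "broadcast-like".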

    Where does live latency come from?

    First, a slightly more technical definition of live latency: It’s the time difference between a video frame being captured and the moment it’s presented to the playback client. In other words, it’s the time that a video frame spends in the media processing and delivery chain. Every component in the chain introduces a certain amount of latency and eventually accumulates to what is considered live latency. 
    Let’s have a look at the main sources of live latency:

    Buffering ahead for playback stability at the player-level

    Low-latency-Livestream-timeline-illustrated
    Live stream timeline

A video player will aim to maintain a pre-defined amount of buffered data ahead of its playback position. The standard value is about 30 seconds of buffer loaded ahead at all times during playback. One reason is that if network bandwidth drops during playback, there are still 30 seconds of data to play out without interruption, buying the player time to react to the new bandwidth conditions appropriately. Buffer time also typically influences bitrate adaptation decisions, as low buffer levels may imply more aggressive downwards adaptations.
    However, when aiming for 30 seconds of buffer with a live stream, the player must stay at least 30 seconds behind the live edge (the most recent point) of the stream with its playback position; this would result in a live latency of 30 seconds. Conversely, this means that aiming for a low latency would require being even closer to the live edge and implies having a minimum buffer. If we aim for 5 seconds of latency, the player would have 5 seconds of buffer at most. Thus, the difficult decision of trading off between latency and playback stability must be made.
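The tradeoff fits in a few lines: a player's buffer can never exceed its live latency, so a latency target acts as a hard ceiling on the buffer. A hypothetical helper (names are illustrative):

```python
# Hypothetical helper: a player targeting `target_latency_s` can never hold
# more buffer than the distance between its playhead and the live edge,
# which is the latency itself.
def buffer_ceiling(target_latency_s, desired_buffer_s):
    ceiling = target_latency_s
    # Returns the achievable buffer and whether the desired buffer fits.
    return min(desired_buffer_s, ceiling), desired_buffer_s <= ceiling
```

A 5-second latency target caps the buffer at 5 seconds, far below the usual 30-second safety margin, which is exactly the stability tradeoff described above.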

    Segments are produced, transferred and consumed in their entirety

Live streams are encoded in real time. This means that if the segment duration is 6 seconds, it will take the encoder 6 seconds to produce one full segment. Additionally, if fragmented MP4 is used as the container format, encoders can only write a segment to the desired storage once it's encoded completely, i.e. 6 seconds after starting to encode the segment. So by the time a segment is transferred to storage, its oldest frame is already 6 seconds old. On the other side of the delivery chain, the player can only decode an fMP4 segment in its entirety and therefore needs to download a segment fully before it can process it. Network transfers (uploading the segment to a CDN origin server, transferring the content within the CDN, and downloading from the CDN edge server to the client) add to the overall latency to a lesser degree.
    In summary, the fact that segments are only processed and transferred in their entirety results in latency being correlated directly to segment duration.

    Low Latency Data Segments in the Encoding Workflow Illustrated
    Data Segments in the Encoding Workflow
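A toy back-of-envelope model (every parameter here is illustrative, not a measurement) makes the correlation with segment duration explicit:

```python
# Toy model, all parameters illustrative: with whole-segment handling,
# a frame waits up to one segment duration in the encoder, plus transfer
# time, plus however many full segments the player buffers before playout.
def naive_segment_latency(segment_s, transfer_s=0.5, buffered_segments=3):
    return segment_s + transfer_s + buffered_segments * segment_s

# 6 s segments land around 24.5 s of latency in this model; shrinking
# segments to 1 s drops it to 4.5 s, which is why short segments look
# tempting at first glance.
```

The model also previews the next section: every term except the transfer time scales with segment duration, so shortening segments is the naive lever to pull.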

    What can we do?

    Naive approach: Short segments

    As latency is correlated to segment duration, a simple way to decrease latency would be to use short segments, e.g. 1-second duration. However, this comes with negative side effects such as:

    • Video coding efficiency suffers: The requirement of each video segment starting with a key frame implies having small groups of pictures (GOPs). This in turn, causes the efficiency of differential/predictive coding to suffer. With short segments, you’d have to spend more bits if you’re aiming for the same perceptual quality as longer segments with the same content.
    • More network requests and everything negative associated with them, e.g. time to first byte (TTFB) wasted on every request.
    • Increased number of segments may decrease CDN caching efficiency.
    • Buffer at the player grows in a jumpy fashion which increases the risk of playback stalls due to rebuffering.

    Chunked encoding and transfer

    To solve the problem of segments being produced and consumed only in their entirety, we can make use of the chunked encoding scheme specified in the MPEG-CMAF (Common Media Application Format) standard. CMAF defines a container format based on the ISO Base Media File Format (ISO BMFF), similar to the MP4 container format, which is already widely supported by browsers and end devices. Within its chunked encoding feature, CMAF introduces the notion of CMAF chunks. Compared to an “ordinary” fMP4 segment that has its media payload in a single big mdat box, chunked CMAF allows segments to consist of a sequence of CMAF chunks (moof+mdat tuples). In extreme cases, every frame can be put into its own CMAF chunk. This enables the encoder to produce and the player’s decoder to consume segments in a chunk-by-chunk fashion instead of limiting use to entire segment consumption. Admittedly, the MPEG-TS container format offers similar properties as chunked CMAF, but it’s fading as a format for OTT due to the lack of native device and platform support that fMP4 and CMAF provide.

    Low Latency data segments illustrated
    6s fMP4 segment compared to chunked CMAF
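Since CMAF chunks are just moof+mdat pairs in an ISO BMFF stream, counting the top-level boxes of a segment reveals how many chunks it holds. A minimal parser sketch (handling only the common size fields; not a production demuxer):

```python
import struct

def top_level_boxes(data: bytes):
    """Yield (box_type, size) for each top-level ISO BMFF box in `data`.

    Minimal sketch: handles 32-bit sizes, the 64-bit `largesize` case,
    and size == 0 ("box extends to the end of the data").
    """
    off = 0
    while off + 8 <= len(data):
        size, = struct.unpack(">I", data[off:off + 4])
        btype = data[off + 4:off + 8].decode("ascii", "replace")
        header = 8
        if size == 1:  # 64-bit largesize follows the type field
            size, = struct.unpack(">Q", data[off + 8:off + 16])
            header = 16
        elif size == 0:  # box runs to the end of the data
            size = len(data) - off
        if size < header:  # malformed input: stop instead of looping forever
            break
        yield btype, size
        off += size

def count_cmaf_chunks(segment: bytes) -> int:
    # Each CMAF chunk is a moof+mdat pair, so counting moof boxes counts chunks.
    return sum(1 for t, _ in top_level_boxes(segment) if t == "moof")
```

Running this over an "ordinary" fMP4 segment yields a single moof/mdat pair, while a chunked CMAF segment yields one pair per chunk.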

Chunked encoding on its own does not decrease latency but is a key ingredient. To capitalize on chunked encodes, we need to combine the process with HTTP/1.1 chunked transfer encoding (CTE). CTE is a feature of HTTP that allows resource transfers where the size is unknown at the time of transfer. It does so by transferring resources chunk-wise and signaling the end of a resource with a chunk of length 0. We can utilize CTE at the encoder to write CMAF chunks to storage as soon as they are produced, without waiting for the encode of the full segment to finish. This enables the player to request (also using CTE) the available CMAF chunks of a segment that is still being encoded and forward them as fast as possible to the decoder, allowing playback as soon as the first CMAF chunk is received.
    Chunked CMAF Data Segment in storage illustrated

    Implications of low latency chunked delivery

    … besides enabling low latency:

• Smoother, less jumpy client buffer levels thanks to the constant flow of CMAF chunks received, lowering the risk of buffer underruns and improving playback stability.
    • Faster stream startup (time to first frame) and seeking at the client due to being able to decode and playout segments partially during their download.
    • Higher overhead in segment file size compared to non-chunked segments as a result of the additional metadata (moof boxes, mdat headers) introduced with chunked encodes.
• Low buffer levels at the client impact playback stability. A low live latency implies the client is playing close to the live edge and has a low buffer level; the maximum achievable buffer level is bounded by the current live latency. It's a QoE tradeoff: low latency vs. playback stability.
    • Bandwidth estimation for adaptive streaming at the client is hard. When loading a segment at the bleeding live edge, the download rate will be limited by the source/encoder. As content is produced in real-time it takes, for example, 6 seconds to encode a 6-second long segment. So the download rate/time for segments is no longer limited by networks but by encoders. This causes a problem in bandwidth estimation methods that are currently commonplace in the industry and based on the download duration. The standard formula to calculate bandwidth estimation is:

estimatedBW = segmentSize / downloadDuration
E.g.: estimatedBW = 1 MB / 2 s = 4 Mbit/s

    As download duration roughly equals the segment duration when loading at the bleeding live edge using CTE, it can no longer be used to estimate client bandwidth. Bandwidth estimation is a crucial part of any adaptive streaming player and the lack of estimated bandwidth must be addressed. Research for better ways to estimate bandwidth in chunked low-latency delivery scenarios is ongoing in academia and throughout the streaming industry, e.g. ACTE.
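The difference can be sketched as follows. The burst-based variant, which only counts intervals in which bytes were actually flowing, is a simplified illustration of the idea behind chunk-aware schemes such as ACTE, not the published algorithm itself:

```python
# The naive formula from above, plus a burst-based variant that only counts
# time during which bytes were actually flowing. The latter is a simplified
# illustration of the idea behind chunk-aware estimators (e.g. ACTE), not
# the actual published algorithm.
def naive_estimate(total_bytes, total_duration_s):
    return total_bytes * 8 / total_duration_s  # bits per second

def burst_estimate(chunks):
    """chunks: iterable of (num_bytes, busy_seconds) per download burst."""
    received = sum(b for b, _ in chunks)
    busy = sum(t for _, t in chunks)
    return received * 8 / busy if busy else 0.0
```

For a 1.5 MB segment fetched at the live edge over 6 seconds, the naive estimate is throttled to the encoder's real-time pace (2 Mbit/s), while the burst estimate over the actual transfer intervals recovers something closer to the true network capacity.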
    Did you enjoy this post? Want to learn more? Check out Part two of the Low Latency series: Video Tech Deep-Dive: Live Low Latency Streaming Part 2
    …or if you want to jump ahead, take a look at Part three: Video Tech Deep-Dive: Live Low Latency Streaming Part 3 – Low-Latency HLS

    Low Latency Streaming: What is it and How can it be solved? https://bitmovin.com/blog/cmaf-low-latency-streaming/ Fri, 26 Oct 2018 08:18:05 +0000 http://bitmovin.com/?p=24688 Latency is a major challenge for the online video industry. This article takes us through what latency is, why it’s important for streaming and how CMAF low latency streaming can help to solve the problems. Live stream “latency” is the time delay between the transmission of actual live content from the source to when it...

    The post Low Latency Streaming: What is it and How can it be solved? appeared first on Bitmovin.


    Latency is a major challenge for the online video industry. This article takes us through what latency is, why it’s important for streaming and how CMAF low latency streaming can help to solve the problems.

    Live stream “latency” is the time delay between the transmission of actual live content from the source to when it is received and displayed by the playback device. Or to put it another way, the difference between the moment when the actual event is captured on camera or the live feed comes out of a playout server, and the time when the end user actually sees the content on their device’s screen.
    Typical broadcast linear stream delay ranges anywhere from 3-5 seconds whereas online streaming has historically been anywhere from 30 seconds to over 60 seconds depending on the viewing device and the video workflow used.
    The challenge for the online streaming industry is to reduce this latency to a range closer to linear broadcast signal latency (3-5 sec) or even lower, depending on the application needs. Therefore, many video providers have taken steps to optimize their live streaming workflows by rolling out new streaming standards like the Common Media Application Format (CMAF) and making changes to encoding, CDN delivery, and playback technologies to close the latency gap and to provide near real-time streaming experience for end-users. This reduced latency for online linear video streaming is commonly referred to as “Low Latency”.

    Streaming Latency Continuum
    Streaming Latency Continuum

    Linear stream/signal latency represents a continuum, as indicated in the diagram above. This diagram illustrates the historic reality of online streaming protocols such as HLS and DASH exhibiting higher latency, and nonadaptive bitrate protocols like RTP/RTSP and WebRTC exhibiting much lower sub-second latency. The discussion here is based on the adaptive bitrate protocols, HLS and MPEG-DASH.

    Why is this important for me?

    The main goal of Low Latency streaming is to keep playback as close as possible to real-time broadcasts so users can engage and interact with content as it’s unfolding. Typical applications include sports, news, betting, and gaming. Another class of latency-sensitive applications includes feedback data as part of the interactive experience – an example is the ClassPass virtual fitness class, as announced by Bitmovin here.
    Other interactive applications include game shows and social engagement. In these use-cases, synchronizing latency across multiple devices becomes valuable for viewers to have a similar chance to answer questions, or provide other interactions.

    What is CMAF?

    Common Media Application Format (CMAF) was introduced in 2016 and was co-authored by Apple and Microsoft to create a standardized transport container for streaming VoD and linear media using the MPEG-DASH or HLS protocols.
    The main goals were:
1) Reduce overhead/encoding and delivery costs through standardized encryption methods
2) Simplify complexities associated with video streaming workflows and integrations (e.g. DRM, advertising, closed captioning, caching)
3) Support a single format that can be used to stream to any online streaming device.
    When we originally posted our thoughts on CMAF, adoption was still in its infancy. But, in recent months we have seen increased adoption of CMAF across the video workflow chain and by device manufacturers. As end-user expectations to stream linear content with latency equivalent to traditional broadcast have continued to increase, and content rights to stream real-time have become more and more commonplace, CMAF has stepped in as a viable solution.

    What is CMAF Low Latency?

    When live streaming, the media (video/audio) is sent in segments that are each a few seconds (2-6 sec) long. This inherently adds a few seconds of delay from transmission to playback as the segments have to be encoded, delivered, downloaded, buffered, and then rendered by the player client, all of which is limited at a minimum by the segment size.


CMAF now comes with a low latency mode in which each segment can be split up into smaller units, called “chunks”, each of which can be 500 milliseconds or shorter depending on encoder configuration. With low latency (chunked) CMAF, the player can request incomplete segments and render all available chunks instead of waiting for the full segment to become available, thereby cutting latency down significantly.

    CMAF Chunks for low latency
    CMAF Chunks for low latency

As shown in the diagram above, a “chunk” is the smallest referenceable media unit, by definition containing a “moof” and an “mdat” atom. The mdat of a segment's first chunk holds an IDR (Instantaneous Decoder Refresh) frame, which is required to begin every “segment”. A “segment” is a collection of one or more “fragments”, and a “fragment” is a collection of one or more chunks. The “moof” box, as shown in the diagram, is required by the player to decode and render individual chunks.
    At the transmit end of the chain, encoders can output each chunk for delivery immediately after encoding it, and the player can reference and decode each one separately.

    What are we doing to solve the latency problem?

    The Bitmovin Player has supported CMAF playback for a while now. Recently, we also added support for CMAF low latency playback for HTML5 (web) and native apps (mobile) platforms. The Bitmovin Player can be configured to turn on low latency mode which then enables the player to allow chunk-based decoding and rendering without having to wait for the full segment to be downloaded.
    The Bitmovin Player optimizes start-up logic, determines buffer sizes, and adjusts playback rate to achieve near to real live streaming latency. From our testing, this can go as low as 1.8 seconds while maintaining stream stability and good video quality.
    CMAF low latency is compatible with the rest of the features that Bitmovin Player already supports today. (Ex: ads, DRM, analytics, closed captioning).

    Standard vs Chunked Segmented Streams
    Standard vs Chunked Segmented Streams

    In the diagram shown above, player buffering and decoding behavior is shown, contrasting the standard segment (standard latency) mode with the chunked segment mode, corresponding to low latency streaming.
The diagram shows that with non-chunked segments, a segment size of 4xC (where C is the duration of the lowest-granularity unit, the chunk) and three-segment buffering typically yield a player latency of 14xC.
In contrast, chunked CMAF segments are shown to achieve a latency of 2xC instead of 14xC, a 7x improvement.
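The diagram's arithmetic can be reproduced directly. Taking C = 500 ms as an illustrative chunk duration (per the encoder configurations mentioned earlier; the value is an assumption):

```python
# Back-of-envelope reproduction of the diagram's numbers: a player whose
# pipeline is `units` chunk-durations deep has a latency of units * C.
def player_latency_ms(c_ms, units):
    return c_ms * units

# From the diagram: 14 chunk-units for standard segments, 2 for chunked CMAF.
# C = 500 ms is an assumed, illustrative chunk duration.
standard_ms = player_latency_ms(500, 14)   # 7 s with standard segments
chunked_ms = player_latency_ms(500, 2)     # 1 s with chunked CMAF
improvement = standard_ms / chunked_ms     # the 7x factor from the text
```

The ratio is independent of C, which is why the 7x figure holds regardless of the chosen chunk duration.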

    Are there any trade-offs?

    In short, yes. There are some considerations, and some tradeoffs when trying to achieve low latency while still providing a high-quality viewing experience.
Buffer Size: Ideally, we want to render frames as soon as the player receives them, which means maintaining a very small buffer. But this also introduces instability in the viewing experience, especially when the player encounters unexpected interruptions (like dropped frames or frame bursts) due to network or encoder issues. Without enough locally stored frames, the player stalls or freezes until the buffer refills, which in turn requires the player to re-sync its presentation timing and leads to perceived distortions in playback. Therefore, it's recommended to maintain at least a 1-second buffer so the player can deliver a smoother playback experience that withstands some network disruption.
DRM is another factor that might introduce additional delay in start-up time: the license delivery turnaround blocks content playback even when low latency is turned on. In this case, the player adjusts to the latest live frame upon successful license delivery, and the latency settles at the configured low-latency value.

    How can I monitor these tradeoffs?

    For all of the above reasons, balancing a robust, scalable online streaming platform with minimal re-buffering and stream interruptions against the time-sensitive behavior of low latency CMAF streaming can be challenging. The solution is a holistic view of the streaming experience, provided by Bitmovin Analytics.
    Bitmovin Analytics provides insights into session quality so customers can monitor the performance of low latency streaming sessions and make real-time decisions to adjust player and encoding configurations to improve the experience. Bitmovin offers all existing video quality metrics (e.g. Startup time, Buffer Rate) and a few additional metrics to specifically monitor low latency streaming at a content level, such as:

    • Target Latency
    • Observed Latency
    • Playback Rate
    • Dropped Frames
    • Bandwidth Used

    Besides the player, what else causes latency?

    Chunked CMAF streams and low latency-enabled players are key elements in reducing latency in online streaming. However, there are other components in the video delivery chain that introduce latency at each step that need to be considered for further optimization:

    • Encoder: The encoder needs to be able to ingest live streams as quickly as possible with the encoding configuration optimized to produce the right size of chunks and segments that can then be uploaded to the Origin Server for delivery.
    • First Mile Upload: The upload time depends on the connection type at the upload facility (wired, wireless) and affects overall latency.
    • CDN: The CDN technologies need to allow for chunk-based transfers and to adopt the right caching strategies to propagate chunks across the different delivery nodes in a time-sensitive fashion.
• Last Mile: The end user’s network conditions also influence overall latency, i.e. whether the user is on a wired, WiFi, or cellular connection. It also depends on how close the user is to the CDN edge.
    • Playback: As discussed earlier, the player needs to optimize start behavior and balance buffering and playback rate to enable quick download and rendering to always be as close as possible to live time.
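As a worked, entirely hypothetical latency budget, summing assumed per-component contributions shows how a well-tuned chain can land near the 1.8-second figure quoted in this post. None of these numbers are measurements:

```python
# Entirely hypothetical per-component budget (seconds). The takeaway is that
# end-to-end latency is the sum of every hop in the chain, so each component
# must be optimized individually. None of these figures are measured values.
budget_s = {
    "encoder": 0.5,      # chunk-sized output, tuned GOP/segment settings
    "first_mile": 0.2,   # contribution upload from the venue
    "cdn": 0.3,          # chunk-wise transfer and caching at the edge
    "last_mile": 0.3,    # the viewer's own network conditions
    "player": 0.5,       # startup logic, buffer level, playback-rate control
}

def total_latency(budget: dict) -> float:
    return sum(budget.values())
```

With these assumed values the total comes to roughly 1.8 seconds; a regression in any single component pushes the whole chain above its target.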

    These steps are shown below in the end-to-end video flow diagram.

    Chunked encoding flow
    Chunked encoding flow

    With chunked segments, from our testing, we’ve seen end-to-end latency as low as 1.8 seconds. However, the customer needs to consider their entire workflow set up to ensure latency is optimized along the full chain to achieve the lowest latency achievable with their specific workflow and network.

    In conclusion …

As viewers migrate from a large-screen, by-appointment TV experience to a time-shifted, place-shifted experience with multi-device online streaming, content producers and rights holders have responded by making more premium content available online, along with brand-new classes of online media experiences involving interactivity and an emphasis on low latency delivery and playback.
The Bitmovin low latency solution shown here consists of the Bitmovin Player and Bitmovin Analytics products working together: balancing the needs of low latency live streaming across devices while providing the insights needed to proactively determine viewers’ quality of experience and to take action if undesired side effects of low latency streaming appear.

    360 degree (live) adaptive streaming with RICOH THETA S and Bitmovin https://bitmovin.com/blog/360-degree-live-adaptive-streaming-with-ricoh-theta-s-and-bitmovin/ Thu, 30 Mar 2017 13:29:09 +0000 https://bitmovin.com/?p=18960 Recently I got the RICOH THETA S 360-degree camera and I asked myself how to setup a (live) adaptive streaming session using Bitmovin cloud encoding and HTML5 player. I quickly found some general guidelines on the internet but before providing step-by-step instructions one has to consider the following: Update the firmware of your Ricoh Theta S by downloading...

    The post 360 degree (live) adaptive streaming with RICOH THETA S and Bitmovin appeared first on Bitmovin.

Recently I got the RICOH THETA S 360-degree camera and I asked myself how to set up a (live) adaptive streaming session using Bitmovin cloud encoding and the HTML5 player. I quickly found some general guidelines on the internet, but before providing step-by-step instructions one has to consider the following:

    • Update the firmware of your Ricoh Theta S by downloading the basic app, start it (while the camera is connected via USB) and go to File -> Firmware Update… and follow the steps on the screen. It’s pretty easy and mine got updated from v1.11 to v1.82.
    • Think about a storage solution for your files generated by the Bitmovin cloud encoding and possible options are FTP, Amazon S3, Google Cloud Storage, and Dropbox. I used Amazon S3 for this setup which provides a bucket name, “AWS Access Key”, and “AWS Secret Key”.
• Set up a basic website and make sure it works with the Bitmovin HTML5 player for video-on-demand services, with the content hosted on the previously selected storage solution (i.e., avoid any CORS issues). In my setup I used WordPress and the Bitmovin WordPress plugin, which makes it very easy.

    Step 1: Follow steps 1-4 from here.

Follow steps 1-4 from the general guidelines. Basically, install the live-streaming app, register the device, and install/configure OBS. Enable live streaming on the RICOH THETA S and, within OBS, use the “Custom Streaming Server” option in the “Stream” settings. That basically connects the RICOH THETA S with OBS on your local computer. The next step is forwarding this stream to the Bitmovin cloud encoding service for DASH/HLS streaming.

    Step 2: Create a new Bitmovin Output

    1. Login to the Bitmovin portal and go to Encoding -> Outputs -> Create Output
    2. Select Amazon S3 and use any “Output Profile name”, e.g., ricoh-livestream-test
    3. Enter the name of your Bucket from Amazon S3
    4. The prefix is not needed
    5. Select any “Host-Region” (preferably one close to where you are)
6. Enter the “AWS Access Key” and the “AWS Secret Key” from Amazon S3
    7. Make sure the “Create Public S3 URLs” checkbox is enabled

    An example screenshot is shown below.
Finally, click the “+” sign to create the output. If everything is correct, the output will be created; otherwise an error message is shown. In that case, make sure the bucket name and keys match those provided when creating the bucket on Amazon S3.

    Step 3: Create a new Bitmovin Livestream

    1. Login to the Bitmovin portal and go to Live (beta) -> Create Livestream
    2. Select “Encoding-Profile”: bitcodin fullHD is sufficient (4K not needed as the device provides only fullHD)
    3. Select “Output-Profile”: select the output you’ve created in previous step (ricoh-livestream-test)
    4. Add a “Livestream-Name” (any string works here), e.g., ricoh-livestream-test
    5. Add a “Stream-Key” (any string works here), e.g., ricohlivestreamtest
6. Click “Create Live Stream”; when the “Important Notice” appears, confirm by clicking “Create Live Stream” again
    7. Wait (could take some time, you may reload the page or go to the “Overview”) for RTMP PUSH URL to be used in OBS

    An example screenshot is shown below which displays the RTMP PUSH URL, Stream Key, MPD URL, and HLS URL to be used in the next steps.

    Step 4: Start Streaming in OBS

    1. Go to OBS -> Settings
    2. In section “Stream”, select “Custom Streaming Server”
    3. Enter the RTMP PUSH URL from Bitmovin in the “URL” field of OBS
    4. Enter the Stream Key from Bitmovin in the “Stream key” field of OBS
    5. Click “OK” and then click “Start Streaming” in OBS

    An example screenshot is shown below and if everything works fine OBS will stream to the Bitmovin cloud encoding service.


    Step 5: Setup the HTML5 Player

Basically, follow the instructions here; in my case I simply used WordPress and the Bitmovin WordPress plugin. In that case…

    1. Within WordPress, create a post or page and go to the Bitmovin WP plugin
    2. Select “Add New Video”
    3. Enter any name/title of the new video
    4. In the “Video” section, enter the “DASH URL” and “HLS URL” from the Bitmovin livestream provided in step 3 (i.e., the MPD URL and the HLS URL)
    5. In the “Player” section, select latest stable (in my case this was latest version 7)
    6. In the “VR” section, select startup mode “2d” and leave the rest as is

    An example screenshot is shown below.


Finally, click on “Publish” in WordPress, which will give you a shortcode to place (copy/paste) into your site or post, and you’re done!
    A similar approach can be used for video on demand content but in such a case you don’t need OBS as you simply encode your content using the Bitmovin cloud encoding and the HTML5 player for the actual streaming.
    MPEG-DASH Live Streaming with Puls 4 https://bitmovin.com/blog/mpeg-dash-live-streaming-puls-4/ Wed, 11 Nov 2015 07:12:18 +0000 http://bitmovin.com/?p=6671 PULS 4 Success Story PULS 4 is the leading private TV broadcaster in Austria. They offer linear TV as well as live streams at www.puls4.com. PULS 4 is part of the ProSiebenSat.1 Media group. Bitmovin provides cloud-based encoding services as well as the HTML5 and Flash video player ensuring a reliable webcast of the PULS...

    The post MPEG-DASH Live Streaming with Puls 4 appeared first on Bitmovin.

    PULS 4 Success Story

    PULS 4 is the leading private TV broadcaster in Austria. They offer linear TV as well as live streams at www.puls4.com. PULS 4 is part of the ProSiebenSat.1 Media group. Bitmovin provides cloud-based encoding services as well as the HTML5 and Flash video player ensuring a reliable webcast of the PULS 4 live streaming events.
    MPEG-DASH live streaming - puls4

    PULS 4’s Core Challenges of Live Streaming Events

    • need to live stream several times a month for a couple of hours with a cost-efficient infrastructure that is available when needed
    • cover all devices and platforms – desktop, tablets, smartphones
    • deliver highest video (HD/Full HD) and audio quality without buffering
    • possibility to use their own Content Delivery Network.

    “We stream a lot of unsteady live events. For us it’s most important to work with a partner where we can use live streaming capabilities as we need it. We have no fixed infrastructure costs and we need no dedicated team to stream the live events due to the usage of Bitmovin’s cloud-based encoding service”, said Andreas Zierhofer, Innovation Manager at PULS 4.

    Bitmovin’s Solution: MPEG-DASH Live Streaming

    How did Bitmovin Address the Requirements of PULS 4?

1. Bitmovin’s cloud-based encoding service processes a 10 Mbps RTMP input stream and encodes the origin content to several qualities – 250 kbps to 3 Mbps – with quality levels for all devices – 240p, 480p, 576p, 720p, 1080p. This platform delivers high-definition MPEG-DASH live streaming as well as adaptive video-on-demand streaming files.
    2. Flexible distribution using scalable HTTP Content Delivery Networks. PULS 4 uses their own CDN for distributing the live events.
3. Bitmovin’s MPEG-DASH and HLS player, based on HTML5 and Flash, ensures reliable video and audio playback of the encoded content on all devices and platforms.
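As an illustration of such a quality ladder, here is a hypothetical pairing of the quoted resolutions and bitrates together with the kind of selection logic an adaptive player applies. The exact pairings are assumptions, not PULS 4's actual configuration:

```python
# Hypothetical rendition ladder spanning the ranges quoted above
# (250 kbps to 3 Mbps across 240p-1080p); the pairings are assumptions,
# not the actual PULS 4 encoding configuration.
LADDER = [
    {"height": 240,  "bitrate_kbps": 250},
    {"height": 480,  "bitrate_kbps": 700},
    {"height": 576,  "bitrate_kbps": 1200},
    {"height": 720,  "bitrate_kbps": 1800},
    {"height": 1080, "bitrate_kbps": 3000},
]

def pick_rendition(available_kbps):
    """Highest rendition whose bitrate fits the available bandwidth."""
    fitting = [r for r in LADDER if r["bitrate_kbps"] <= available_kbps]
    return fitting[-1] if fitting else LADDER[0]
```

A viewer with roughly 2 Mbps of headroom would be served the 720p rendition, while constrained mobile connections fall back to 240p, which is the core of adaptive streaming across devices.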
