A closer look at low latency delivery

John Agger

Principal Industry Marketing Manager, Media & Entertainment, Fastly

The perceived quality of any live streaming experience depends to a large extent on driving latency out of the network. Latency, the lag between when a packet leaves the streaming source and when it arrives at the consumer’s device, shows up to viewers as lag, dropped frames, buffering, and the reduced video quality that comes with them.

Reducing delivery latency requires planning and optimization, but also an acceptance of tradeoffs between latency and cost. Building an efficient method of recording, encoding, and initially transmitting content can remove inefficiencies and latency early in the process, but much of the actual latency occurs during delivery. Content companies need solutions for both the front end of the system and the network delivery components to achieve the lowest latencies possible. In a 2021 survey, 42% of respondents considered high-quality video the number one priority for user experience, while low latency came in a close second, with 32% of respondents naming responsive video as their top priority (https://www.wowza.com/blog/2021-video-streaming-latency-report).

Defining latency, low latency, and ultra-low latency

The pipeline of content from creation to transmission to eventual reception by the consumer device requires processing, bandwidth, and time. It can often take up to 10 seconds for a live event to display on a consumer device. The average HD cable broadcast, generally treated as the benchmark for low latency, carries about 4 to 5 seconds of delay, while roughly a quarter of content networks contend with anywhere from 10 to 45 seconds of latency. So-called “glass-to-glass” latency, measured from the camera lens to the viewer’s screen, is often around 20 seconds for online streaming. There is no current standard, but low-latency delivery typically means the video is delivered to a consumer’s screen less than 4 to 5 seconds after the live action, and ultra-low latency means less than that.
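
To make those numbers concrete, here is a minimal sketch in Python of how individual pipeline stages add up to glass-to-glass latency. The per-stage delays are illustrative assumptions, not measurements from any particular pipeline:

```python
# Illustrative glass-to-glass latency budget for a live stream.
# All per-stage values are rough assumptions for demonstration only.
pipeline_delays_s = {
    "capture_and_ingest": 0.5,    # camera output to encoder input
    "encoding": 1.5,              # compressing the live feed into segments
    "packaging_and_origin": 1.0,  # segmenting, manifest updates, origin handoff
    "cdn_delivery": 1.0,          # edge caching and network transit
    "player_buffering": 6.0,      # media the player holds before playback starts
}

glass_to_glass = sum(pipeline_delays_s.values())
print(f"Estimated glass-to-glass latency: {glass_to_glass:.1f} s")
for stage, delay in pipeline_delays_s.items():
    print(f"  {stage:22s} {delay:4.1f} s ({delay / glass_to_glass:5.1%})")
```

Note that in this hypothetical budget the player’s buffer dominates, which is why protocol-level changes to buffering, discussed below, matter so much.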

Common causes of latency

There are many causes of latency in broadcast and delivery networks. The mere act of encoding a live video stream into packets to be sent over a network introduces delay. Add delivery through a variety of third-party networks to the end user’s device, and the latency grows longer. In addition, different protocols have different strengths and weaknesses, and reducing latency may not always be the primary consideration.

Apple’s original HTTP Live Streaming (HLS) protocol, introduced in 2009, for example, has a default of around 30 seconds of latency because it prioritizes existing infrastructure (HTTP) for delivery instead of more efficient but less widely adopted protocols. Other protocols might optimize the network route to the destination, minimize delays from encryption and other secondary network functions, or choose optimized encoding and decoding techniques. We should add that in 2020 Apple introduced a low-latency version of HLS, LL-HLS, in which smaller segments, among other technological advances, drive down latency.
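
The arithmetic behind that roughly 30-second figure is simple: HLS players have traditionally buffered about three segments before starting playback, and early HLS deployments used 10-second segments. A hedged sketch (the chunk durations and buffer depth below are illustrative defaults, not spec requirements):

```python
def approx_player_latency(chunk_duration_s: float, buffered_chunks: int = 3) -> float:
    """Rough player-induced latency: the media a player holds before playing.

    Ignores encode, origin, and network delays, which add to the total.
    """
    return chunk_duration_s * buffered_chunks

# Classic HLS: 10-second segments were a common early default.
print(approx_player_latency(10.0))  # ~30 s of player buffer

# LL-HLS: segments are split into much smaller partial segments ("parts"),
# so the player can start with far less media buffered.
print(approx_player_latency(0.5))   # ~1.5 s of player buffer
```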

Some common areas for reducing latency

Latency can be reduced by tuning the encoding workflow for faster processing. However, doing so will cause inefficiencies, and higher costs, elsewhere. Smaller network packets and video segments mean more per-segment overhead and lower effective throughput but will reduce latency, while larger segments improve overall throughput and efficiency at the cost of a real-time experience.
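
As a rough illustration of that tradeoff, the sketch below compares per-segment overhead against player buffer for several segment durations. The bitrate, overhead figure, and three-segment buffer are assumptions chosen for demonstration:

```python
def overhead_fraction(segment_duration_s: float,
                      bitrate_bps: float = 5_000_000,
                      per_segment_overhead_bytes: int = 2_000) -> float:
    """Fraction of delivered bytes spent on per-segment overhead
    (requests, headers, container and manifest updates). Illustrative only."""
    payload_bytes = bitrate_bps * segment_duration_s / 8
    return per_segment_overhead_bytes / (payload_bytes + per_segment_overhead_bytes)

for duration in (0.5, 2.0, 6.0, 10.0):
    print(f"{duration:4.1f} s segments -> {overhead_fraction(duration):.3%} overhead, "
          f"but roughly {duration * 3:.1f} s of player buffer")
```

Shorter segments keep the player buffer, and therefore the latency, small, while a larger share of the delivered bytes goes to overhead; longer segments reverse the tradeoff.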

The workflow of capturing and encoding media is a good place to look for opportunities to reduce latency. A well-tuned workflow can quickly deliver encoded video segments, but minimizing processing time is not the only goal. Spending more time processing can often produce more compact data streams, reducing the overall network latency. There is thus a dial between processing efficiency and network-transport efficiency, and content publishers must find the right balance.
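
A toy model of that dial, with invented encode times, output sizes, and link speed: spending more encoder time yields a smaller stream, which can lower the total per-segment latency on a constrained link:

```python
def total_segment_latency(encode_time_s: float,
                          encoded_size_bits: int,
                          link_bps: int) -> float:
    """Toy model: time to encode a segment plus time to push it over the link."""
    return encode_time_s + encoded_size_bits / link_bps

LINK_BPS = 10_000_000  # an assumed 10 Mbps delivery path

# Fast preset: quick encode, larger output.
fast = total_segment_latency(0.2, encoded_size_bits=40_000_000, link_bps=LINK_BPS)
# Slow preset: more processing time, more compact output.
slow = total_segment_latency(0.8, encoded_size_bits=25_000_000, link_bps=LINK_BPS)

print(f"fast preset: {fast:.1f} s per segment")  # 0.2 + 4.0 = 4.2 s
print(f"slow preset: {slow:.1f} s per segment")  # 0.8 + 2.5 = 3.3 s
```

On a fast enough link the fast preset wins again, which is exactly why the right balance point depends on the delivery environment.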

(Ultra-) Low latency enables interactivity and user engagement

The experience of watching live content online can be greatly improved when latency is minimized. Real-time applications, however, where latency must be all but eliminated, are another important and attractive segment. Users don’t just want to hold video conferences for a small group; they want live presentations with hundreds of participants that allow real-time feedback from audience members. They don’t just want to watch a live game; they want to engage with events in real time, taking quizzes, placing bets, and participating in the experience of the game. If latency intrudes on those experiences, customers will turn to other providers, watch different events, or engage with other people and businesses. The real benefit of ultra-low latency, then, is that content, user experiences, and data are delivered in near real time, forming the basis of better user experiences and enabling new business models. Guaranteeing low latency can set your content and business apart from your competition, driving adoption and minimizing customer churn.

For a more in-depth analysis and suggestions as to how to reduce latency for your content and turn ultra-low latency delivery into a competitive advantage, check out this solutions brief. Also, look out for upcoming blog posts that will discuss emerging technologies that can help set your business apart from the competition.

