Sycamore Networks has an interesting back-haul compression solution called IQStream. Earlier this year they published some interesting stats on their weblog about the increasing concentration of video traffic, see below. The reason behind this concentration is partly that mass-market adoption drives herd behavior: people tend to ‘follow the herd’, so viral videos have a much bigger impact. Popular websites reinforce this by adding more video capabilities and “most popular” lists.
As an example of one use case driving concentration: my 22-month-old son has been an avid YouTube user since he was 12 months old. Give him any Apple device (and now add the Kindle Fire to that list) and he’s good to go. His favorites are Sesame Street, The Wiggles, and Laurie Berkner, which he watches over, and over, and over again. He got hold of my iPhone once and busted the data plan feeding his YouTube habit. I’m sure there are many other parents out there doing the same thing: using videos to keep the kids occupied while they get the chores done. This is just one of the many thousands of use cases driving the concentration of video traffic.
2009: 10% of video objects were responsible for 40% of video traffic
2010: 10% of video objects were responsible for 65% of video traffic
2009: 1% of website domains were responsible for 60% of video traffic
2010: 1% of website domains were responsible for 90% of video traffic
Another factor to consider beyond concentration is video abandonment: we start watching a video, it stalls, the quality drops, or the content is just boring, and we give up watching. Sycamore’s analysis shows the average video session wastes almost 1 MB because of poor matching between the encoding rate and the transport rate, and because of early abandonment of the session. This significant inefficiency leads to higher levels of congestion and a degraded user experience. There are many options for better matching the encoding rate to the available transport rate; two notable approaches are:
- Video Pacing – This involves the use of various techniques to better match transport rates to the encoding rates of streaming videos. By more closely managing the user buffer depth, less instantaneous transport bandwidth is required and fewer bytes are wasted if a user abandons the video. Video pacing techniques include buffering and rate-shaping in the network, split-session video servers, and TCP session optimization to rate shape the streams. These techniques require specific knowledge of the video meta-data, client and server capabilities, and network conditions in order to prevent negative user impact (for instance, choppy or stalled video resulting from buffers not deep enough to accommodate network variability).
- Adaptive Streaming – This is separate and distinct from video pacing in that it attempts to provide the best user experience (highest video quality with lowest stalling/stopping of videos) by adaptively changing the encoding rate based on the available transport rate to the client. This approach is being actively pursued by the dominant video player vendors (Microsoft Silverlight, Adobe Flash, and Apple QuickTime) and standards bodies (HTML5). Major Internet video sources such as Netflix.com and Hulu.com already support adaptive streaming. In its most basic form, the adaptive streaming video client monitors its buffers to determine if the available network bandwidth (transport rate) is sufficient to support the video stream encoding rate. If the transport rate is low (i.e., the buffer is draining too quickly), the client requests a lower quality stream from the server to better match the rates. Correspondingly, if the transport rate is high (buffer filling too quickly), the client requests a higher quality stream.
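To make the adaptive streaming behavior above concrete, here is a minimal sketch of the client-side logic in Python. The bitrate ladder, watermark values, and function name are all illustrative assumptions, not any particular player’s implementation; the point is simply that the client watches its playback buffer and steps the requested encoding rate down when the buffer drains (transport rate too low) and up when it fills (transport rate has headroom).

```python
# Hypothetical sketch of an adaptive-streaming client's rate selection.
# The ladder of available encoding rates and the buffer watermarks are
# made-up example values, not taken from any real player.
BITRATE_LADDER_KBPS = [235, 375, 560, 750, 1050, 1750, 2350, 3000]

def next_bitrate(current_kbps, buffer_seconds,
                 low_watermark=5.0, high_watermark=30.0):
    """Pick the encoding rate to request for the next video segment.

    buffer_seconds is how much video is currently buffered. A buffer
    below the low watermark means the transport rate cannot sustain the
    current encoding rate, so step down; a buffer above the high
    watermark means there is headroom, so step up.
    """
    i = BITRATE_LADDER_KBPS.index(current_kbps)
    if buffer_seconds < low_watermark and i > 0:
        return BITRATE_LADDER_KBPS[i - 1]    # buffer draining: step down
    if buffer_seconds > high_watermark and i < len(BITRATE_LADDER_KBPS) - 1:
        return BITRATE_LADDER_KBPS[i + 1]    # buffer healthy: step up
    return current_kbps                      # in the comfort zone: hold

print(next_bitrate(1050, 3.2))   # nearly empty buffer -> 750
print(next_bitrate(1050, 45.0))  # full buffer -> 1750
```

Real clients add smoothing and throughput estimation to avoid oscillating between rates, but this captures the core feedback loop that distinguishes adaptive streaming from network-side pacing.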
In some respects, these two schemes are in conflict. Video pacing takes the network perspective and attempts to optimize the network resources needed to deliver a video stream whereas adaptive streaming takes the user perspective and attempts to deliver the highest bit-rate quality for a given network condition. Nevertheless, the two techniques can be used together to provide the best overall user experience for a collection of users (not just a single user) in the face of constrained network bandwidth. Video pacing and adaptive streaming represent just a couple of the many tools and techniques operators can use to better manage the unpredictable data behavior in their networks. Without content and flow optimization at critical points in the network, congestion and its resulting user impact will occur more frequently and lead to expensive over-building of the network and costly subscriber churn.
Content delivery networks mitigate this problem to some extent, but the operators’ networks remain in some cases a closed system where rates vary widely based on radio access conditions and network congestion. At present there’s a dichotomy in the industry: content owners and content consumers have both paid for internet access and content delivery and expect the content to just work, while the operator is trying to change the game, claiming that ensuring it works requires content owners and customers to pay again. Individually, operators have little chance of making this work; their best chance would be to strike deals with the main content delivery networks, or to buy them as Level 3 did.
Other options like the turbo button have been tried in the cable industry for many years thanks to PCMM (PacketCable Multimedia), and customers do not like them: they paid for internet access, just like they paid for voice service, and do not expect to be ‘nickel and dimed’ for what should just work. Another option is to tier internet access plans, as Verizon FiOS does: if the customer buys the top tier the service just works, while at a lower tier the service may not, and it’s up to the customer to upgrade to the premium package. The key to success in mass-market adoption is simplicity; turbo buttons, per-content charges, and QoS APIs are complex from both a business and a user-experience perspective, and apply to niche markets, not the mass market.