How much do you know about packet loss? Here's how it works: you send packets, then measure how many were received versus how many were sent. Bam, packet loss. Yet there is something that every unified communications and video collaboration administrator is missing.

Packet Loss % = (Packets Sent – Packets Received) / Packets Sent × 100

Why is it important? Well, it is one of the best ways to understand the dependability and quality of your network. For video, it is closely tied to the call quality that participants see in a meeting. In practice, packet loss means frozen screens, delays, and artifacts in the image. Video is very demanding traffic for a network, so packet loss is noticeable to humans at 0.5% and annoying at anything greater than 2%. In fact, if packet loss is much greater than 2%, a call might fail completely.
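To make that concrete, here is a minimal Python sketch of the formula and those rough perception thresholds. The function names and sample numbers are illustrative only, not from any particular video product:

```python
def packet_loss_percent(packets_sent: int, packets_received: int) -> float:
    """Packet Loss % = (Packets Sent - Packets Received) / Packets Sent * 100"""
    if packets_sent == 0:
        return 0.0
    return (packets_sent - packets_received) / packets_sent * 100


def describe(loss_pct: float) -> str:
    # Thresholds from the article: noticeable at 0.5%, annoying above 2%.
    if loss_pct > 2.0:
        return "annoying, call may fail"
    if loss_pct >= 0.5:
        return "noticeable to humans"
    return "good"


loss = packet_loss_percent(packets_sent=10_000, packets_received=9_940)
print(f"{loss:.2f}% packet loss: {describe(loss)}")
# 0.60% packet loss: noticeable to humans
```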

Packet loss is also fed into quality functions and algorithms that drive changes such as lowering resolution or bandwidth (a.k.a. down-speeding).
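As a rough illustration of that idea (not any particular vendor's algorithm; the thresholds and step sizes are assumptions), a down-speeding rule might cut the send bitrate when measured loss crosses a threshold and probe back up when the network is clean:

```python
# Illustrative down-speeding sketch driven by measured packet loss.
def adjust_bitrate(current_kbps: int, loss_pct: float,
                   min_kbps: int = 256, max_kbps: int = 4096) -> int:
    if loss_pct > 2.0:          # severe loss: cut bandwidth sharply
        return max(min_kbps, current_kbps // 2)
    if loss_pct >= 0.5:         # noticeable loss: back off gently
        return max(min_kbps, int(current_kbps * 0.85))
    return min(max_kbps, int(current_kbps * 1.05))  # clean network: probe upward slowly


bitrate = 2048
for loss in [0.0, 0.6, 3.1, 0.0]:
    bitrate = adjust_bitrate(bitrate, loss)
    print(f"loss={loss:>4}%  ->  {bitrate} kbps")
```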

You Might Be Wrong About Packet Loss

Packet loss, as it is presented in most video systems today, is very misleading. Most call reporting mechanisms focus on total packet loss, which for most calls is about as useful as reporting how long the call was.

The reason is that total packet loss for a one-hour call could read well under half a percent, yet for 5 minutes that call experienced 5% packet loss. The person responsible for delivering a high-quality video experience wants to know that for those 5 minutes, someone missed out on an otherwise great collaboration experience.


Rethinking Packet Loss

A more useful metric is current packet loss, or for historical analysis, peak packet loss. When you look at total packet loss, you drastically slant your results toward showing that every call was a pleasant experience. This is because most calls lose zero packets most of the time, since the network is usually doing a great job of transmitting packets.
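To show how total packet loss hides trouble, here is a small sketch mirroring the one-hour-call example above. The window size and sample data are made up for illustration:

```python
# Illustrative comparison of total vs. peak windowed packet loss.
# Each tuple is (packets_sent, packets_received) for one minute of a call.
def loss_pct(sent: int, received: int) -> float:
    return (sent - received) / sent * 100 if sent else 0.0


# A 60-minute call: clean except for a 5-minute incident with 5% loss.
windows = [(6000, 6000)] * 30 + [(6000, 5700)] * 5 + [(6000, 6000)] * 25

total_sent = sum(s for s, _ in windows)
total_received = sum(r for _, r in windows)
total_loss = loss_pct(total_sent, total_received)
peak_loss = max(loss_pct(s, r) for s, r in windows)

print(f"Total packet loss: {total_loss:.2f}%")  # ~0.42% -- looks like a fine call
print(f"Peak packet loss:  {peak_loss:.2f}%")   # 5.00% -- 5 minutes of a bad experience
```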

Thinking about total packet loss per call is fine if you are perfectly happy with most calls being an OK experience most of the time, but you have to ask yourself, "Why is no other metric measured this way?" You don't measure "total jitter" or "average resolution!" The reason is that packet loss was not set up as a quality metric with the user in mind; it was conceived with the network administrator as the end customer. An admin cares about packet loss for most applications because it means packets have to be resent, which slows down the network. In video, there is no resending. Packet loss means low-quality video, which means frustrated users. Therefore, in video, packet loss should be treated as a user-based metric, which means we should measure current packet loss, not total.
