Hi Jonathan,
On May 7, 2015, at 09:18, Jonathan Morton <[email protected]> wrote:

> It may depend on the application's tolerance to packet loss. A packet delayed
> further than the jitter buffer's tolerance counts as lost, so *IF* jitter is
> randomly distributed, jitter can be traded off against loss. For those
> purposes, standard deviation may be a valid metric.

All valid, but I think that the measured latency does not follow a normal distribution: it has a lower bound at the minimum RTT, set by the "speed of light" (I simplify), but no real upper bound (I think we have examples of several seconds). So standard deviation or confidence intervals might not be applicable (at least not formally).

Best Regards
	Sebastian

> However the more common characteristic is that delay is sometimes low (link
> idle) and sometimes high (buffer full) and rarely in between. In other words,
> delay samples are not statistically independent; loss due to jitter is
> bursty, and real-time applications like VoIP can't cope with that. For that
> reason, and due to your low temporal sampling rate, you should take the peak
> delay observed under load and compare it to the average during idle.
>
> - Jonathan Morton
> _______________________________________________
> Bloat mailing list
> [email protected]
> https://lists.bufferbloat.net/listinfo/bloat
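The distribution argument above can be sketched numerically. The following minimal Python model is illustrative only: the lognormal queueing tail, the 20 ms minimum RTT, and the sample count are assumptions, not measurements from the thread. It shows that a "mean ± 2·stdev" band, a normal-distribution tool, dips below the physical latency floor while still missing the heavy upper tail:

```python
# Hypothetical delay model (all numbers are illustrative assumptions):
# delay = hard lower bound (min RTT) + heavy-tailed queueing component.
import random
import statistics

random.seed(42)
MIN_RTT_MS = 20.0  # lower bound set by propagation delay ("speed of light")

# Lognormal queueing delay gives a bounded-below, unbounded-above sample.
samples = [MIN_RTT_MS + random.lognormvariate(1.0, 1.5) for _ in range(10_000)]

mean = statistics.mean(samples)
stdev = statistics.pstdev(samples)
p99 = sorted(samples)[int(0.99 * len(samples))]

# The lower edge of the "normal" band falls below the physical floor,
# so the symmetric-distribution summary misdescribes the data.
print(f"mean={mean:.1f} ms  stdev={stdev:.1f} ms")
print(f"mean - 2*stdev = {mean - 2 * stdev:.1f} ms (floor is {MIN_RTT_MS} ms)")
print(f"p99 = {p99:.1f} ms")
```

A percentile (or, per the quoted suggestion, the peak delay under load compared against the idle-time average) captures the tail that the standard deviation hides.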
