Hi Jonathan,
On May 6, 2015, at 22:25, Jonathan Morton <[email protected]> wrote:

> So, as a proposed methodology, how does this sound:
>
> Determine a reasonable ballpark figure for typical codec and jitter-buffer
> delay (one way). Fix this as a constant value for the benchmark.

But we can do better: assuming captive de-jitter buffers (and they had better
be), we can take the induced latency per direction as a first approximation
of the required de-jitter buffer size.

> Measure the baseline network delays (round trip) to various reference points
> worldwide.
>
> Measure the maximum induced delays in each direction.
>
> For each reference point, sum two sets of constant delays, the baseline
> network delay, and both directions' induced delays.

I think we should not count the de-jitter buffer and the actual PDV twice. As
far as I understand, the principle of de-jittering is to introduce a buffer
deep enough to smooth out the real variable packet latency, so at best we
should count max(induced latency per direction, de-jitter buffer depth per
direction); the induced latency (or a suitably high percentile, if we aim for
good enough instead of perfect) is the best estimator we have for the
jitter-induced delay. But this is not my line of work, so I could be out to
lunch here...

> Compare these totals to twice the ITU benchmark figures, rate accordingly,
> and plot on a map.

I like the map idea (and I think I have seen something like this recently,
visualizing propagation speed in fiber). Any map based purely on distance
along the earth's surface will give a lower bound, but that should still be
a decent estimate (unless something nefarious like
http://research.dyn.com/2013/11/mitm-internet-hijacking/ is going on, in
which case all bets are off ;) )

Best Regards
	Sebastian

> - Jonathan Morton

_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat
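P.S.: The delay-budget arithmetic discussed above can be sketched as follows. This is only a sketch of the proposed methodology; the constants (codec delay, the 150 ms one-way budget from ITU-T G.114, the ~200 km/ms propagation speed in fiber) and all example figures are illustrative assumptions, not measurements from this thread:

```python
# Illustrative sketch of the proposed delay-budget methodology.
# All constants and example numbers below are assumptions for illustration.

CODEC_DELAY_MS = 30.0          # assumed one-way codec + packetization delay
ITU_ONE_WAY_BUDGET_MS = 150.0  # ITU-T G.114 one-way mouth-to-ear target


def total_round_trip_ms(baseline_rtt_ms,
                        induced_up_ms, induced_down_ms,
                        dejitter_up_ms, dejitter_down_ms):
    """Estimate total round-trip delay for a VoIP-like flow.

    Per the discussion, the de-jitter buffer and the measured packet
    delay variation (PDV) should not be counted twice: the buffer
    exists to absorb that variation, so take the max of the two
    per direction rather than their sum.
    """
    jitter_term = (max(induced_up_ms, dejitter_up_ms)
                   + max(induced_down_ms, dejitter_down_ms))
    return baseline_rtt_ms + 2 * CODEC_DELAY_MS + jitter_term


def geodesic_rtt_lower_bound_ms(distance_km):
    """Lower bound on RTT from great-circle distance alone.

    Light in fiber travels at roughly 2/3 c, i.e. about 200 km/ms,
    so the round trip takes at least 2 * distance / 200 = distance / 100 ms.
    """
    return distance_km / 100.0


# Example: 80 ms baseline RTT, 40/60 ms max induced delay up/down,
# 50 ms de-jitter buffer each way.
total = total_round_trip_ms(80, 40, 60, 50, 50)
print(total, "ms vs", 2 * ITU_ONE_WAY_BUDGET_MS, "ms round-trip budget")
print(geodesic_rtt_lower_bound_ms(6000), "ms RTT floor over 6000 km")
```

With these example numbers the total comes to 250 ms against a 300 ms round-trip budget; for the map, the geodesic figure gives the lower bound mentioned above, which real paths will exceed.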
