On Fri, 8 May 2015, jb wrote:

I've made some changes and now this test displays the "PDV" column as
simply the recent average increase over the best latency seen, as the
best latency seen is usually pretty stable. (It should also work in
Firefox now.)

In addition, every 30 seconds a grade is printed next to a timestamp.
I know how we all like grades :) The grade is based on the average of all
the PDVs, and ranges from A+ (5 milliseconds or less) down to F for fail.
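A minimal sketch of that grading in Python. Only the A+ cutoff (5 ms or less) is stated above; the remaining thresholds and the function name are hypothetical placeholders, not the test's actual scale:

```python
# Hypothetical sketch: average all PDV samples and map to a letter grade.
# Only the A+ cutoff (<= 5 ms) comes from the post above; the other
# thresholds are made-up placeholders.
def grade(pdv_samples_ms):
    avg = sum(pdv_samples_ms) / len(pdv_samples_ms)
    for cutoff, letter in [(5, "A+"), (10, "A"), (20, "B"),
                           (40, "C"), (80, "D")]:
        if avg <= cutoff:
            return letter
    return "F"  # fail

print(grade([2, 3, 4]))   # low, stable PDV -> "A+"
print(grade([100, 150]))  # large delay variation -> "F"
```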

I'm not 100% happy with this PDV figure. A stellar connection - with no internet congestion - will show a low, stable number and an A+ grade. A connection with jitter will show a PDV that is half the average jitter amplitude. So far so good.

But a connection with almost no jitter, yet visibly higher than minimal latency, will show a failing grade. For a jitter / packet delay variation type test, I'm not sure about this situation. One could say it is a very good connection, but because it sits 30ms above a single revealed optimal ping it might get a "D". Not sure how common this state of things is, though.

This is why the grade should be based more on the ability of load to induce jitter (the additional latency under load) than on the absolute number.

100ms of buffer-induced latency on a 20ms connection should score far worse than 20ms of induced latency on a 100ms connection.

David Lang
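The induced-latency weighting above could be sketched like this (a hypothetical Python illustration; the function name is made up and the thread does not prescribe any particular formula):

```python
# Sketch of scoring the *induced* latency (latency under load minus
# idle latency) rather than the absolute latency. Purely illustrative.
def induced_latency_ms(idle_ms, loaded_ms):
    return max(0.0, loaded_ms - idle_ms)

# 100 ms of buffer-induced latency on a 20 ms link vs
# 20 ms of induced latency on a 100 ms link:
a = induced_latency_ms(20, 120)   # 100.0 -> should score far worse
b = induced_latency_ms(100, 120)  # 20.0  -> should score much better
print(a, b)
```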

Also, since it is a global test, a component of the grade is internet
backbone congestion, and not necessarily an ISP or equipment issue.


On Fri, May 8, 2015 at 9:09 AM, Dave Taht <[email protected]> wrote:

On Thu, May 7, 2015 at 3:27 PM, Dave Taht <[email protected]> wrote:
On Thu, May 7, 2015 at 7:45 AM, Simon Barber <[email protected]>
wrote:
The key figure for VoIP is maximum latency, or perhaps somewhere around
the 99th percentile. Voice packets cannot be played out if they are late,
so how late they are is the only thing that matters. If many packets are
early but more than a very small number are late, then the jitter buffer
has to adjust to handle the late packets. Adjusting the jitter buffer
disrupts the conversation, so ideally adjustments are infrequent. When
maximum latency suddenly increases it becomes necessary to increase the
buffer fairly quickly to avoid a dropout in the conversation. Buffer
reductions can be hidden by waiting for gaps in conversation. People get
used to the acoustic round-trip latency and learn how quickly to expect
a reply from the other person (unless latency is really too high), but
adjustments interfere with this learned expectation, making it hard to
interpret why the other person has paused. Thus adjustments to the
buffering should be as infrequent as possible.

Codel measures and tracks minimum latency in its inner 'interval' loop.
For VoIP the maximum is what counts. You can call it minimum + jitter,
but the maximum is the important thing (not the absolute maximum, since
a very small number of late packets are tolerable, but perhaps the 99th
percentile).
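The "99th percentile, not minimum" idea can be illustrated with a small sketch (hypothetical Python, not the codel or jitter-buffer code itself; the nearest-rank percentile method is just one common choice):

```python
# A jitter buffer sized to the 99th-percentile delay plays out all but
# ~1% of packets; anything later is dropped. Nearest-rank percentile:
# the smallest sample value covering p% of the samples.
def percentile(samples_ms, p):
    s = sorted(samples_ms)
    k = max(0, int(round(p / 100.0 * len(s))) - 1)
    return s[k]

delays = [20] * 98 + [25, 90]   # mostly 20 ms, one 25 ms, one 90 ms outlier
print(percentile(delays, 99))   # 25 -> buffer ~25 ms, drop the 90 ms packet
print(max(delays))              # 90 -> sizing to the absolute max over-buffers
```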

During a conversation it will take some time at the start to learn the
characteristics of the link, but ideally the jitter buffer algorithm will
quickly get to a place where few adjustments are made. The more
conservative the buffer (the higher the delay above minimum), the less
likely a future adjustment will be needed, hence a tendency towards
larger buffers (and more delay).

Priority queueing is perfect for VoIP, since it can keep the jitter at a
single hop down to the transmission time for a single maximum-size
packet. Fair queueing will often achieve the same thing, since VoIP
streams are often the lowest-bandwidth ongoing stream on the link.
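A quick worked example of that bound (the link rates here are just illustrative): with strict priority, a VoIP packet waits at most the serialization time of one maximum-size packet already on the wire.

```python
# Serialization delay of one packet: bytes * 8 bits, divided by the
# link rate, expressed in milliseconds.
def serialization_ms(bytes_, link_mbps):
    return bytes_ * 8 * 1e3 / (link_mbps * 1e6)

print(serialization_ms(1500, 1))    # 1500 B at 1 Mbit/s  -> 12.0 ms
print(serialization_ms(1500, 100))  # 1500 B at 100 Mbit/s -> ~0.12 ms
```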

Unfortunately this is more nuanced than that. Not for the first time
do I wish that email contained math, and/or that we had put together a
paper containing the relevant math. I do have a spreadsheet lying
around here somewhere...

In the case of a drop-tail queue, jitter is a function of the total
amount of data outstanding on the link from all the flows. A single
big fat flow experiencing a drop will reduce its buffer occupancy
(and thus its effect on other flows) by a lot on the next RTT, whereas
a lot of fat flows will drop by less if drops are few. Total delay
is the sum of all packets outstanding on the link.

In the case of stochastic packet-fair queueing, jitter is a function
of the number of bytes in each packet outstanding, summed over the
total number of flows. The total delay is the sum of the bytes
delivered per packet per flow.

In the case of DRR, jitter is a function of the total number of bytes
allowed by the quantum per flow outstanding on the link. The total
delay experienced by the flow is a function of the amount of bytes
delivered relative to the number of flows.

In the case of fq_codel, jitter is a function of the total number
of bytes allowed by the quantum per flow outstanding on the link,
with the sparse-flow optimization pushing flows with no queue
in the available window to the front. Furthermore
codel acts to shorten the lengths of the queues overall.

fq_codel's delay, when the newly arriving flow's packet can be serviced
in less time than the total of the flows' quantums, is a function
of the total number of flows that are not also building queues. When
the total service time for all flows exceeds both the interval the voip
packet is delivered in AND the quantum under which the algorithm
is delivering, fq_codel degrades to DRR behavior. (In other words,
given enough queueing flows or enough new flows, you can steadily
accrue delay on a voip flow under fq_codel.) Predicting jitter is
really hard to do here, but it is still pretty minimal compared to the
alternatives above.
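A back-of-envelope sketch of that DRR degradation (hypothetical Python; the quantum and rate values are illustrative, not from the thread): once every competing flow has a queue, a newly arriving VoIP packet can wait roughly one quantum's serialization time per flow.

```python
# Worst-case added delay when fq_codel degrades to DRR behavior:
# roughly n_flows quantums must be serviced before the new packet.
def drr_worst_case_ms(n_flows, quantum_bytes, link_mbps):
    return n_flows * quantum_bytes * 8 * 1e3 / (link_mbps * 1e6)

# 50 backlogged flows, 1514-byte quantum, 20 Mbit/s uplink:
print(round(drr_worst_case_ms(50, 1514, 20), 1))  # ~30.3 ms of accrued delay
```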

And to complexify it further: if the total flows' service time exceeds
the interval on which the voip flow is being delivered, the voip flow
can deliver an fq_codel quantum's worth of packets back to back.

Boy I wish I could explain all this better, and/or observe the results
on real jitter buffers in real apps.


In the above 3 cases, hash collisions permute the result. Cake and
fq_pie have far fewer collisions.

Which is not necessarily a panacea either. Perfect flow isolation
(cake) across hundreds of flows might in some cases be worse than
suffering hash collisions (fq_codel) for some workloads. sch_fq and
fq_pie have *perfect* flow isolation and I worry about the effects of
tons and tons of short flows (think ddos attacks) - I am comforted by
collisions! I tend to think there is an ideal ratio of flows
allowed without queue management versus available bandwidth that we
don't know yet, and that for larger numbers of flows we should be
inheriting more global environmental state (the state of the link and
all queues) than we currently do when initializing both cake and
fq_codel queues.

Recently I did some tests of 450+ flows (details on the cake mailing
list) against sch_fq, which got hopelessly buried (10,000 packets in
queue). cake and fq_pie did a lot better.

I am generally sanguine about this along the edge. From the internet,
packets cannot be easily classified, yet most edge networks have more
bandwidth in that direction and are thus able to fit WAY more flows in
under 10ms. Outbound, from the home or small business, some
classification can be effectively used in an X-tier shaper (or cake) to
ensure better priority (still with fair queuing) for this special
class of application - not that this is an issue under most home
workloads. We think. We really need to do more benchmarking of web and
dash traffic loads.

Simon

Sent with AquaMail for Android
http://www.aqua-mail.com

On May 7, 2015 6:16:00 AM jb <[email protected]> wrote:

I thought that would be more sane too. I see it mentioned online that
PDV is a gaussian distribution (around the mean), but it looks more like
half a bell curve, with most numbers near the lowest latency seen and
getting progressively worse with less frequency.
At least for DSL connections on good ISPs that scenario seems more
frequent: you "usually" get the best latency and "sometimes" get spikes
or fuzz on top of it.

By the way, after I posted I discovered Firefox has an issue with this
test, so I had to block it with a message; my apologies if anyone wasted
time trying it with FF. Hopefully I can figure out why.


On Thu, May 7, 2015 at 9:44 PM, Mikael Abrahamsson <[email protected]>
wrote:

On Thu, 7 May 2015, jb wrote:

There is a web socket based jitter tester now. It is very early stage
but
works ok.

http://www.dslreports.com/speedtest?radar=1

So the latency displayed is the mean latency from a rolling 60-sample
buffer; minimum latency is also displayed. The +/- PDV value is the mean
difference between sequential pings in that same rolling buffer. It is
quite similar to the std. dev, actually (not shown).
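Those three figures could be computed along these lines (a hypothetical Python sketch of the description above, not the actual test's JavaScript):

```python
from collections import deque

# Mean and minimum over a rolling 60-sample buffer, and "+/- PDV" as
# the mean absolute difference between consecutive pings in the buffer.
def stats(rtts_ms, window=60):
    buf = deque(rtts_ms, maxlen=window)
    mean = sum(buf) / len(buf)
    pdv = sum(abs(b - a) for a, b in zip(buf, list(buf)[1:])) / (len(buf) - 1)
    return mean, min(buf), pdv

print(stats([20, 22, 20, 26]))  # mean 22.0, min 20, PDV ~3.33
```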


So I think there are two schools here: either you take the average and
display +/- from that, or - and I think I prefer this - you take the
lowest of the last 100 samples (or something) and then display PDV from
that "floor" value, i.e. PDV can't ever be negative, it can only be
positive.
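That floor-based alternative is simple to sketch (hypothetical Python illustration; the function name is made up):

```python
# Take the lowest RTT in the last N samples as the floor and report
# each sample's PDV as its excess over that floor, so PDV >= 0 always.
def floor_pdv(rtts_ms, window=100):
    recent = rtts_ms[-window:]
    floor = min(recent)
    return [r - floor for r in recent]

print(floor_pdv([20, 23, 20, 31]))  # [0, 3, 0, 11]
```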

Apart from that, the above multi-place RTT test is really really nice,
thanks for doing this!


--
Mikael Abrahamsson    email: [email protected]


_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat






--
Dave Täht
Open Networking needs **Open Source Hardware**

https://plus.google.com/u/0/+EricRaymond/posts/JqxCe2pFr67




