Jim Gettys wrote:
On Thu, May 4, 2017 at 6:22 AM, Andy Furniss <[email protected]> wrote:

Jim Gettys wrote:

On Wed, May 3, 2017 at 5:50 AM, Andy Furniss <[email protected]>
wrote:

Andy Furniss wrote:

Andy Furniss wrote:

b) it reacts to increase in RTT. An experiment with 10 Mbps
bottleneck,

40 ms RTT and a typical 1000 packet buffer, increase in RTT
with BBR is ~3 ms while with cubic it is over 1000 ms.


That is a nice aspect (though at 60mbit hfsc + 80ms bfifo, tested with
5 tcps, it was IIRC 20ms vs 80ms for cubic). I deliberately test using
ifb on my PC because I want to pretend to be a router - IME (OK, it
was a while ago) testing on eth directly gives different results, like
the locally generated tcp backing off and skewing the numbers.


I retested this with 40ms latency (netem) with hfsc + 1000 pfifo
on ifb.


So, as Jonathan pointed out to me in another thread, bbr needs fq, and
it seems fq only works on the root of a real eth, which means they
are invalid tests.


Specifically, BBR needs packet pacing to work properly: the
algorithm depends on the packets being properly paced.

Today, fq is the only qdisc supporting pacing.
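For reference, enabling fq pacing for BBR on a sender typically looks
something like the sketch below (the interface name eth0 is an
assumption; adjust for the actual egress NIC):

```shell
# fq provides the per-flow pacing that BBR relies on;
# install it as the root qdisc on the sender's real interface.
tc qdisc replace dev eth0 root fq

# Select BBR as the TCP congestion control algorithm.
sysctl -w net.ipv4.tcp_congestion_control=bbr
```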

The right answer would be to add packet pacing to cake/fq_codel
directly. Until that is done, we don't know how BBR will work in our
world. - Jim​


I guess you mean so cake could be used on the egress of the sender (in
place of fq)?


Yes.


That's not really the test that I intend to do, which is more like -

[boxA bbr+fq] -> [boxB simulate ISP buffer] -> [boxC cake ingress shape]
a bit lower than "line" rate and see how much "ISP" buffer gets filled.

Also compare bbr, cubic and netem different rtts etc.
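That three-box topology could be set up roughly as below. This is only
a sketch: interface names, the 18/16 mbit rates, and the 1000-packet
buffer are assumptions pulled from figures mentioned later in the
thread, and boxB uses htb + pfifo as a simple stand-in for a dumb ISP
buffer.

```shell
# boxA (sender): BBR with fq pacing on egress
tc qdisc replace dev eth0 root fq
sysctl -w net.ipv4.tcp_congestion_control=bbr

# boxB (ISP sim): shape to "line" rate into a big dumb buffer,
# e.g. htb at 18mbit feeding a 1000-packet pfifo
tc qdisc add dev eth0 root handle 1: htb default 1
tc class add dev eth0 parent 1: classid 1:1 htb rate 18mbit
tc qdisc add dev eth0 parent 1:1 pfifo limit 1000

# boxC (receiver): cake ingress shaping a bit below line rate,
# via ifb so it applies to inbound traffic
ip link add ifb0 type ifb
ip link set ifb0 up
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb0
tc qdisc add dev ifb0 root cake bandwidth 16mbit ingress
```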


Ok.  The usual warnings about netem being dangerous apply, though netem
can be useful if run on a separate machine.  Netem is an attractive
nuisance, and has caused lots of results to be ultimately useless...
Be careful.
                               - Jim

Yea, I saw the warning about netem on the website - tricky as I would
need 4 boxes to really isolate it, and I've only got three, so it's not
ideal.

I tested with it delaying acks on the ingress of the sender, which had
fq on egress. Also did some tcpdumps with different setups to see if it
was clumping acks - it seemed smooth.
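A sketch of that ack-delay arrangement (interface name and module
options are assumptions; the 40ms matches the rtt used below):

```shell
# Bring up an ifb device to hang netem off
modprobe ifb numifbs=1
ip link set ifb0 up

# Redirect the sender's ingress (the returning acks) to ifb0
tc qdisc add dev eth0 handle ffff: ingress
tc filter add dev eth0 parent ffff: matchall \
    action mirred egress redirect dev ifb0

# Delay the acks by 40ms to simulate path RTT
tc qdisc add dev ifb0 root netem delay 40ms
```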

With this setup my results are much the same as before: bbr is harder
to shape on ingress than cubic, and the longer the rtt to the sender
the worse it is.

I was testing an 18mbit dsl sim with mainly 16mbit ingress. Repeatedly
grabbing 1 or 2 meg files (like dash) spikes latency every time.
With an rtt of 40ms, unshaped it will burst to 80+ms; 16mbit behind
18mbit spikes to 50ms. IIRC to get spikes below 20ms I needed 12mbit,
which is too low.

For continuous downloads, after the initial spike a single tcp wasn't
too bad; even 5 can be controlled latency-wise at 16mbit.

The problem with 5 vs 1 is the number of drops: 1 is not too bad, but
5 will drop 1 in 10, so tcp ends up sacking almost per packet, which is
not good for those with limited upstream.

ECN marks more packets than would be dropped (almost all in the 5-flow
case), so it also causes many upstream acks.

This is with cake's ingress parameter and the default rtt of 100ms.

Changing to 300ms does reduce the drops/marks, as in my previous post,
but it makes controlling the startup spikes slightly less effective -
though without going really low they weren't controlled very well
anyway.
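For comparison, the two cake configurations being discussed would look
something like this (ifb0 and the 16mbit rate are assumptions carried
over from the setup above):

```shell
# Default: cake's AQM assumes a typical path RTT of ~100ms
tc qdisc replace dev ifb0 root cake bandwidth 16mbit ingress

# Relaxed: rtt 300ms reduces drops/marks, at the cost of reacting
# more slowly to startup bursts
tc qdisc replace dev ifb0 root cake bandwidth 16mbit ingress rtt 300ms
```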

_______________________________________________
Cake mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cake
