xnor wrote:
>> On Fri, 28 Apr 2017, xnor wrote:
>>> As I understand it, increase in RTT due to queueing of packets is
>>> the main feedback mechanism for BBR. So dropping packets, which I
>>> already consider harmful, is really harmful with BBR because
>>> you're not telling the sender to slow down.
>>
>> If BBR does not slow down when packets are dropped, it's too
>> hostile to use on a public network. The only way for a public
>> network to respond to a flood of traffic higher than it can
>> handle is to drop packets (with a possible warning via ECN shortly
>> before packets get dropped). If BBR doesn't slow down, it's just
>> going to be wasting bandwidth.
>
> No it isn't. Packet loss does not equal congestion - it never did.
> Dropping packets to signal congestion is an ugly hack for
> implementations that are too dumb to understand any proper congestion
> control mechanism.
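For illustration only, here's a toy delay-based window update in the spirit of what I take "proper mechanism" to mean - react to RTT inflation rather than to drops. The names and thresholds are made up, and this is NOT BBR's actual algorithm:

```python
# Toy delay-based congestion window update (Vegas-flavoured sketch, NOT BBR):
# treat RTT inflation above the minimum observed RTT as the congestion
# signal instead of packet loss.
def update_cwnd(cwnd, rtt, min_rtt, alpha=1.0, beta=0.8):
    """Return the new congestion window given one RTT sample.

    cwnd    -- current window (packets)
    rtt     -- latest RTT sample (seconds)
    min_rtt -- lowest RTT seen, i.e. the propagation delay estimate
    """
    queuing_delay = rtt - min_rtt
    if queuing_delay > 0.25 * min_rtt:  # queue building up: back off
        return max(2.0, cwnd * beta)
    return cwnd + alpha                 # path looks idle: probe upward

cwnd = 10.0
cwnd = update_cwnd(cwnd, rtt=0.041, min_rtt=0.040)  # tiny queue: grow
cwnd = update_cwnd(cwnd, rtt=0.080, min_rtt=0.040)  # big queue: shrink
print(cwnd)
```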
Hmm, I bet a lot of carrier links are policed rather than smart-queued.
It also seems (OK, one quick, possibly flawed test) that bbr ignores ECN
as well as drops, in the sense that marked is just as high as dropped.
> Dropping due to queues being full normally doesn't happen before the
> RTT has significantly increased.
Not so good on ingress though - where normally (i.e. non-bbr) a drop is
handy as an early signal to back off. Letting a buffer fill up too much
also somewhat fills the remote buffer that you would like to keep empty.
> BBR fixes both of these problems: a) it ignores packet loss on
> unreliable links and therefore achieves much higher throughput than
> conventional congestion control algorithms (which wrongly assume
> congestion on packet loss). An experiment with a 10 Gbps bottleneck,
> 100 ms RTT and 1% packet loss (as described in the net-next commit)
> showed ~3000x higher throughput with BBR compared to cubic.
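For what it's worth, the cubic side of that experiment is roughly what the classic Mathis approximation (BW <= (MSS/RTT) * C/sqrt(p)) predicts for a loss-based algorithm - a quick sketch (the 1460-byte MSS is my assumption, not from the commit):

```python
# Rough sanity check of the loss-limited throughput ceiling using the
# Mathis approximation: BW <= (MSS / RTT) * C / sqrt(p).
# The 10 Gbps / 100 ms / 1% loss numbers come from the experiment above;
# C ~ 1.22 is the constant for periodic loss.
from math import sqrt

MSS = 1460 * 8        # segment payload in bits (assumed)
RTT = 0.100           # seconds
p = 0.01              # packet loss rate
C = sqrt(3.0 / 2.0)   # ~1.22

loss_limited_bps = (MSS / RTT) * C / sqrt(p)
bottleneck_bps = 10e9

print(f"loss-limited ceiling: {loss_limited_bps / 1e6:.2f} Mbit/s")
print(f"headroom over a loss-based sender: ~{bottleneck_bps / loss_limited_bps:.0f}x")
```

That puts a loss-based sender around 1.4 Mbit/s on that path, i.e. thousands of times below the 10 Gbps bottleneck - the same order of magnitude as the commit's ~3000x figure.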
So on a congested policed link it will make matters worse for "normal"
tcp users - maybe everyone will need to use it soon.
> b) it reacts to increase in RTT. An experiment with a 10 Mbps
> bottleneck, 40 ms RTT and a typical 1000 packet buffer showed an
> increase in RTT of ~3 ms with BBR, while with cubic it is over 1000 ms.
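The cubic number there is just the buffer arithmetic - a full 1000-packet drop-tail queue at 10 Mbps holds over a second of data (1500-byte packets assumed on my part):

```python
# Worst-case queueing delay added by a full drop-tail buffer:
# delay = buffered bits / bottleneck rate.  Assumes 1500-byte packets.
buffer_pkts = 1000
pkt_bits = 1500 * 8
rate_bps = 10e6            # 10 Mbit/s bottleneck

queue_delay_s = buffer_pkts * pkt_bits / rate_bps
print(f"added RTT with a full buffer: {queue_delay_s * 1000:.0f} ms")
# 1200 ms is consistent with "over 1000 ms" for cubic, which keeps the
# buffer full; BBR's ~3 ms means it keeps the queue nearly empty.
```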
That is a nice aspect (though at 60 Mbit hfsc + 80 ms bfifo, testing
with 5 tcps, it was IIRC 20 ms vs 80 for cubic). I deliberately test
using ifb on my PC because I want to pretend to be a router - IME (OK,
it was a while ago) testing on eth directly gives different results,
as the locally generated tcp backs off and skews the numbers.
_______________________________________________
Cake mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cake