> On 22 Apr, 2015, at 15:02, jb <[email protected]> wrote:
> 
> ...data is needed to shrink the window to a new setting, instead of slamming 
> it shut by setsockopt

I believe that is RFC-compliant behaviour; a receiver is not supposed to renege 
on an advertised TCP receive window.  So Linux holds the right edge of the 
advertised window (rwin) in place until the window has shrunk to the new 
setting.
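To make the mechanism concrete, here is a minimal sketch (not from the original 
mail) of shrinking a socket's receive buffer with setsockopt().  The kernel 
accepts the new size immediately, but, per the behaviour described above, the 
window advertised on the wire only shrinks as the peer's data consumes the 
previously advertised space:

```python
import socket

# Request a smaller receive buffer on a fresh TCP socket.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)

# Note: Linux roughly doubles the requested value internally to
# account for bookkeeping overhead, so getsockopt() reads back more
# than was asked for.
actual = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
s.close()
```

The setsockopt() call returns immediately; the gradual on-the-wire shrink is 
invisible here and only shows up in a packet capture.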

> By the way, is there a selectable congestion control algorithm available that 
> is sensitive to an RTT that increases dramatically? 

Vegas and LEDBAT do this explicitly; Vegas is old, and LEDBAT isn’t yet 
upstream but can be built against an existing kernel.  Some other TCPs 
incorporate RTT into their control law (e.g. Westwood+, Illinois and 
Microsoft’s CompoundTCP), but they won’t actually stop growing the congestion 
window on that basis.  Westwood+ uses RTT and bandwidth to determine what 
congestion window size to set *after* receiving a conventional congestion 
signal, while Illinois uses increasing RTT as a signal to *slow* the growth of 
cwnd, thus spending more time *near* the BDP.
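As an aside on “selectable”: on Linux the algorithm can be chosen per socket 
via the TCP_CONGESTION option, provided the named module is available (see 
/proc/sys/net/ipv4/tcp_available_congestion_control).  A hedged sketch:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    # Ask for Vegas on this socket only; requires the tcp_vegas module.
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"vegas")
except OSError:
    pass  # module not loaded on this machine; the default stays in force

# Read back whichever algorithm is actually in effect.
algo = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16)
algo = algo.split(b"\x00", 1)[0].decode()
s.close()
```

If the module isn’t loaded, the read-back typically reports the system default 
(usually cubic) rather than failing.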

Both Vegas and LEDBAT, however, compete very unfavourably with conventional 
senders sharing the same link (for Vegas, there’s a contemporary paper showing 
this against Reno), which is why they aren’t widely deployed.  LEDBAT is, 
however, used as part of uTP (i.e. BitTorrent’s UDP transport) specifically 
for its “background traffic” properties.
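The unfairness falls straight out of the Vegas control law, which can be 
sketched roughly as below (constants and names are illustrative, not kernel 
code): Vegas estimates how many of its own packets are sitting in queues and 
backs off as soon as that exceeds a small threshold, while a loss-based 
competitor keeps pushing until the buffer overflows.

```python
ALPHA, BETA = 2, 4  # target bounds on estimated queued packets

def vegas_adjust(cwnd, base_rtt, rtt):
    # Estimated number of this flow's packets queued in the network,
    # derived from the gap between expected and actual throughput.
    diff = cwnd * (rtt - base_rtt) / rtt
    if diff < ALPHA:
        return cwnd + 1   # link looks underused: grow
    if diff > BETA:
        return cwnd - 1   # queue building: back off (Reno wouldn't)
    return cwnd

grow = vegas_adjust(10, 0.05, 0.05)    # no queueing delay yet
shrink = vegas_adjust(100, 0.05, 0.10) # RTT doubled by queueing
```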

Westwood+ does compete fairly with conventional TCPs and works well with AQM, 
since it avoids the sawtooth of under-utilisation that Reno shows, but it has a 
tendency to underestimate the cwnd on exiting the slow-start phase.  On extreme 
LFNs (long fat networks), this can result in an extremely long time to converge 
on the correct BDP.
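The Westwood+ idea described above can be sketched as follows: on a 
conventional congestion signal, set cwnd from the measured bandwidth estimate 
and the minimum observed RTT (i.e. the estimated BDP), rather than blindly 
halving.  Names and numbers here are illustrative, not kernel code:

```python
MSS = 1460  # segment size in bytes, typical for Ethernet paths

def westwood_cwnd_after_loss(bw_estimate_bps, rtt_min_s):
    """cwnd (in segments) ~= estimated BDP at the congestion event."""
    bdp_bytes = bw_estimate_bps / 8 * rtt_min_s
    return max(2, int(bdp_bytes / MSS))

# e.g. a 10 Mbit/s path with 50 ms minimum RTT:
cwnd = westwood_cwnd_after_loss(10e6, 0.050)
```

An underestimate of bw_estimate_bps at slow-start exit feeds directly into an 
undersized cwnd here, which is the convergence problem mentioned above.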

Illinois is also potentially interesting, because it makes an effort to avoid 
filling buffers quite as quickly as most.  By contrast, CUBIC sets its 
inflection point at the cwnd where the previous congestion signal was received, 
so it lingers near that level and then resumes filling the buffer.

CompoundTCP is described roughly as using a cwnd that is the sum of the results 
of running Reno and Vegas.  So there is a region of operation where the Reno 
part is increasing its cwnd while the Vegas part is decreasing it, resulting in 
a roughly constant overall cwnd in the vicinity of the BDP.  I don’t know 
offhand how well it works in practice.
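That cancellation region can be illustrated in a couple of lines (a rough 
sketch of the idea as described above, not Microsoft’s actual algorithm): the 
send window is the sum of a loss-based Reno-like cwnd and a delay-based 
Vegas-like dwnd, and near the BDP the two components move in opposite 
directions.

```python
def compound_window(reno_cwnd, delay_wnd):
    # Overall send window: loss-based plus delay-based component;
    # the delay component never goes negative.
    return reno_cwnd + max(0, delay_wnd)

# Near the BDP: Reno grows +1 per RTT while queueing delay
# drains the delay window -1 per RTT, so the sum holds steady.
w1 = compound_window(20, 10)
w2 = compound_window(21, 9)
```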

The fact remains, though, that most servers use conventional TCPs, usually 
CUBIC (if Linux-based) or Compound (if Microsoft).

One interesting theory is that it’s possible to detect whether FQ is in use on 
a link by observing whether Vegas competes on equal terms with a conventional 
TCP or not.

 - Jonathan Morton

_______________________________________________
Bloat mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/bloat