Hi,

Wasn't there a burst limit included in FreeBSD TCP at some point?

Today I observed the following scenario: the client is a Windows host with 
receive window autotuning enabled; the server is FreeBSD 10.4; latency is around 
20 ms (bufferbloated; unloaded latency around 1-2 ms). The path spans about 5 
links at 10G, 25G and 40G (the server attached at 10G), while the client is on a 
100 Mbit link.

For the longest time the client signals a receive window of around 350 kB, which 
the server fully consumes for transmitting data (effective throughput 
< 100 Mbit) once the growth of cwnd hits rwnd. Then, after a couple of seconds, 
the client signals an increase of the receive window by a factor of ~2 (to 
~700 kB), roughly where cwnd would have grown in the background had the 
transmission rate not been clamped by rwnd.

Thus the server sends a line-rate (10G) burst of about 300 kB (cwnd had not yet 
grown enough to cover the entire new rwnd), which gets queued up (and tail 
dropped...), increasing the effective latency to about 40 ms...
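For reference, the added delay matches a simple drain-time calculation (a
minimal back-of-the-envelope sketch; the 300 kB and 100 Mbit figures are the
ones from the scenario above):

```python
# How long does the client's 100 Mbit bottleneck need to drain a ~300 kB
# line-rate burst that got queued up in front of it?
burst_bytes = 300 * 1000          # ~300 kB burst from the server
bottleneck_bps = 100 * 1000**2    # 100 Mbit/s client link

drain_ms = burst_bytes * 8 / bottleneck_bps * 1000
print(f"queue drain time: {drain_ms:.0f} ms")  # ~24 ms on top of ~20 ms base latency
```

which lines up with the observed jump from ~20 ms to ~40 ms effective latency.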


I just looked at RFC 5681 and RFC 2861 - but apparently receive window 
autotuning is too new a technology for those.

Linux does have a burst-clamping heuristic implemented to prevent a humongous 
burst of data from being sent when, for whatever reason, cwnd would suddenly 
allow it (rwnd clamping and later increasing, application-limited operation, 
...). But apparently that never made it into any spec?
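For illustration, such a clamp could look roughly like this - a purely
hypothetical sketch, not the actual Linux or FreeBSD code; the function and
parameter names are mine, and the 4-segment default just mirrors the old
optional TCP_MAXBURST-style limits:

```python
def clamp_send_quota(cwnd, rwnd, inflight, mss, max_burst_segs=4):
    """Hypothetical burst clamp: never release more than max_burst_segs
    full segments in one send opportunity, no matter how much window
    suddenly opened up (e.g. after an abrupt rwnd increase)."""
    window = min(cwnd, rwnd)
    quota = max(window - inflight, 0)      # bytes the window would permit
    return min(quota, max_burst_segs * mss)  # ...but capped per opportunity

# In the scenario above: rwnd jumps to ~700 kB with ~350 kB in flight and
# cwnd at ~650 kB; the clamp releases at most 4 segments (~5.8 kB) per ACK
# instead of a ~300 kB line-rate burst.
print(clamp_send_quota(650_000, 700_000, 350_000, 1448))
```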

IMHO, limiting the maximum burst size per RTT, or doing slow start when cwnd 
suddenly allows an "excessive" amount of data to be sent, would be the sensible 
thing to do - but what's your view on this?
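The slow-start variant I have in mind could be sketched like this - again with
purely hypothetical names and thresholds, just to make the proposal concrete:

```python
def on_window_jump(cwnd, inflight, mss, ssthresh, burst_limit_segs=4):
    """Hypothetical reaction to a suddenly-opened window: if the usable
    window exceeds what is in flight by more than a burst-limit worth of
    segments, remember the old cwnd in ssthresh and restart growth from
    just above the in-flight amount, rather than blasting out the
    difference at line rate."""
    if cwnd - inflight > burst_limit_segs * mss:
        ssthresh = cwnd                            # remember where we were
        cwnd = inflight + burst_limit_segs * mss   # grow again from here
    return cwnd, ssthresh

# Scenario above: cwnd ~650 kB, ~350 kB in flight -> cwnd pulled back to
# in-flight plus 4 segments, with ssthresh marking the previous cwnd.
print(on_window_jump(650_000, 350_000, 1448, 100_000))
```

This is close in spirit to the RFC 2861 congestion window validation idea,
applied to rwnd-induced jumps rather than idle periods.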

Best regards,

Richard Scheffenegger

_______________________________________________
freebsd-transport@freebsd.org mailing list
https://lists.freebsd.org/mailman/listinfo/freebsd-transport
To unsubscribe, send any mail to "freebsd-transport-unsubscr...@freebsd.org"
