FWIW, we recently noticed a similar issue with our CUBIC senders after
we enabled FQ on them (4.15 kernel).
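(For reference, we enable fq with something along these lines; the
interface name here is just an example:)

# tc qdisc replace dev eth0 root fq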

Disabling ACK-train detection in HyStart [1] did fix the problem:

# echo 2 > /sys/module/tcp_cubic/parameters/hystart_detect

[1]
https://github.com/torvalds/linux/blob/master/net/ipv4/tcp_cubic.c#L76-L77
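For anyone reproducing this: hystart_detect is a bitmask, with 0x1 =
ACK-train detection and 0x2 = delay detection (the defines at the lines
referenced in [1]); the default is 3, i.e. both enabled, so writing 2
leaves only delay-based detection. You can check the current value first:

# cat /sys/module/tcp_cubic/parameters/hystart_detect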

> The receiver (as in the nginx proxy in the dumps) is actually running fq
> qdisc with BBR on kernel 4.9. Could that explain what you are seeing?
> 
> Changing it to cubic does not change the resulting throughput though, and
> it also was not involved at all in the Windows 10 -> linux router -> Apache
> server tests, which also give the same 23-ish MB/s with pacing.
> 
> 
> On 26 January 2017 at 21:54, Eric Dumazet <eric.dumazet at gmail.com> wrote:
> 
>> It looks like the receiver is seriously limiting receive window in the
>> "pacing" case.
>>
>> Only after 30 Mbytes are transferred, it finally increases it, from
>> 359168 to 1328192
>>
>> DRS is not working as expected. Again maybe related to HZ value.
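
(For completeness: receive-buffer autotuning on Linux is gated by
net.ipv4.tcp_moderate_rcvbuf and bounded by net.ipv4.tcp_rmem, so if
DRS is suspected it is worth checking both on the receiver, e.g.:)

# sysctl net.ipv4.tcp_moderate_rcvbuf net.ipv4.tcp_rmem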
