On Sat, Jul 21, 2018 at 10:28 AM Georgios Amanakis <[email protected]> wrote:
>
> The previous one was with:
> net.ipv4.tcp_congestion_control=cubic
>
> I retried with:
> net.ipv4.tcp_congestion_control=reno
>
> Georgios
In the fast test this has no effect on the remote server's TCP; it's always going to be reno. Trying to cross-check behavior using our tests... As best I recall there isn't a specific reno-setting test for tcp_download in flent, so I was just calling

netperf -H wherever -l 60 -- -K reno,reno

and then running the flent ping test as previously mentioned. (flent-fremont.bufferbloat.net and flent-newark both support reno, bbr, and cubic; I haven't checked the others.)

PS: A side note is that we are not fully succeeding in moving the inbound bottleneck to cake (at least in the cable case), as we still get quite a bit of queuing delay even with Linux TCP driving the tests. I'd long written this off as inevitable, due to the bursty cable MAC, but I'm grumpy this morning. Zero delay via fq would be better than even the 15-40ms I'm getting now with Linux flows.

--
Dave Täht
CEO, TekLibre, LLC http://www.teklibre.com
Tel: 1-669-226-2619
_______________________________________________
Cake mailing list
[email protected]
https://lists.bufferbloat.net/listinfo/cake
