On 1.02.2023 at 20:33, Paul Mather wrote:
It looks like we may have a winner, folks. I built and enabled the extra TCP stacks, and for the first time I was able to max out my connection to the remote FreeBSD system. I get consistently higher throughput over the 15-hop WAN path to the remote FreeBSD system when using the RACK TCP stack than when using the default "freebsd" stack.

Although the speeds are consistently higher with "net.inet.tcp.functions_default=rack", they are still variable. However, rather than the 3--4 MB/s that kicked off this thread, I now average over 10 MB/s.

I actually get the best results with "net.inet.tcp.functions_default=bbr" (having loaded tcp_bbr). That behaves very much like the Linux hosts, in that speeds climb very quickly until they saturate the WAN connection. I get the same high speeds from the remote FreeBSD system using tcp_bbr as I do to the Linux hosts. I will stick with tcp_bbr for now as the default on my remote FreeBSD servers. It appears to put them on a par with Linux for this WAN link.
Thanks for the feedback, Paul. Please bear in mind that BBR v1, which is implemented in FreeBSD, is not a fair[1] congestion control algorithm. Maybe in the future we will have BBR v2 in the stack, but for now I don't recommend using BBR, unless you want to act slightly as a, hm... network leecher. Maybe the Linux hosts behave this way, maybe they have implemented BBR v2; I am not familiar with Linux TCP stack enhancements. On the other hand, tcp_rack(4) is performant, well tested in the FreeBSD stack, considered fair, and more acceptable for a fileserver, though not ideal, i.e. it is probably more computationally expensive and still missing some features like TCP-MD5.
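For anyone following along, switching to the RACK stack looks roughly like the sketch below. This assumes a kernel built with the TCP high-precision timer system ("options TCPHPTS"), which tcp_rack(4) requires on older releases; paths and exact knobs may differ on your version, so treat it as a starting point rather than a recipe:

```shell
# Load the RACK TCP stack module into the running kernel:
kldload tcp_rack

# Check which TCP stacks are now available:
sysctl net.inet.tcp.functions_available

# Make RACK the default stack for new TCP connections:
sysctl net.inet.tcp.functions_default=rack

# To persist across reboots (assumed placement, adjust to taste):
#   in /boot/loader.conf:   tcp_rack_load="YES"
#   in /etc/sysctl.conf:    net.inet.tcp.functions_default=rack
```

Existing connections keep the stack they were created with; only new connections pick up the changed default.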
[1] https://www.mdpi.com/1424-8220/21/12/4128

Cheers
--
Marek Zarychta
