On Feb 1, 2023, at 3:14 PM, Marek Zarychta <[email protected]> 
wrote:

> On 1.02.2023 at 20:33, Paul Mather wrote:
>> It looks like we may have a winner, folks.  I built and enabled the extra 
>> TCP stacks and for the first time was able to max out my connection to the 
>> remote FreeBSD system.  I get consistently higher throughput over the 15-hop 
>> WAN path to the remote FreeBSD system when using the RACK TCP stack than 
>> when using the default "freebsd" stack.
>> 
>> Although the speeds are consistently higher when using the setting 
>> "net.inet.tcp.functions_default=rack", they are still variable.  However, 
>> rather than the 3--4 MB/s I saw that kicked off this thread, I now average 
>> over 10 MB/s.
>> 
>> I actually get the best results with "net.inet.tcp.functions_default=bbr" 
>> (having loaded tcp_bbr).  That behaves very much like the Linux hosts in 
>> that speeds climb very quickly until it saturates the WAN connection.  I get 
>> the same high speeds from the remote FreeBSD system using tcp_bbr as I do to 
>> the Linux hosts.  I will stick with tcp_bbr for now as the default on my 
>> remote FreeBSD servers.  It appears to put them on a par with Linux for this 
>> WAN link.
> Thanks for the feedback, Paul. Please bear in mind that BBR v1, which is
> implemented in FreeBSD, is not a fair [1] congestion control algorithm.
> Maybe in the future we will have BBR v2 in the stack, but for now I don't
> recommend using BBR unless you want to act slightly as a, hm... network
> leecher. Maybe the Linux hosts behave this way, or maybe they have
> implemented BBR v2; I am not familiar with Linux TCP stack enhancements.
> On the other hand, tcp_rack(4) is performant, well tested in the FreeBSD
> stack, considered fair, and more acceptable for a file server, though not
> ideal, i.e. it is probably more computationally expensive and still missing
> some features like TCP-MD5.
> 
> 
> [1] https://www.mdpi.com/1424-8220/21/12/4128
> 

That is a fair and astute observation, Marek.  I am also not familiar with the 
Linux TCP stack implementations, but it had occurred to me that maybe Linux was 
not being an entirely good netizen, whereas FreeBSD was behaving with impeccable 
net manners when it came to congestion control and fairness to others, and that 
is why Linux was getting faster speeds for me.  Then again, perhaps not. :-)

The remote FreeBSD hosts I use at $JOB have a low number of users and are more 
akin to endpoints than servers, so I'm not worried about "leeching" from them.  
Also, my ISP download bandwidth is 1/5th that of each FreeBSD system, so 
hopefully there is still plenty to go around after I max out my bulk downloads.  
(Plus, I believe $JOB prefers my downloads to take half [or less] the time.) :-)

Hopefully we will get BBR v2 (or something even fairer) at some point.  IIRC, 
the FreeBSD Foundation has been highlighting some of this network stack work.  
It would be a pity for these stacks not to be enabled by default, so that more 
people could use them on -RELEASE without building a custom kernel.  I'm just 
glad right now that I'm not stuck with 3--4 MB/s downloads any more.
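
For anyone following the thread who wants to try the same thing, this is 
roughly what I did on my 13.x systems to get the extra stacks built and 
enabled.  The option and file names below are from memory, so please 
double-check them against tcp_rack(4), tcp_bbr(4), and the build documentation 
before relying on them:

  # /etc/src.conf -- build the extra TCP stack modules with world/kernel
  WITH_EXTRA_TCP_STACKS=1

  # custom kernel config -- RACK and BBR need the high-precision timer system
  options TCPHPTS

  # /boot/loader.conf -- load the stack modules at boot
  tcp_rack_load="YES"
  tcp_bbr_load="YES"

  # /etc/sysctl.conf (or set with sysctl(8) at runtime) -- pick the default
  # stack for new connections; use "bbr" here instead to select the BBR stack
  net.inet.tcp.functions_default=rack

  # To confirm which stacks the kernel has registered:
  #   sysctl net.inet.tcp.functions_available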

Cheers,

Paul.