On Jan 30, 2023, at 5:25 PM, [email protected] wrote:

> On 1/30/23 14:17, Paul Mather wrote:
>> TL;DR: When working from home, I can max out my residential 200 Mbit network 
>> connection when downloading from remote Linux hosts at $JOB but only manage 
>> about 20% of my max residential connection speed when downloading from 
>> remote FreeBSD hosts at $JOB.  When at $JOB, both FreeBSD and Linux hosts 
>> have no problem saturating their GbE connections transferring between each 
>> other.  Why is this and how can I debug and fix it?
>> I have a 200 Mbit residential cable connection (Xfinity, 200 Mbit down/~10 
>> Mbit up).  I've noticed recently that I can easily get 10--20 MB/s download 
>> speeds when transferring data from Linux hosts at work but when I try to 
>> download that same data from the FreeBSD hosts I use the speed usually tops 
>> out at 3--4 MB/s.  These are Linux and FreeBSD hosts that are on the same 
>> subnet at work.  Transfers from the FreeBSD hosts at work (within-subnet and 
>> within-site) are fine and match those of the Linux hosts---often 112 MB/s.  
>> So, it just appears to be the traffic over the WAN to my home that is 
>> affected.  The WAN path from home to this subnet is typically 15 hops with a 
>> typical average ping latency of about 23 ms.
>> The FreeBSD hosts are a mixture of -CURRENT, 13-STABLE, and 13.1-RELEASE.  I 
>> had done some TCP tuning based upon the calomel.org tuning document 
>> (https://calomel.org/freebsd_network_tuning.html), but removed those tuning 
>> settings when I noticed the problem; the problem still persists.  The 
>> only remaining customisation is that the 13-STABLE hosts have 
>> "net.inet.tcp.cc.algorithm=cubic".  (I noticed that -CURRENT now has this as 
>> the default, so I wanted to try it on 13-STABLE, too.)  The FreeBSD systems are 
>> using either igb or em NICs.  The Linux systems are using similar hardware.  
>> None has a problem maintaining local GbE transfer speeds---it's only the 
>> slower/longer WAN connections that have problems for the FreeBSD hosts.
>> It seems that Linux hosts cope with the WAN path to my home better than the 
>> FreeBSD systems.  Has anyone else noticed this?  Does anyone have any idea 
>> as to what is obviously going wrong here and how I might debug/fix the 
>> FreeBSD hosts to yield faster speeds?  My workaround at the moment is to 
>> favour using the remote Linux hosts for bulk data transfers.  (I don't like 
>> this workaround.)
>> Any help/insight is gratefully appreciated.
>> Cheers,
>> Paul.
> 
> sysctl net.inet.tcp.cc.algorithm=htcp
> 
> I would set "htcp" on both the server and the home computer to improve 
> throughput in your type of situation.


I did not mention this explicitly, but part of the "some TCP tuning based upon 
the calomel.org tuning document" I mention having done (and then removed) was to 
use the "htcp" congestion control algorithm.  I 
restored the use of "htcp" at your suggestion and notice it does improve 
matters slightly, but I still get nowhere near maxing out my download pipe as I 
can when downloading from Linux hosts at $JOB.  Switching back to "htcp" on the 
FreeBSD servers improves matters from 3--4 MB/s for bulk downloads to 5--6 MB/s 
(with some variability) based upon several test downloads.
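For anyone following along, this is roughly how I switch the congestion control 
algorithm on the FreeBSD side (a minimal sketch; on releases where H-TCP is not 
compiled into the kernel, the cc_htcp module has to be loaded first):

```shell
# Load the H-TCP congestion control module (no-op if built into the kernel)
kldload cc_htcp

# Switch the default TCP congestion control algorithm at runtime
sysctl net.inet.tcp.cc.algorithm=htcp

# Make both changes persistent across reboots
echo 'cc_htcp_load="YES"' >> /boot/loader.conf
echo 'net.inet.tcp.cc.algorithm=htcp' >> /etc/sysctl.conf
```

Existing connections keep the algorithm they were established with, so I re-ran 
the test downloads on fresh connections after switching.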

The clients at home are a mixture but typically are macOS and FreeBSD.  My home 
setup uses OPNsense 23.1 as a gateway, using NAT for IPv4 and Hurricane 
Electric for IPv6.  (I'm using "htcp" CC on OPNsense.  I'm also using the 
Traffic Shaper on OPNsense and have a FQ_CoDel setup defined that yields an 
A/A+ result on BufferBloat tests.)  The remote servers at $JOB (both Linux and 
FreeBSD) are on the same subnet as each other and not behind a NAT.  I have 
been doing the download tests over IPv4 ("curl -v -4 -o /dev/null ...").
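For concreteness, the comparison amounts to something like the following (host 
names and file path are placeholders, not the real machines; curl's 
"%{speed_download}" write-out variable prints the mean transfer rate, which 
makes runs easy to compare):

```shell
# Compare bulk IPv4 download throughput from a Linux and a FreeBSD host at $JOB.
# The host names and test file below are hypothetical placeholders.
for host in linux-host.example.com freebsd-host.example.com; do
  # -4 forces IPv4; -o /dev/null discards the payload;
  # -w prints the average download speed in bytes/second
  curl -4 -s -o /dev/null -w "${host}: %{speed_download} bytes/s\n" \
      "https://${host}/testfile.bin"
done
```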

Cheers,

Paul.
