On Jan 31, 2023, at 10:39 AM, David <[email protected]> wrote:

> On 1/30/23 16:30, Matt Garber wrote:
>>     > Any help/insight is gratefully appreciated.
>>     >
>>     > Cheers,
>>     >
>>     > Paul.
>>     >
>>    sysctl net.inet.tcp.cc.algorithm=htcp
>>    I would set "htcp" on the server and the home computer to improve
>>    throughput in your type of situation.
>> There may be other FreeBSD sysctls with bad defaults in this scenario that 
>> could be better tuned, but I doubt the CC algorithm itself is the problem, at 
>> least not to the point of reducing throughput this drastically. Happy to be 
>> wrong if changing it helps things quickly and easily, though.
>> (Since OP mentioned that FreeBSD CC was set to CUBIC, that would match what 
>> the Linux boxes are using by default, too, unless they’ve been changed to 
>> something newer like BBR… so seems like CUBIC *should* be performing fine on 
>> this WAN link, and the difference is something else.)
>> —Matt
> 
> I love FreeBSD and very much appreciate the efforts of people much smarter 
> than I am. But... I don't think the defaults get enough testing in real-world 
> conditions.
> 
> I came across Paul's issue several years ago and spent a few hours testing. I 
> found the defaults performed very well on a LAN but could perform terribly on 
> a many-hop WAN. HTCP performs marginally worse on a LAN or close WAN 
> connection, but much, much better on a many-hop WAN connection.
> 
[[...]]

> In my opinion HTCP is a better default for the current state of the internet.


It looks like they already changed the default from NewReno to CUBIC in 
FreeBSD-CURRENT.

I agree with your observation about the defaults vs. real-world conditions.  As 
you observed, I also get great performance on a high-speed LAN, but not so much 
over a many-hop WAN to an asymmetric ISP connection.
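That combination of long RTT and limited window growth is exactly where the congestion-control choice bites. As a back-of-the-envelope illustration (the numbers below are made up for the example, not measurements from my link), the bandwidth-delay product says how much data must be in flight to fill the pipe, and a congestion window that stalls short of it caps single-stream throughput:

```python
def bdp_bytes(bandwidth_mbps: float, rtt_ms: float) -> float:
    """Bandwidth-delay product: bytes that must be in flight to fill the link."""
    return bandwidth_mbps * 1e6 / 8 * (rtt_ms / 1e3)

def capped_mbps(window_bytes: float, rtt_ms: float) -> float:
    """Throughput ceiling imposed by a fixed window over a given RTT."""
    return window_bytes * 8 / (rtt_ms / 1e3) / 1e6

# Illustrative: a 100 Mbit/s link at 80 ms RTT needs ~1 MB in flight,
# but a window stuck near 256 KB caps the stream at ~25 Mbit/s.
print(bdp_bytes(100, 80))      # bytes needed in flight
print(capped_mbps(256e3, 80))  # Mbit/s achievable with a 256 KB window
```

The same window that easily fills a sub-millisecond LAN path falls far short on a many-hop WAN, which matches the LAN-fine/WAN-slow pattern described above.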

I actually started down this rabbit hole when I noticed I couldn't manage more 
than about 3-4 MB/s in a single stream and thought my ISP was throttling me.  
But then I noticed I would actually get maximum speeds in some cases, e.g., 
when running "brew upgrade -v" while Homebrew was downloading packages, so I 
then wondered whether they were throttling non-HTTP traffic.  That led me to 
discover that even HTTP downloads were slow to the FreeBSD servers I use 
remotely at $JOB, and, furthermore, that traffic to the Linux systems I use at 
$JOB showed no sign of throttling and could reach the maximum single-stream 
bandwidth matching my ISP's quoted speeds. :-\

I accept that this may just be a peculiarity of my local and remote setup, and 
so appreciate the help and suggestions people have offered in trying to debug 
the issue.

Cheers,

Paul.
