On 3/8/26 22:44, Michael van Elst wrote:
> You should have seen faster congestion recovery.

Oh, in that case, I definitely did. The cases where it collapses for a
long time are almost nonexistent, and dips often recover more quickly
than before.

> I don't have a very fast WAN connection for testing, but in
> a simulated WAN with 100ms RTT, the patch allows me to get
> close to 1Gbit/s.

I get 300-455 Mbit/s in iperf3, which is actually pretty good compared
to what I usually see from servers in these locations.

> The actual speed of both methods is the same, this is just about
> what you see in the progress report.

I see, that makes sense. I was watching Wireshark and saw no errors; if
it was just transferring slowly, that makes total sense.

> When legacy SCP is still slower than wire speed, then it's either
> CPU limitation, or the fact that ssh uses its own kind of buffering
> that limits what TCP can do. There is a "high performance networking"
> patch to ssh (that the NetBSD ssh partially includes) that could
> help, but since it had even a negative effect in recent versions
> of ssh, it has been disabled.

What's odd is that this happens in only one direction: server-to-client
runs at the expected speed (~7 MiB/s), but client-to-server hovers at
around 800 KiB/s.

> You may play with the "HPNDisabled"
> flag and the "HPNBufferSize" value of ssh client and server.

I tried setting HPNDisabled to off and HPNBufferSize to 16777216. That
definitely makes the speed ramp up much faster, but it still ends up
hovering around 800 KiB/s. I also tried compiling OpenSSH Portable from
the upstream repository, with similar results.

I also tried rsync (with its own daemon/protocol), and it hits 8 MiB/s
fine both ways. So it really only happens with OpenSSH, even with the
upstream version, and only when the server is receiving. I'll dig some
more.
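For anyone following along, this is roughly what I put in my client
config (a sketch; the host name is a placeholder, and the HPN options
are only recognized by an HPN-patched ssh, such as the one in NetBSD
base):

```
# ~/.ssh/config -- HPN patch options (HPN-enabled ssh only)
Host fileserver              # placeholder host name
    HPNDisabled no           # keep the HPN buffer tuning active
    HPNBufferSize 16777216   # 16 MiB, on the order of bandwidth * RTT here
```

The same can be passed one-off with -o on the scp/ssh command line, and
the server side presumably needs matching settings in sshd_config for
the receive direction to benefit.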
