Hi,

I noticed a curious phenomenon with an application that works and performs fine on Linux, but shows consistently lower throughput (by a factor of 2-3x) on Windows. It streams data through a channel in 64K chunks, which works splendidly on Linux. After investigating, I noticed each individual call to ssh_channel_write blocking for 15-16 ms on Windows, despite only around 1-2 ms of latency between the hosts.
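For illustration, the effect shows up with a loop as simple as the one below (a minimal sketch, not the actual application; the host, the exec'd command and the now_ms() helper are placeholders, and error checking is omitted; on Windows you would use QueryPerformanceCounter or timespec_get instead of clock_gettime):

#include <libssh/libssh.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Placeholder millisecond clock (POSIX spelling). */
static double now_ms(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ts.tv_sec * 1000.0 + ts.tv_nsec / 1e6;
}

int main(void)
{
    char chunk[65536];
    memset(chunk, 'x', sizeof(chunk));

    /* Error checking omitted for brevity. */
    ssh_session session = ssh_new();
    ssh_options_set(session, SSH_OPTIONS_HOST, "example-host");
    ssh_connect(session);
    ssh_userauth_publickey_auto(session, NULL, NULL);

    ssh_channel channel = ssh_channel_new(session);
    ssh_channel_open_session(channel);
    ssh_channel_request_exec(channel, "cat > /dev/null");

    /* Time each 64K write individually; on Windows most of these
     * land at ~15-16 ms, on Linux they track the network latency. */
    for (int i = 0; i < 100; i++) {
        double t0 = now_ms();
        ssh_channel_write(channel, chunk, sizeof(chunk));
        printf("write %d took %.1f ms\n", i, now_ms() - t0);
    }

    ssh_channel_send_eof(channel);
    ssh_channel_close(channel);
    ssh_channel_free(channel);
    ssh_disconnect(session);
    ssh_free(session);
    return 0;
}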
I dug into the libssh core and noticed that ssh_channel_write in effect sends a handful of SSH packets (up to 32K each; due to the inner packet headers, a 64K write actually ends up as two large packets plus a small 20-byte one) and then calls ssh_channel_flush, which waits for the send buffer to drain. ssh_channel_flush appears to be subject to the low-resolution Windows timer of 15.625 ms and blocks for a multiple of that, and often enough it ends up blocking for those 15 ms, which reduces throughput.

I worked around this by simply increasing the buffer size passed to ssh_channel_write, so that each 15 ms delay is amortized over a much larger buffer (rough sketch in the P.S. below). I don't really think a change to libssh is necessary per se (enabling high-resolution timers on Windows still has some drawbacks as far as I know, such as worse power consumption); I just wanted to leave the story here in case someone else stumbles over a similar issue.

Cheers,
Marian
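P.S. In case it helps anyone, the workaround boils down to a small write-side batching layer in the application, roughly like the following (a sketch only; the 1 MiB batch size and the struct/function names are made up for illustration, not the exact values I used):

#include <libssh/libssh.h>
#include <stdint.h>
#include <string.h>

/* Instead of handing every 64K chunk to ssh_channel_write (one potential
 * ~15 ms flush hit per call), chunks are accumulated and written in one
 * much larger call, so the timer-granularity delay is amortized. */
#define BATCH_SIZE (1024 * 1024)

struct write_batch {
    ssh_channel channel;
    size_t fill;
    char buf[BATCH_SIZE];   /* allocate the struct on the heap or statically */
};

/* Write out whatever has accumulated so far. */
static int batch_flush(struct write_batch *b)
{
    if (b->fill == 0)
        return SSH_OK;
    int rc = ssh_channel_write(b->channel, b->buf, (uint32_t)b->fill);
    b->fill = 0;
    return rc == SSH_ERROR ? SSH_ERROR : SSH_OK;
}

/* Queue a chunk, writing out only when the batch is full. */
static int batch_write(struct write_batch *b, const void *data, size_t len)
{
    while (len > 0) {
        size_t room = BATCH_SIZE - b->fill;
        size_t n = len < room ? len : room;
        memcpy(b->buf + b->fill, data, n);
        b->fill += n;
        data = (const char *)data + n;
        len -= n;
        if (b->fill == BATCH_SIZE && batch_flush(b) == SSH_ERROR)
            return SSH_ERROR;
    }
    return SSH_OK;
}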