Does anybody have thoughts on this? I also faced this issue: http://www.libssh2.org/mail/libssh2-devel-archive-2014-06/0001.shtml — I think we should have a solution along the lines Joern suggested.
-Nitin

On Fri, Aug 29, 2014 at 2:15 PM, Joern Heissler <[email protected]> wrote:
> Hi,
>
> I'm trying to download a large text file using the sftp protocol.
>
> The remote server runs on "Maverick SSHD". I'm using libssh2-1.4.3 (debian
> unstable).
>
> I enabled compression and negotiated zlib because it's a text file.
>
> Next, I compared the speed to what OpenSSH's `sftp' utility achieves, and
> libssh2 was just terribly slow.
>
> Then I increased the buffer size for libssh2_sftp_read to a big value. It
> helps a little, but the chunks returned by libssh2_sftp_read are exactly
> 2000 bytes, regardless of my setting.
>
> tcpdump shows that the packets sent by the server are mostly around
> 200-300 bytes, which is obviously too small.
>
> I found that when I change MAX_SFTP_READ_SIZE from 2000 to a larger
> value, the packet size increases, as does the download speed.
>
> To me it looks like the server has strange TCP_NODELAY / TCP_CORK
> settings. For each request of 2000 bytes, the data is gzipped and sent in
> one tcp packet (or multiple if too large).
> I found that a chunk size of 13500 bytes gives me a good ratio of
> uncompressed_bytes / tcp_packets.
>
> The optimal value for MAX_SFTP_READ_SIZE heavily depends on the specific
> use case, so I ask that it's made a configurable option, please :)
>
> Thanks,
> Joern
> _______________________________________________
> libssh2-devel http://cool.haxx.se/cgi-bin/mailman/listinfo/libssh2-devel
