On Fri, 27 Nov 2009, Peter Stuge wrote:
> The first one is simple enough. The failure mode for this problem was
> that sometimes libssh2 would call select() but without waiting for any
> socket to become writable, and so it would hang forever.
Oh, nice catch!
>> I've now made libssh2_channel_write_ex() deal with larger than 32K
>> sizes by simply ignoring everything beyond 32K. Since it returns the
>> number of bytes it sends, it shouldn't cause any particular problems
>> for existing apps.
>
> It turns out to be a big problem.
I'm not sure I understand why this is a big problem. Non-optimal sure, but big
problem?
> Since each layer in SSH (SFTP, channel, transport) needs to add some
> bytes to every block of data passed in from upper layers, it is
> actually not correct for the upper layer (application, SFTP, channel)
> to use the size of the data it wants to send as a reference for how
> many bytes the lower layer needs to send - the lower layer in fact
> adds extra bytes, an increasing amount as more layers get involved.
I agree. But then there's nothing that prevents the lower-layer functions
from just using as much data as they can, without cramming more than they
should into the outgoing packets.

The way the code currently passes on only 32K (or 31 or 29, whatever)
shouldn't cause a problem. It will only make transfer usage and speed less
than optimal.
> I don't have a patch to solve this yet. I think the ABI needs to break
> in order to fix this, so discussion would be good.
What ABI breakage do you have in mind that would fix this (and how)?
> A simple solution would be to not return bytes sent to the upper
> layer, but only LIBSSH2_ERROR_EAGAIN or 0 when done. Do you think that
> is sufficient?
I don't follow here. Why would removing that info solve this problem?
--
/ daniel.haxx.se
_______________________________________________
libssh2-devel http://cool.haxx.se/cgi-bin/mailman/listinfo/libssh2-devel