With code of the form below (pulled from the examples), where the source
file is, say, 75MB in size:
...snip...
while ((len = sftp_read(source, data, 4096)) > 0) {
    if (sftp_write(to, data, len) != len) {
        fprintf(stderr, "Error writing %d bytes: %s\n",
                len, ssh_get_error(session));
        sftp_close(to);
        sftp_close(source);
        goto end;
    }
}
...snip...
the read/write loop successfully transfers the complete file.
If the code is altered to:
...snip...
#define BUF_SIZE xxxxx
char data[BUF_SIZE] = {0};
...
while ((len = sftp_read(source, data, BUF_SIZE)) > 0) {
    if (sftp_write(to, data, len) != len) {
        fprintf(stderr, "Error writing %d bytes: %s\n",
                len, ssh_get_error(session));
        sftp_close(to);
        sftp_close(source);
        goto end;
    }
}
...snip...
the read/write loop succeeds until BUF_SIZE reaches 262120 or greater, at
which point sftp_write reports that it wrote BUF_SIZE bytes, but
sftp_packet_read fails with "Received EOF while reading sftp packet size"
and the SFTP server logs a bad message error:
"Jul 15 17:28:28 endpoint sftp-server[1537206]: error: bad message from
100.98.195.26 local user rthompso"
Is this expected? Is there an internal limit on the amount of data that can
be written per invocation?
Is there a way to increase the amount of data that can be successfully
written per invocation?
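
In case this is a per-packet limit, the workaround I am considering is to
keep the large read buffer but split each sftp_write into smaller chunks,
roughly as sketched below. The 32768 chunk size is a guess on my part, not
something I found documented, so I would welcome a better value.

...snip...
#define CHUNK_SIZE 32768  /* guessed safe per-write size, not documented */

while ((len = sftp_read(source, data, BUF_SIZE)) > 0) {
    ssize_t off = 0;
    while (off < len) {
        /* write at most CHUNK_SIZE bytes of the buffer per call */
        size_t chunk = (size_t)(len - off);
        if (chunk > CHUNK_SIZE)
            chunk = CHUNK_SIZE;
        if (sftp_write(to, data + off, chunk) != (ssize_t)chunk) {
            fprintf(stderr, "Error writing %zu bytes: %s\n",
                    chunk, ssh_get_error(session));
            sftp_close(to);
            sftp_close(source);
            goto end;
        }
        off += (ssize_t)chunk;
    }
}
...snip...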
Thanks,
Reid