Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Jan Ehrhardt via curl-library
Gisle Vanem via curl-library (Thu, 9 Aug 2018 18:31:12 +0200):
>Jan Ehrhardt wrote:
>
>>> Wow dude! 2 times faster than FileZilla now.
>>>
>>> Time decreased from 33.153s to 6.4 sec (same random 10 MByte file).
>>> Versus approx. 5.3 sec for curl/FTP.
>> 
>> Using SFTP?
>
>Yes:
>curl.exe -k -# --write-out "speed: %%{speed_upload} bytes/sec, total-time: 
>%%{time_total}" ^
>  sftp://xyz -T c:\TEMP\curl-test.file
>speed: 1649348,000 bytes/sec, total-time: 6,063000

Can you make your compiled version available somewhere? I tried different
combinations of VC9, VC11, VC14 and x86 / x64 and cannot get it higher than 300k
at the moment.
-- 
Jan

---
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette:   https://curl.haxx.se/mail/etiquette.html

Re: Issues uploading file to NextCloud server

2018-08-09 Thread Daniel Stenberg via curl-library

On Mon, 6 Aug 2018, Andy Pont wrote:

The debug output on the console when I run the script manually returns 
"HTTP/1.1 400 Bad Request". The logging in the Nextcloud instance shows the 
request being submitted, but it fails with the error "BadRequest: expected file 
size X got 0".


I can’t work out where to go next to determine whether this is a 
configuration issue with Nextcloud or me being dumb in the request I am 
sending.


Are you supposed to be able to PUT to that location?

--

 / daniel.haxx.se

Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Gisle Vanem via curl-library

Jan Ehrhardt wrote:


Wow dude! 2 times faster than FileZilla now.

Time decreased from 33.153s to 6.4 sec (same random 10 MByte file).
Versus approx. 5.3 sec for curl/FTP.


Using SFTP?


Yes:
curl.exe -k -# --write-out "speed: %%{speed_upload} bytes/sec, total-time: 
%%{time_total}" ^
 sftp://xyz -T c:\TEMP\curl-test.file
speed: 1649348,000 bytes/sec, total-time: 6,063000


--
--gv

Re: multi_socket and epoll example

2018-08-09 Thread James Read via curl-library
On Thu, Aug 9, 2018 at 7:56 AM, Daniel Stenberg  wrote:

> On Thu, 9 Aug 2018, James Read wrote:
>
> Everything seems to work fine. With a single URL and with multiple URLs.
>> The only issue I have is the throughput.
>>
>
> Could be ephiperfifo.c example issues.
>
> For example, I don't see how it handles timeouts properly: the timeout
> that libcurl passes to the CURLMOPT_TIMERFUNCTION callback should make a
> timer fire after that time, and then curl_multi_socket_action( ...
> CURL_SOCKET_TIMEOUT ...) should be called.
>
>
As far as I can see curl_multi_socket_action is called.

/* Update the timer after the curl_multi library does its thing. Curl will
 * inform us through this callback what it wants the new timeout to be,
 * after it does some work. */
static int multi_timer_cb(CURLM *multi, long timeout_ms, GlobalInfo *g)
{
  struct itimerspec its;
  CURLMcode rc;

  fprintf(MSG_OUT, "multi_timer_cb: Setting timeout to %ld ms\n",
          timeout_ms);

  /* disarm the timer first; 'its' must be zeroed before this call */
  memset(&its, 0, sizeof(struct itimerspec));
  timerfd_settime(g->tfd, /*flags=*/ 0, &its, NULL);
  if(timeout_ms > 0) {
    its.it_interval.tv_sec = 1;
    its.it_interval.tv_nsec = 0;
    its.it_value.tv_sec = timeout_ms / 1000;
    its.it_value.tv_nsec = (timeout_ms % 1000) * 1000 * 1000; /* ms -> ns */
    timerfd_settime(g->tfd, /*flags=*/ 0, &its, NULL);
  }
  else if(timeout_ms == 0) {
    rc = curl_multi_socket_action(g->multi,
                                  CURL_SOCKET_TIMEOUT, 0,
                                  &g->still_running);
    mcode_or_die("multi_timer_cb: curl_multi_socket_action", rc);
  }
  else {
    memset(&its, 0, sizeof(struct itimerspec));
    timerfd_settime(g->tfd, /*flags=*/ 0, &its, NULL);
  }
  return 0;
}
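
To address the timeout point from the quoted reply: when the timerfd armed in multi_timer_cb expires, the epoll loop must drive libcurl with curl_multi_socket_action(). A sketch of that handler, modeled on curl's ephiperfifo.c example (GlobalInfo, mcode_or_die and check_multi_info are assumed to come from that example, not from the post above):

```c
/* Called by the epoll loop when g->tfd becomes readable, i.e. the
   timeout libcurl requested in multi_timer_cb has elapsed. */
static void timer_cb(GlobalInfo *g)
{
  CURLMcode rc;
  uint64_t count = 0;

  /* drain the timerfd so epoll does not keep reporting it */
  if(read(g->tfd, &count, sizeof(count)) == -1 && errno == EAGAIN)
    return; /* spurious wakeup, nothing to do */

  /* let libcurl act on its expired timeout */
  rc = curl_multi_socket_action(g->multi, CURL_SOCKET_TIMEOUT, 0,
                                &g->still_running);
  mcode_or_die("timer_cb: curl_multi_socket_action", rc);
  check_multi_info(g);
}
```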

Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Jan Ehrhardt via curl-library
Gisle Vanem via curl-library (Thu, 9 Aug 2018 17:48:49 +0200):
>Daniel Stenberg wrote:
>
>>   /* The upload buffer size, should not be smaller than CURL_MAX_WRITE_SIZE, 
>> as
>>      it needs to hold a full buffer as could be sent in a write callback */
>> -#define UPLOAD_BUFSIZE CURL_MAX_WRITE_SIZE
>> +#define UPLOAD_BUFSIZE (512*1024)
>
>Wow dude! 2 times faster than FileZilla now.
>
>Time decreased from 33.153s to 6.4 sec (same random 10 MByte file).
>Versus approx. 5.3 sec for curl/FTP.

Using SFTP?
-- 
Jan


Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Jan Ehrhardt via curl-library
Daniel Stenberg via curl-library (Thu, 9 Aug 2018 16:55:38 +0200 (CEST)):
>On Thu, 9 Aug 2018, Jan Ehrhardt via curl-library wrote:
>
>> curl plain ftp patched   41 seconds
>> curl patched sftp  1925 seconds
>
>Oh what a sad number there... =(
>
>A quick little experiment could be to try upping the upload buffer size 
>significantly:
snip patch

Patch applied. On another connection (slower than back at home) it now reports

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  274M0 0  100  274M  0   871k  0:05:22  0:05:22 --:--:--  900k
100  274M0 0  100  274M  0   871k  0:05:22  0:05:22 --:--:--  871k

322 seconds. Lot faster.
-- 
Jan


Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Gisle Vanem via curl-library

Daniel Stenberg wrote:


  /* The upload buffer size, should not be smaller than CURL_MAX_WRITE_SIZE, as
     it needs to hold a full buffer as could be sent in a write callback */
-#define UPLOAD_BUFSIZE CURL_MAX_WRITE_SIZE
+#define UPLOAD_BUFSIZE (512*1024)


Wow dude! 2 times faster than FileZilla now.

Time decreased from 33.153s to 6.4 sec (same random 10 MByte file).
Versus approx. 5.3 sec for curl/FTP.

--
--gv

Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Daniel Stenberg via curl-library

On Thu, 9 Aug 2018, Jan Ehrhardt via curl-library wrote:


curl plain ftp patched   41 seconds
curl patched sftp  1925 seconds


Oh what a sad number there... =(

A quick little experiment could be to try upping the upload buffer size 
significantly:


diff --git a/lib/urldata.h b/lib/urldata.h
index 7d396f3f2..ccfe83b3b 100644
--- a/lib/urldata.h
+++ b/lib/urldata.h
@@ -142,11 +142,11 @@ typedef ssize_t (Curl_recv)(struct connectdata *conn, /* 
connection data */

 #include <libssh2_sftp.h>
 #endif /* HAVE_LIBSSH2_H */

 /* The upload buffer size, should not be smaller than CURL_MAX_WRITE_SIZE, as
    it needs to hold a full buffer as could be sent in a write callback */
-#define UPLOAD_BUFSIZE CURL_MAX_WRITE_SIZE
+#define UPLOAD_BUFSIZE (512*1024)

 /* The "master buffer" is for HTTP pipelining */
 #define MASTERBUF_SIZE 16384

 /* Initial size of the buffer to store headers in, it'll be enlarged in case


--

 / daniel.haxx.se

Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Jan Ehrhardt via curl-library
Gisle Vanem via curl-library (Thu, 9 Aug 2018 13:51:50 +0200):
>Jan Ehrhardt wrote:
>
>>> 33.153s vs 5.4s for a 10 MByte file.
>> 
>> Did you time how long Filezilla takes for the same action? Filezilla
>> squeezes quite a lot over sftp-connections...
>
>11.4 sec!!
snip
>I must be using the wrong compiler and SSL-lib :-)
>But, it's certainly possible to make SFTP faster.

Same results here. Yesterday I had a discussion with a customer, uploading
a 274MB file from an iPad over sftp. That took 2:27 minutes. I repeated the
upload now with Filezilla, lftp and curl

TL;DR

iPad sftp (NMSSH2)  147 seconds
Filezilla sftp   67 seconds
lftp sftp70 seconds
curl plain ftp vanilla   56 seconds
curl plain ftp patched   41 seconds (the 2nd is my own compiled version: 42s)
curl patched sftp  1925 seconds
-- 
Jan

Microsoft Windows [Version 6.1.7601]
running lftp

ptime 1.0 for Win32, Freeware - http://www.pc-tools.net/
Copyright(C) 2002, Jem Berkes 

===  E:\utils\bash.exe /utils/bash.sh ===

Execution time: 70.587 s
running vanilla...

ptime 1.0 for Win32, Freeware - http://www.pc-tools.net/
Copyright(C) 2002, Jem Berkes 

===  curl -w"start:%{time_starttransfer} total:%{time_total}\n" removed
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  274M0 0  100  274M  0  4982k  0:00:56  0:00:56 --:--:-- 4960k
start:0,374000 total:56,394000

Execution time: 56.547 s
running patched...

ptime 1.0 for Win32, Freeware - http://www.pc-tools.net/
Copyright(C) 2002, Jem Berkes 

===  curl -w"start:%{time_starttransfer} total:%{time_total}\n" removed
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  274M0 0  100  274M  0  6812k  0:00:41  0:00:41 --:--:-- 6420k
start:0,375000 total:41,247000

Execution time: 41.408 s
running curl patched...

ptime 1.0 for Win32, Freeware - http://www.pc-tools.net/
Copyright(C) 2002, Jem Berkes 

===  curl -w"start:%{time_starttransfer} total:%{time_total}\n" removed
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  274M0 0  100  274M  0  6617k  0:00:42  0:00:42 --:--:-- 6923k
start:0,374000 total:42,463000

Execution time: 42.699 s
running patched sftp

ptime 1.0 for Win32, Freeware - http://www.pc-tools.net/
Copyright(C) 2002, Jem Berkes 

===  curl -w"start:%{time_starttransfer} total:%{time_total}\n" removed
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  274M0 0  100  274M  0   145k  0:32:05  0:32:05 --:--:--  150k
100  274M0 0  100  274M  0   145k  0:32:05  0:32:05 --:--:--  145k
start:0,421000 total:1925,349000

Execution time: 1925.451 s


Re: a URL API ?

2018-08-09 Thread Daniel Jeliński via curl-library
2018-08-09 14:15 GMT+02:00 Daniel Stenberg via curl-library:
> ... or should it perhaps just skip the *first* '=' ?

I don't think any URL parsing library cares about = beyond the first
one. Which is why = in name may pose a problem, but in value probably
won't. I'd skip all.

Re: a URL API ?

2018-08-09 Thread Daniel Stenberg via curl-library

On Thu, 9 Aug 2018, Daniel Stenberg via curl-library wrote:

(replying to myself...)


 /* append to query, ask for encoding */
 curl_url_set(h, CURLUPART_QUERY, "company=AT&T", CURLU_APPENDQUERY|
  CURLU_URLENCODE);



- CURLU_URLENCODE with CURLU_APPENDQUERY set, will skip the '=' letter when
 doing the encoding


... or should it perhaps just skip the *first* '=' ?

If we ponder a user wants to add a "name=contents" pair, I figure the first 
assignment is the one that shouldn't be encoded but subsequent ones then 
presumably should?


--

 / daniel.haxx.se

Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Gisle Vanem via curl-library

Jan Ehrhardt wrote:


33.153s vs 5.4s for a 10 MByte file.


Did you time how long Filezilla takes for the same action? Filezilla
squeezes quite a lot over sftp-connections...


11.4 sec!!

From the "About" box:
  Version:  3.31.0
  Build information:
Compiled for:   x86_64-w64-mingw32
Compiled with:  x86_64-w64-mingw32-gcc (GCC) 6.3.0 20170516
Compiler flags: -g -O2 -Wall
GnuTLS: 3.5.18
CPU features:   sse sse2 sse3 ssse3 sse4.1 sse4.2 avx avx2 aes pclmulqdq 
rdrnd bmi2 bmi2

I must be using the wrong compiler and SSL-lib :-)
But, it's certainly possible to make SFTP faster.

--
--gv

Re: a URL API ?

2018-08-09 Thread Daniel Stenberg via curl-library

On Thu, 9 Aug 2018, Daniel Jeliński via curl-library wrote:


 char *append = "=44";


Well, assuming we want to use the API to build a URL based on an HTML form with 
GET action, curl_url_query_append suggested by Geoff would be much nicer.


Yes, you're right. I've taken a more generic approach that isn't at all aware 
of HTML forms.



In particular, I would expect the API to:
- figure out if it needs to add & or ?
- figure out if it needs to URLEncode the parameter or value (eg. when
setting "company"="AT&T", we need to escape the ampersand)
- do the appending / memory allocation part on its own
What do you think?


I hear you! How about...

A dedicated feature bit to append the string to the query?

  /* append to query, ask for encoding */
  curl_url_set(h, CURLUPART_QUERY, "company=AT", CURLU_APPENDQUERY|
   CURLU_URLENCODE);

  /* append to query, already encoded */
  curl_url_set(h, CURLUPART_QUERY, "company=AT%26T", CURLU_APPENDQUERY);


- CURLU_APPENDQUERY makes it also add a '&' before the string if there's
  already contents in the query.
- CURLU_URLENCODE with CURLU_APPENDQUERY set, will skip the '=' letter when
  doing the encoding

--

 / daniel.haxx.se

Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Jan Ehrhardt via curl-library
Gisle Vanem via curl-library (Thu, 9 Aug 2018 01:42:18 +0200):
>Jan Ehrhardt wrote:
>
>> I ended up with a Windows port of lftp, launched from a bash script. Curl 
>> sftp
>> did resume, but was terribly slow. 
>
>I also just tested with 'curl sftp://' with the latest libssh2
>and the new 'SIO_IDEAL_SEND_BACKLOG_QUERY' option. 'sftp://' is
>still 6 times slower than ftp against the same server in Denmark.
>
>33.153s vs 5.4s for a 10 MByte file.

Did you time how long Filezilla takes for the same action? Filezilla
squeezes quite a lot over sftp-connections...
-- 
Jan


Re: Pipelining is a pain, can we ditch it now?

2018-08-09 Thread Daniel Stenberg via curl-library

On Thu, 9 Aug 2018, Radu Hociung via curl-library wrote:

I have looked at every one of those reports, and I found no unresolved bugs. 
Care to list the remaining unaddressed issues?


Off the top of my head, here are a few things I think are important to get 
fixed to increase my confidence-level in pipelining:


1. the flaky tests suggests things are not stable

2. we have too few tests for such a complicated feature, so clearly we're not 
testing enough. In particular we need more tests for broken connections with 
several unfinished requests on the connection.


3. stopping a pipelined request before it's done causes segfaults. It'd be 
interesting to have test cases that remove the handles at various stages during 
a pipelined transfer.



I noted that some are looking for functionality which does not exist in
the protocol (like cancelling already issued requests).


I don't think that's a correct reading of the problem.

libcurl offers an API that makes generic transfers. In the multi API you can 
do many parallel transfers and they may or may not use pipelining. The API has 
*always* said that you can just remove an easy handle from the multi when 
you're done with it (you want to abort it) and want to take it out of the 
transfer loop. A typical application doesn't know, nor does it have to care, if 
that particular transfer uses pipelining at that point. When it wants 
to stop one of the transfers, it removes the easy handle.


The fact that it works to remove the handle in all cases *except* for some 
situations when the transfer happens to do pipelining is a bug. A pretty 
serious bug too since it might cause a segfault.



It's not clear to me what those devs expect would happen in that scenario;


We (should) expect libcurl to deal with it appropriately. Not crashing is a 
required first step.


I would imagine that depending on the state of the transfer, we'd have to 
somehow mark it as pretend-its-removed for the user (and stop delivering data 
to the callback etc) until the transfer has proceeded far enough so that we 
can remove it for real.



Also I don't know why they expect this functionality to exist.


Because it is documented to work like that? Because users want to stop 
transfers.



the messy data organization would just cause new bugs to pop up


So maybe start there and clean up? Or work on improving the test suite perhaps 
to make sure it catches remaining issues.


I offered to rewrite multi.c and contribute it back, but there is no 
interest from you apparently.


Nobody here needs rewrites. We need iterative improvements. Grab one 
issue/problem, fix it and land it. One at a time. Over time that equals the 
same thing as a rewrite, but it gets done by taking many small steps.


But if you truly think a larger take is necessary for something specific, then 
do that larger take as small as possible and explain to us why we want to do 
it like that. And a word of advice: try not to be rude and sprinkle the 
explanation with insults; it doesn't get received as well then.


So please, do help us out!

There are forks of libcurl being maintained and used, just nobody has yet 
volunteered to maintain their own fork as a lightweight, packageable 
alternative to libcurl yet.


While I think it is sad that we can't all work together on a common goal in a 
single project instead of spreading the effort and developers on several 
forks, they're all of course entitled to do that and so are you if you prefer.



URL parsing is not apparently RFC compliant, but WHATWG compliant


I think you should go back and read my posts and blogs again on this topic 
because that's... wildly incorrect.



Your libcurl release schedule is fast and furious, like the browsers.


Allow me to point out that I did "fast and furious" releases many many years 
before the browsers figured out it was a good idea.


Frequent releases are good for the community.


I'm sorry if I seem rude to you


You don't "seem" rude to me. You are, and deliberately so too.

In bug 2701 which you closed, I offered to contribute back my rewritten 
multi.c, that would guarantee order of execution. I provided a test case, 
but I don't have time at the moment to work on multi.c. You closed it.


There was no code in issue 2701; the issue was a bug report about transfer 
ordering - something that libcurl doesn't guarantee (and then it side-tracked 
out on a long rant about libcurl internals that was clearly off-topic). The 
bug report was closed, as I don't see a bug there. I didn't feel you cared 
about the actual bug much anymore at that point.


If you provide a patch/PR with actual code, I would review and discuss it.

Like we *could* discuss what transfer ordering libcurl could guarantee and how 
we could go about making it work. If we could stay on topic and be civil.



In bug 2782 I pointed out the examples with external event loops contain
a use-after-free/segfaulting bug (which took me at least 1 day of hair
pulling to 

Re: a URL API ?

2018-08-09 Thread Daniel Jeliński via curl-library
2018-08-09 10:48 GMT+02:00 Daniel Stenberg via curl-library:
> Say we want to append this to the query:
>
>  char *append = "=44";

Well, assuming we want to use the API to build a URL based on an HTML form
with GET action, curl_url_query_append suggested by Geoff would be
much nicer. In particular, I would expect the API to:
- figure out if it needs to add & or ?
- figure out if it needs to URLEncode the parameter or value (eg. when
setting "company"="AT&T", we need to escape the ampersand)
- do the appending / memory allocation part on its own
What do you think?

Re: a URL API ?

2018-08-09 Thread Daniel Stenberg via curl-library

On Thu, 2 Aug 2018, Geoff Beier wrote:

The setters would be important to us. I might be bikeshedding here, but the 
ability to add to the query would be very nice. So something like 
curl_url_query_append(urlp, "numitems", 3)


Returning to this, as I've polished the API a bit over the last few days. The 
wiki page has been updated to reflect the changes I've done.


As the curl URL API works now, this is how you append a string to the query 
of a URL.


First, create a handle and pass it a full URL:

 CURLU *h = curl_url();
 curl_url_set(h, CURLUPART_URL, "https://example.com/foo?askforthis", 0);

Say we want to append this to the query:

 char *append = "=44";

We extract the query part

 char *q;
 curl_url_get(h, CURLUPART_QUERY, &q, 0);

Make space for the new, enlarged query using regular memory management and 
create the updated query there. The 'q' pointer points to memory managed by 
libcurl so it can't be realloc'ed.


 char *newptr = malloc(strlen(q) + strlen(append) + 1);
 strcpy(newptr, q);
 strcat(newptr, append);

Then replace the former query part in the URL by setting this new one:

 curl_url_set(h, CURLUPART_QUERY, newptr, 0);

Free the data

 curl_free(q);
 free(newptr);

... and now we can extract the full URL again and it will have the updated 
query part:


 char *url;
 curl_url_get(h, CURLUPART_URL, &url, 0);

--

 / daniel.haxx.se

Re: Pipelining is a pain, can we ditch it now?

2018-08-09 Thread Kamil Dudka via curl-library
On Thursday, August 9, 2018 9:08:17 AM CEST Radu Hociung via curl-library 
wrote:
> On 06/08/2018 4:53 PM, Daniel Stenberg wrote:
> > Right, quite ironic I think that it wasn't a pipelining bug that
> > finally triggered me to put pipelining up for deprecation...
> 
> You keep saying there are lots of bugs and missing test cases.

The upstream test-suite includes some (8 according to my quick grep) tests for 
HTTP pipelining but most of them have never worked reliably enough.  4 tests 
are already marked as flaky by upstream.  I had to disable another one a week 
ago because it was causing random build failures on the Fedora build service:

https://src.fedoraproject.org/cgit/rpms/curl.git/commit/?id=3fb6e235

I tried to debug the test failures several times but have never been able to 
figure out whether the bug was in libcurl or whether the failing test-cases 
were just verifying something that was not guaranteed to hold.

On top of that, I have been maintaining curl in Fedora and RHEL since 2009
and never seen a single bug report on the HTTP pipelining, which suggests
that not many people actually use curl's pipelining on these distributions.

Kamil



Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Daniel Stenberg via curl-library

On Thu, 9 Aug 2018, Daniel Jeliński via curl-library wrote:

There's no way for curl/libssh2 to put more than 16 KB on the wire. It's 
right there in libssh2 docs [1]: [libssh2_sftp_write] will return a positive 
number as soon as the first packet is acknowledged from the server.


That's incorrect. libssh2 sends off many separate packets at the same time (if 
given the opportunity) and the write call returns as soon as the *first* packet 
is acked, so if you keep sending large data buffers there will be more 
outstanding and you can get much faster speeds.


Since sftp packets are up to 32 KB, and curl is using 16KB send buffer, the 
buffer always fits in one packet and the function always waits for 
acknowledgement before returning. So your transfer rate will never exceed 16 
KB / ping.


Yes, and I've actually explained this architecture issue on this list many 
times before. SFTP is a rather particular protocol, and libssh2's API for it 
isn't ideal which makes curl suffer. (I blogged about it once over here: 
https://daniel.haxx.se/blog/2010/12/08/making-sftp-transfers-fast/)


It seems relatively easy to change the procedure to get rid of this 
write-through semantics. Not sure if the project is still actively 
maintained though, its mailing list [2] looks very quiet.


It's active all right (as opposed to dead) but during periods not a lot of 
things happen. Mostly just because of a lack of developers (and partly because 
I can't really keep up being a good maintainer of it)...



I didn't check curl with libssh backend, it may be worth a try.


It may! I haven't looked into the libssh SFTP specifics for a long time. 
Hopefully they've solved these challenges better!


--

 / daniel.haxx.se

Re: Windows users! Help us test upload performance tuning?

2018-08-09 Thread Daniel Jeliński via curl-library
2018-08-09 1:42 GMT+02:00 Gisle Vanem via curl-library:
> I also just tested with 'curl sftp://' with the latest libssh2
> and the new 'SIO_IDEAL_SEND_BACKLOG_QUERY' option. 'sftp://' is
> still 6 times slower than ftp against the same server in Denmark.
>
> 33.153s vs 5.4s for a 10 MByte file.

There's no way for curl/libssh2 to put more than 16 KB on the wire.
It's right there in libssh2 docs [1]: [libssh2_sftp_write] will return
a positive number as soon as the first packet is acknowledged from the
server.

Since sftp packets are up to 32 KB, and curl is using 16KB send
buffer, the buffer always fits in one packet and the function always
waits for acknowledgement before returning. So your transfer rate will
never exceed 16 KB / ping.

It seems relatively easy to change the procedure to get rid of this
write-through semantics. Not sure if the project is still actively
maintained though, its mailing list [2] looks very quiet.

I didn't check curl with libssh backend, it may be worth a try.

[1] https://www.libssh2.org/libssh2_sftp_write.html
[2] https://www.libssh2.org/mail.cgi