Gisle Vanem via curl-library (Thu, 9 Aug 2018 18:31:12 +0200):
>Jan Ehrhardt wrote:
>
>>> Wow dude! 2 times faster than FileZilla now.
>>>
>>> Time decreased from 33.153s to 6.4 sec (same random 10 MByte file).
>>> Versus approx. 5.3 sec for curl/FTP.
>>
>> Using SFTP?
>
>Yes:
>curl.exe -k -#
On Mon, 6 Aug 2018, Andy Pont wrote:
The debug output on the console when I run the script manually returns
"HTTP/1.1 400 Bad Request". The logging in the Nextcloud instance shows the
request being submitted, but it fails with the error "BadRequest: expected
file size X got 0".
I
Jan Ehrhardt wrote:
Wow dude! 2 times faster than FileZilla now.
Time decreased from 33.153s to 6.4 sec (same random 10 MByte file).
Versus approx. 5.3 sec for curl/FTP.
Using SFTP?
Yes:
curl.exe -k -# --write-out "speed: %%{speed_upload} bytes/sec, total-time:
%%{time_total}" ^
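The doubled `%%` above is Windows batch escaping for `--write-out` variables; on a Unix shell a single `%` is used. A runnable local sketch of the same measurement, with `file://` standing in for the real `sftp://user@host/path` target (hypothetical paths, adjust for an actual server):

```shell
# create a 1 MB random test file, like the one used in the thread
dd if=/dev/urandom of=/tmp/upload.bin bs=1024 count=1024 2>/dev/null
# upload it and report the transfer speed; file:// is a stand-in target
curl -s -T /tmp/upload.bin "file:///tmp/upload-copy.bin" \
  --write-out "speed: %{speed_upload} bytes/sec, total-time: %{time_total}\n"
```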
On Thu, Aug 9, 2018 at 7:56 AM, Daniel Stenberg wrote:
> On Thu, 9 Aug 2018, James Read wrote:
>
>> Everything seems to work fine. With a single URL and with multiple URLs.
>> The only issue I have is the throughput.
>>
>
> Could be evhiperfifo.c example issues.
>
> For example, I don't see how
Gisle Vanem via curl-library (Thu, 9 Aug 2018 17:48:49 +0200):
>Daniel Stenberg wrote:
>
>> /* The upload buffer size, should not be smaller than CURL_MAX_WRITE_SIZE,
>>    as it needs to hold a full buffer as could be sent in a write callback */
>> -#define UPLOAD_BUFSIZE
Daniel Stenberg via curl-library (Thu, 9 Aug 2018 16:55:38 +0200 (CEST)):
>On Thu, 9 Aug 2018, Jan Ehrhardt via curl-library wrote:
>
>> curl plain ftp patched 41 seconds
>> curl patched sftp 1925 seconds
>
>Oh what a sad number there... =(
>
>A quick little experiment could be to try
Daniel Stenberg wrote:
/* The upload buffer size, should not be smaller than CURL_MAX_WRITE_SIZE, as
it needs to hold a full buffer as could be sent in a write callback */
-#define UPLOAD_BUFSIZE CURL_MAX_WRITE_SIZE
+#define UPLOAD_BUFSIZE (512*1024)
Wow dude! 2 times faster than
On Thu, 9 Aug 2018, Jan Ehrhardt via curl-library wrote:
curl plain ftp patched 41 seconds
curl patched sftp 1925 seconds
Oh what a sad number there... =(
A quick little experiment could be to try upping the upload buffer size
significantly:
diff --git a/lib/urldata.h
Gisle Vanem via curl-library (Thu, 9 Aug 2018 13:51:50 +0200):
>Jan Ehrhardt wrote:
>
>>> 33.153s vs 5.4s for a 10 MByte file.
>>
>> Did you time how long Filezilla takes for the same action? Filezilla
>> squeezes quite a lot over sftp-connections...
>
>11.4 sec!!
[snip]
>I must be using the wrong
2018-08-09 14:15 GMT+02:00 Daniel Stenberg via curl-library:
> ... or should it perhaps just skip the *first* '=' ?
I don't think any URL parsing library cares about = beyond the first
one. Which is why = in name may pose a problem, but in value probably
won't. I'd skip all.
On Thu, 9 Aug 2018, Daniel Stenberg via curl-library wrote:
(replying to myself...)
/* append to query, ask for encoding */
curl_url_set(h, CURLUPART_QUERY, "company=AT", CURLU_APPENDQUERY|
- CURLU_URLENCODE with CURLU_APPENDQUERY set, will skip the '=' letter when
doing the encoding
Jan Ehrhardt wrote:
33.153s vs 5.4s for a 10 MByte file.
Did you time how long Filezilla takes for the same action? Filezilla
squeezes quite a lot over sftp-connections...
11.4 sec!!
From the "About" box:
Version: 3.31.0
Build information:
Compiled for:
On Thu, 9 Aug 2018, Daniel Jeliński via curl-library wrote:
char *append = "=44";
Well assuming we want to use the API to build URL based on HTML form with
GET action, curl_url_query_append suggested by Geoff would be much nicer.
Yes, you're right. I've taken a more generic approach that
Gisle Vanem via curl-library (Thu, 9 Aug 2018 01:42:18 +0200):
>Jan Ehrhardt wrote:
>
>> I ended up with a Windows port of lftp, launched from a bash script. Curl
>> sftp
>> did resume, but was terribly slow.
>
>I also just tested with 'curl sftp://' with the latest libssh2
>and the new
On Thu, 9 Aug 2018, Radu Hociung via curl-library wrote:
I have looked at every one of those reports, and I found no unresolved bugs.
Care to list the remaining unaddressed issues?
Off the top of my head, here are a few things I think are important to get
fixed to increase my
2018-08-09 10:48 GMT+02:00 Daniel Stenberg via curl-library:
> Say we want to append this to the query:
>
> char *append = "=44";
Well assuming we want to use the API to build URL based on HTML form
with GET action, curl_url_query_append suggested by Geoff would be
much nicer. In particular, I
On Thu, 2 Aug 2018, Geoff Beier wrote:
The setters would be important to us. I might be bikeshedding here, but the
ability to add to the query would be very nice. So something like
curl_url_query_append(urlp, "numitems", 3)
Returning to this, as I've polished the API a bit over the last few
On Thursday, August 9, 2018 9:08:17 AM CEST Radu Hociung via curl-library
wrote:
> On 06/08/2018 4:53 PM, Daniel Stenberg wrote:
> > Right, quite ironic I think that it wasn't a pipelining bug that
> > finally triggered me to put pipelining up for deprecation...
>
> You keep saying there are
> You keep saying there are
On Thu, 9 Aug 2018, Daniel Jeliński via curl-library wrote:
There's no way for curl/libssh2 to put more than 16 KB on the wire. It's
right there in libssh2 docs [1]: [libssh2_sftp_write] will return a positive
number as soon as the first packet is acknowledged from the server.
That's
2018-08-09 1:42 GMT+02:00 Gisle Vanem via curl-library:
> I also just tested with 'curl sftp://' with the latest libssh2
> and the new 'SIO_IDEAL_SEND_BACKLOG_QUERY' option. 'sftp://' is
> still 6 times slower than ftp against the same server in Denmark.
>
> 33.153s vs 5.4s for a 10 MByte file.
20 matches