Re: TLS session ID re-use broken in 7.77.0
Simple repro:

    curl -vI --http1.1 https://example.com/[1-3] -H "Connection: close"

The output of the old curl version contains "* SSL re-using session ID"; the output of 7.77.0 does not. Wireshark confirms that the old version sent a PSK in the ClientHello and the new version did not. curl 7.77.0 downloaded here: https://curl.se/windows/dl-7.77.0_2/curl-7.77.0_2-win64-mingw.zip

Regards,
Daniel

--- Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library Etiquette: https://curl.se/mail/etiquette.html
Re: Considering a version 8 at some point...
Hello,

Would it be a good time to start a stable (long-term support) version? As in: version 7 would still get bug fixes, but no new features, and would be maintained until version 9 (or 10) goes out.

Regards,
Daniel
Re: a HTTP/2 window sizing dilemma!
Well then, I'd vote in favor of not allowing pause when CURLMOPT_PIPELINING is set to anything other than CURLPIPE_NOTHING. I'm not a fan of changes that may negatively impact performance, as you may guess. And I never used pause anyway.

Regards,
Daniel
Re: a HTTP/2 window sizing dilemma!
Sounds like the problem HPN-SSH tried to solve: https://www.psc.edu/index.php/hpn-ssh/638

I like their approach, but I'm not sure how they got hold of the current TCP receive window size; the only value I could find was the receive buffer size, which is not necessarily relevant here.

If you decide to go with a static value, the default maximum TCP window size on most machines is 32 MB or less, and that is roughly enough to saturate a 1 Gbit link on the longest wired network paths (300 ms RTT, according to https://wondernetwork.com/pings).

Other than that, I'd love to know who is using curl_easy_pause and how; as far as I can tell, handling pause is only a problem on multiplexed connections, where we can't let the data rest in system buffers. Correct?

Regards,
Daniel

On Fri, Feb 21, 2020 at 08:28 Daniel Stenberg via curl-library wrote:
>
> On Thu, 20 Feb 2020, Daniel Stenberg via curl-library wrote:
>
> > But even so, the buffer size might very well be set to smaller sizes than
> > you'd want the HTTP/2 window size to be. Can we avoid a new option for
> > window size without having users suffer?
>
> Jay brought the suggestion [1] that we could just have it set fixed to (the
> much more sensible) 1MB as a middle ground - and I like the simplicity of
> that...
>
> [1] https://github.com/curl/curl/issues/4939#issuecomment-589383895
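[Editor's note] The window sizing above is just the bandwidth-delay product (BDP): the number of bytes that must be in flight to keep a link busy is bandwidth times RTT. A quick back-of-the-envelope check with the numbers from this thread (a sketch, not part of any patch):

```c
#include <assert.h>

/* Bandwidth-delay product: bytes in flight needed to keep a link busy.
   Integer math to avoid rounding surprises. */
static long long bdp_bytes(long long bits_per_sec, long long rtt_ms)
{
    return bits_per_sec * rtt_ms / 8000;
}
```

At exactly 1 Gbit/s and 300 ms RTT the BDP works out to ~37.5 MB, so a 32 MB window comes close to, but slightly under, full saturation.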
Re: FTP error 426 not handled?
On Tue, Feb 18, 2020 at 15:43 Christian Schmitz via curl-library wrote:
> First case with error 426:

What curl version? Quick googling reveals that there was a bug [1] in FTP handling, which was fixed in 7.45.

> Second log with no response:
>
> We are completely uploaded and fine
> Remembering we are in dir "files/"
> TLSv1.3 (OUT), TLS alert, close notify (256):
> FTP response timeout
> control connection looks dead
> Closing connection 0
> TLSv1.3 (OUT), TLS alert, close notify (256):
>
> Has someone seen this?

This is a recurring theme; I've seen reports from 2009 [2] regarding this issue.

> Any idea how to fix it?

Well, there are two possible explanations for this. One is that the connection is really dead, in which case there is nothing curl can do. The other is that curl simply doesn't wait long enough; we currently do not wait for the server to acknowledge that it received all data on the data connection, we only wait on the control connection.

Can you provide steps to reproduce this issue?

Regards,
Daniel

[1] https://github.com/curl/curl/issues/405
[2] https://curl.haxx.se/mail/archive-2009-08/0057.html
Re: SSL session ID reuse - clarification needed
Thanks guys! That's exactly what I need.
SSL session ID reuse - clarification needed
Hi all,

I see that libcurl already supports an SSL session ID cache, unless CURLOPT_SSL_SESSIONID_CACHE is cleared. However, I'm having a hard time finding information about the scope of session ID reuse:

- Are session IDs reused only within an easy handle, or globally for all handles within the application?
- Does libcurl keep a mapping between host names and session IDs? As far as I can tell, OpenSSL does not.

Thanks,
Daniel
Re: How to do optimal FTP upload for multiple files?
On Fri, Jul 19, 2019 at 00:14 Daniel Stenberg wrote:
>
> On Thu, 18 Jul 2019, Daniel Jeliński via curl-library wrote:
>
> > As for the connection timeout, it appears to be a well known problem with
> > FTP on slow connections with oversized buffers. I just found a 10 year old
> > message describing what looks like the same problem:
> > https://curl.haxx.se/mail/archive-2009-08/0060.html
>
> I believe that's a misunderstanding.
>
> That's a known problem with long-going FTP transfers - totally independent of
> buffer size. Since FTP uses two connections, sometimes when a transfer takes
> more than a certain time, a network equipment like a NAT or a firewall closes
> the control connection due to inactivity before the transfer is done. But that
> doesn't seem to be what Taras has described here.
>
> Since libcurl uses non-blocking sockets internally I can't see any reason why
> a larger upload buffer would cause a greater risk for any timeout. What am I
> missing?

Here's what we found in the logs:

18:54:41.002 T#12216 Connectivity::my_trace - "== Info: We are completely uploaded and fine"
18:54:41.002 T#12216 Connectivity::my_trace - "== Info: Remembering we are in dir \"\""
18:54:51.012 T#12216 Connectivity::my_trace - "== Info: FTP response timeout"
18:54:51.012 T#12216 Connectivity::my_trace - "== Info: control connection looks dead"
18:54:51.012 T#12216 Connectivity::my_trace - "== Info: Closing connection 0"

The problem is that the first line above ("completely uploaded and fine") is logged when the OS accepts the last application buffer into the OS buffer. And Windows accepts buffers whole - send() never returns a partial result, it's either all or nothing. So we log that we are finished while we still have 1 MB outstanding on the data connection. Curl's FTP code expects a response on the control connection within 10 seconds after it sends the last data buffer, and then declares the connection dead.

While we could probably modify that timeout, we have no way to tell how much time is enough. Does that make sense?
Re: How to do optimal FTP upload for multiple files?
7.65 will be fine. Actually the first version with that feature was 7.61.1, but any version since then should be good.
Re: How to do optimal FTP upload for multiple files?
Taras,

Thanks for pointing that out. The function looks good. Buffer autotuning was only introduced in curl 7.61, so an app using 7.57 will use the default (slow) buffer sizes. You shouldn't need to set CURLOPT_UPLOAD_BUFFERSIZE to get good upload speeds on 7.65.1.

As for the connection timeout, it appears to be a well-known problem with FTP on slow connections with oversized buffers. I just found a 10-year-old message describing what looks like the same problem: https://curl.haxx.se/mail/archive-2009-08/0060.html

I checked your upload_failure log, specifically the connection logged under T#12216. This connection appears to have ~100 ms RTT and a throughput of ~5 KB per RTT (50 KB/s), for which even the default 16 KB buffer looks like overkill. In that environment, sending the last ~700 KB buffer would take about 14 seconds; curl timed out after 10. So yes, this problem was indeed caused by an oversized CURLOPT_UPLOAD_BUFFERSIZE.

Back to the slowness topic: can you reproduce the slow uploads using a curl binary version 7.61 or newer?

Regards,
Daniel
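[Editor's note] The timeout arithmetic above is simple to make explicit: the time to drain the outstanding buffer is its size divided by the throughput, and if that exceeds curl's 10-second FTP response timeout the transfer is declared dead. A sketch using the approximate numbers from this case:

```c
#include <assert.h>

/* Seconds needed to drain `outstanding` bytes at `bytes_per_sec`. */
static long drain_seconds(long outstanding, long bytes_per_sec)
{
    return outstanding / bytes_per_sec;
}

/* If draining the last buffer takes longer than the FTP response
   timeout, the control connection is declared dead. */
static int will_time_out(long outstanding, long bytes_per_sec, long timeout_sec)
{
    return drain_seconds(outstanding, bytes_per_sec) > timeout_sec;
}
```

With a ~700 KB buffer at ~50 KB/s, draining takes 14 seconds, comfortably past the 10-second limit; a 16 KB buffer drains in under a second.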
Re: How to do optimal FTP upload for multiple files?
> Neither users nor I run the app in VirtualBox. This is an ordinary desktop
> application being run on desktop operating systems running on "bare metal".

I see. What's the performance of curl on the same connection?

I took a quick look at your code and noticed that you use CURLOPT_READDATA without CURLOPT_READFUNCTION; this is not a reliable combination on Windows, where mixing compilers and standard libraries is pretty common. It might be responsible for the timeouts you saw.

Regards,
Daniel
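[Editor's note] For reference, a minimal read callback of the shape libcurl expects. Setting CURLOPT_READFUNCTION explicitly (rather than letting libcurl fread() from the CURLOPT_READDATA pointer itself) keeps all FILE* access inside the application's own C runtime; the curl_easy_setopt calls are shown as comments since this sketch stands alone:

```c
#include <stdio.h>

/* Matches libcurl's read callback signature. Using an explicit callback
   avoids libcurl calling fread() internally on a FILE* that may belong
   to a different C runtime - a real hazard on Windows. */
static size_t read_cb(char *buffer, size_t size, size_t nitems, void *userdata)
{
    FILE *f = (FILE *)userdata;
    return fread(buffer, size, nitems, f);
}

/* Usage with an easy handle (sketch):
 *   curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
 *   curl_easy_setopt(curl, CURLOPT_READDATA, (void *)file);
 */
```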
Re: How to do optimal FTP upload for multiple files?
> Slowness was reported by a few users on Windows 10 x64. Slowness (and, in
> other cases, timeouts) were reported against a couple of totally different
> FTP servers around the globe. At least 5 different (from my users' logs)
> servers run by different companies, so unfortunately there's no way to blame
> them here. Only my use of libcurl.

Hi Taras,

A long time ago, when I was tuning curl behavior on Windows, we found that Windows running on VirtualBox with the network adapter mode set to NAT (the default) does not provide us with the optimal buffer size, and upload speeds suffer because of that [1]. Back then, switching VirtualBox to bridged adapter mode allowed curl to work at the expected speeds. This problem only happens for programs using non-blocking sockets, which may explain why other programs manage to run fast.

Let me know if that helps.
Daniel

[1] https://curl.haxx.se/mail/lib-2018-08/0073.html
Re: libssh2 optimization [was: Re: Windows users! Help us test upload performance tuning?]
On Sun, Sep 2, 2018 at 13:33 Jan Ehrhardt via curl-library wrote:
>
> Do you have a compiled version somewhere?

I'm hacking on this on Linux; I don't have a proper testing environment on my Windows machine.

> I tried to build my own with the 3 patches:
>
> 1. winsock
> 2. uploadbuffer 512 KB
> 3. sftp writeback in libssh2
>
> Disappointing results for a 274MB sftp upload against a remote CoreFTP
> server:
> - bash / ssh / lftp: 68 seconds
> - openssh portable sftp: 50 seconds
> - curl triple patched: 734 seconds
> ping of the remote machine: 3-4 ms.

That's like 5 MB/s best case, 300 KB/s worst case; what's your limiting factor? Network?

Daniel
Re: libssh2 optimization [was: Re: Windows users! Help us test upload performance tuning?]
On Sun, Aug 26, 2018 at 03:36 Jan Ehrhardt via curl-library wrote:
> Do you have any stats about the performance improvement?

Test setup: I am running curl against OpenSSH 7.2 as shipped with Ubuntu. The server runs on the same machine as the client. I am uploading a 1 GB file to /dev/null on the server. My laptop has a 2nd-gen i3 CPU without hardware AES support. I'm tweaking network latency using tc. Curl speed was taken from the curl-reported average when the transfer took less than a minute; for slower transfers I took a representative value from the momentary upload speed.

Results with a 16 kB curl upload buffer:
- no delay added: original 12 MB/s, patched 33 MB/s
- added 20 ms: original 390 kB/s, patched 26.6 MB/s
- added 100 ms: original 81 kB/s, patched 9700 kB/s

Results with a 64 kB curl upload buffer:
- no delay added: original 27 MB/s, patched 35 MB/s
- added 20 ms: original 1400 kB/s, patched 30 MB/s
- added 100 ms: original 310 kB/s, patched 9900 kB/s

The patched version is CPU-bound at lower latencies; when the latency goes higher, the SSH window becomes the limiting factor. I read that HPN-SSH should do better on high-latency links; I didn't try it out.

Separately, I have a work-in-progress patch that improves the CPU usage; I was able to reach 80 MB/s transfers with that patch applied. You can see it here: https://github.com/djelinski/libssh2/commit/c321b5d3a0ed964b291c710179dde7385e514ef7

It's not ready for inclusion; it requires cleanup, and it breaks SSL backends other than OpenSSL with AES-CTR support. If you have OpenSSL with AES support, it might be worth a try.
libssh2 optimization [was: Re: Windows users! Help us test upload performance tuning?]
On Sat, Aug 11, 2018 at 01:05 Daniel Stenberg wrote:
> It would require that libssh2 provides such an API, which it currently doesn't
> (and I don't know anyone working on it).

Sent a PR to libssh2 for that: https://github.com/libssh2/libssh2/pull/264

> I still haven't checked what libssh provides or how it performs in comparison.

Just checked; it looks about the same here.
Re: Using A Different Socket For Requests
On Wed, Aug 22, 2018, Isaiah Banks via curl-library wrote:
> What I'd like to do is create a custom socket for all curl requests to go
> through within a web application. I'm creating this socket within a Python
> application but would like an app written in PHP to send requests through it.

I'm not sure I understand your request correctly. Do you want to send requests from curl in PHP to some remote server, but also capture all data going through in your Python application?

If you just want to capture traffic, check out CURLOPT_DEBUGFUNCTION, Fiddler or Wireshark. If you indeed want to have all traffic pass through your Python application, CURLOPT_PROXY is probably what you're after.
Re: Windows users! Help us test upload performance tuning?
On Sat, Aug 18, 2018, Jan Ehrhardt via curl-library wrote:
> We had a real oops when we tested the same 178 MB over plain FTP:
>
> curl x64 512KB upload buffer FTP: 16 seconds
> https upload php uploadprogress : 489 seconds

I don't believe encryption is to blame here, at least not on the curl side. I was able to upload at 40 MB/s over https on plain 7.60 with a 4 MB socket buffer.
Re: Windows users! Help us test upload performance tuning?
On Fri, Aug 10, 2018 at 23:08 Daniel Stenberg wrote:
> [...] libssh2 could offer a better API that's more suited to send (and
> receive) SFTP data.

I like that; if we had the sftp_write function acknowledge data as soon as it is put in the socket buffer, we could get much faster transfers. In order to avoid errors, we would need to wait for all outstanding acks on file close. We would lose the opportunity to retry errors without rewinding, but according to [1] the only error we would want to retry is a lost connection, and other protocols already rewind in that case. What do you think?

[1] https://winscp.net/eng/docs/sftp_codes
Re: Windows users! Help us test upload performance tuning?
Ok, so let's put Linux to the test.

0. Patch curl: #define UPLOAD_BUFSIZE (1<<19)

1. Create a large file. 1 TB looks good:
dd if=/dev/zero of=testfile bs=1 count=0 seek=1T
The file is sparse, so disk operations won't block us.

2. Upload to /dev/null:
curl -u user:pass -k sftp://127.0.0.1/dev/null -T testfile
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 1024G    0     0    0  576M      0  34.5M  8:26:28  0:00:16  8:26:12 31.6M^C
Interrupted after a few seconds. 30-40 MB/s, a decent speed. curl uses 100% CPU, sshd uses 60%.

3. Add some latency:
sudo tc qdisc add dev lo root netem delay 40ms
(note: this latency is only half the RTT, so in this case RTT = 80 ms)

4. Repeat the test:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 1024G    0     0    0  608M      0  14.8M 19:36:56  0:00:40 19:36:16 15.1M^C
Very stable speed, 15.1-15.3 MB/s. curl uses 60% CPU, sshd uses 30%.

4a. Optionally inspect traffic:
sudo tcpdump -i lo -w dump tcp port 22
View the results:
tcpdump -r dump | less
(note: tcpdump doesn't play well with emulated delays, so reading the dump is tricky)

5. Clean up:
sudo tc qdisc del dev lo root netem

The TCP window is not a problem, and the SSH window doesn't seem to be the problem either. There is definitely still some room for improvement.
Re: a URL API ?
On 2018-08-09 14:15 GMT+02:00, Daniel Stenberg via curl-library wrote:
> ... or should it perhaps just skip the *first* '=' ?

I don't think any URL parsing library cares about '=' beyond the first one. Which is why '=' in a name may pose a problem, but in a value probably won't. I'd skip all of them.
Re: a URL API ?
On 2018-08-09 10:48 GMT+02:00, Daniel Stenberg via curl-library wrote:
> Say we want to append this to the query:
>
> char *append = "=44";

Well, assuming we want to use the API to build a URL based on an HTML form with a GET action, the curl_url_query_append suggested by Geoff would be much nicer. In particular, I would expect the API to:

- figure out if it needs to add '&' or '?'
- figure out if it needs to URL-encode the parameter name or value (e.g. when setting "company"="AT&T", we need to escape the ampersand)
- do the appending / memory allocation on its own

What do you think?
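[Editor's note] To make the expected semantics concrete, here is a hypothetical sketch of the proposed append behavior - curl_url_query_append did not exist at the time of writing, and the function and buffer handling below are illustration only, not the real API:

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Append s to dst, percent-encoding everything outside the
   RFC 3986 "unreserved" set. Illustration only. */
static void encode_into(char *dst, size_t cap, const char *s)
{
    size_t n = strlen(dst);
    for (; *s && n + 4 < cap; s++) {
        unsigned char c = (unsigned char)*s;
        if (isalnum(c) || c == '-' || c == '.' || c == '_' || c == '~')
            dst[n++] = (char)c;
        else
            n += (size_t)sprintf(dst + n, "%%%02X", c);
    }
    dst[n] = '\0';
}

/* Add "?" or "&" as needed, then name=value with both sides encoded. */
static void query_append(char *url, size_t cap, const char *name, const char *value)
{
    strncat(url, strchr(url, '?') ? "&" : "?", cap - strlen(url) - 1);
    encode_into(url, cap, name);
    strncat(url, "=", cap - strlen(url) - 1);
    encode_into(url, cap, value);
}
```

With this shape, appending "company"="AT&T" to https://example.com/ yields https://example.com/?company=AT%26T, and a second append correctly switches from '?' to '&'.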
Re: Windows users! Help us test upload performance tuning?
On 2018-08-09 1:42 GMT+02:00, Gisle Vanem via curl-library wrote:
> I also just tested with 'curl sftp://' with the latest libssh2
> and the new 'SIO_IDEAL_SEND_BACKLOG_QUERY' option. 'sftp://' is
> still 6 times slower than ftp against the same server in Denmark.
>
> 33.153s vs 5.4s for a 10 MByte file.

There's no way for curl/libssh2 to put more than 16 KB on the wire. It's right there in the libssh2 docs [1]:

  [libssh2_sftp_write] will return a positive number as soon as the
  first packet is acknowledged from the server.

Since SFTP packets are up to 32 KB and curl is using a 16 KB send buffer, the buffer always fits in one packet and the function always waits for an acknowledgement before returning. So your transfer rate will never exceed 16 KB per ping.

It seems relatively easy to change the procedure to get rid of this write-through semantics. I'm not sure if the project is still actively maintained though; its mailing list [2] looks very quiet.

I didn't check curl with the libssh backend; it may be worth a try.

[1] https://www.libssh2.org/libssh2_sftp_write.html
[2] https://www.libssh2.org/mail.cgi
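[Editor's note] The resulting throughput ceiling is easy to compute: with at most one 16 KB write in flight per round trip, the rate is window / RTT. A sketch (the RTT values below are illustrative assumptions, not measured from Gisle's link):

```c
#include <assert.h>

/* Max throughput in bytes/sec when only `window_bytes` can be in
   flight per round trip (write-through SFTP behavior). */
static long max_bytes_per_sec(long window_bytes, long rtt_ms)
{
    return window_bytes * 1000 / rtt_ms;
}
```

At an assumed 50 ms RTT, 16 KB per round trip caps out around 320 KB/s; a 10 MB file then takes roughly half a minute, in the same ballpark as the 33 s figure quoted above.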
Re: Include CA Bundle at Build Time on Windows
On 2018-08-07 20:16 GMT+02:00, Dillon Korman via curl-library wrote:
> How do you specify the location of a CA bundle at build time on Windows?

Windows builds use Schannel (the Windows SSL implementation) by default; with that you don't need a CA bundle. Are you building with OpenSSL?
Re: Windows users! Help us test upload performance tuning?
On 2018-08-05 10:01 GMT+02:00, Andy Pont wrote:
> VirtualBox 5.0.16 was released in March 2016 and so is two years out of date
> and is unsupported by Oracle. The current version is 5.2.16. If possible
> you should try upgrading and seeing if you get the same results.

Ok, so I installed Windows 10 on VirtualBox 5.2.16 on my machine and I can confirm Jan's findings: in NAT mode the ideal backlog stays flat at 64 KB, and the patch provides no benefits. Once I switched to "Bridged adapter", I got normal results (as in, similar to what other people reported):

Microsoft Windows [Version 10.0.17134.112]
generating test file...
running vanilla...
start:0.687000 total:6.062000
running patched...
start:0.625000 total:2.407000

Looks like a limitation of VirtualBox's NAT mode.

Regards,
Daniel
Re: Windows users! Help us test upload performance tuning?
On 2018-08-04 15:55 GMT+02:00, Jan Ehrhardt wrote:
> Virtualbox 5.0.16. Network adapter screenshot here:
> https://phpdev.toolsforresearch.com/win7x64.png

Thanks. Do you happen to limit the allowed bandwidth of these VMs? VirtualBox allows this, as described here: https://stackoverflow.com/a/8124491/7707617

Your results for Windows 10 on these VMs are almost the same as for older Windows versions, while your non-VM test reports roughly double the speed on Windows 10 compared to older systems. This leads me to suspect that something else might be limiting your speed there.
Re: Windows users! Help us test upload performance tuning?
On 2018-08-04 18:14 GMT+02:00, Jan Ehrhardt wrote:
> curl was consistently a little bit faster than bash/lftp, but that may be
> related to the sftp encryption (curl ran plain ftp, port 21).
> curl vanilla and curl patched did not seem to differ. Sometimes patched was
> faster than vanilla, sometimes the other way around.
>
> @Daniel Jelinski: could you compile a version with sftp support?

I haven't figured out yet how to build libssh2, and I don't need it at the moment.

Out of curiosity, I ran some tests of SFTP using the curl binaries by Viktor Szakats [1] and some ancient CopSSH server. Uploads to localhost were capped at ~300 KB/s. WinSCP was able to upload to the same server at a rate of 4 MB/s, maxing out one entire CPU in the process (the server used about 15% of another one), so curl seems a little unimpressive here.

On Linux, curl uploaded to localhost (OpenSSH + sftp) at a rate of 15 MB/s, using 60% of a single CPU (the server used ~30% of another one). I tried both the libssh and libssh2 backends and didn't notice a difference. This result is much better, but there may still be some room for improvement.

Regards,
Daniel

[1] https://bintray.com/vszakats/generic/curl/
Re: Windows users! Help us test upload performance tuning?
On 2018-08-04 8:06 GMT+02:00, Daniel Jeliński wrote:
> On 2018-08-03 22:47 GMT+02:00, Daniel Stenberg wrote:
>> On Fri, 3 Aug 2018, Ray Satiro wrote:
>
> This is strange. I see Ray's mail in the archives [1] but not in my
> mailbox. On the other hand, I don't see Rickard Alcock's reply in the
> archives, but I have it in my mailbox. What gives?
>
> [1] https://curl.haxx.se/mail/lib-2018-08/

Found Ray's mail. Gmail sent it to spam, complaining about missing Yahoo authorization. I guess I need to check the spam folder more often...
Re: Windows users! Help us test upload performance tuning?
On 2018-08-03 22:47 GMT+02:00, Daniel Stenberg wrote:
> On Fri, 3 Aug 2018, Ray Satiro wrote:

This is strange. I see Ray's mail in the archives [1] but not in my mailbox. On the other hand, I don't see Rickard Alcock's reply in the archives, but I have it in my mailbox. What gives?

[1] https://curl.haxx.se/mail/lib-2018-08/
Re: Windows users! Help us test upload performance tuning?
On 2018-08-03 18:11 GMT+02:00, Daniel Stenberg wrote:
> https://docs.google.com/spreadsheets/d/1xAntVPAggz9gvx7TI_vUF6G6titRAX5tBOZv0JCXkNY/edit?usp=sharing

Thank you everyone for providing results, and thank you Daniel for keeping tabs on them. Most of the results are in line with my expectations:

- Results from vanilla under Windows 10 are better than on older systems, which is in line with what Christian reported about the larger default send buffer. It's nice to see that the patch still improves the outcome there.
- Results on XP and 2003 are almost the same with and without the patch, which was to be expected, as the ideal buffer query was only implemented in Vista SP1.

I'm a little concerned about Gisle's FTP results and Jan's results on VirtualBox. I don't think they should block this patch, but they may justify some further enhancements.

Thanks again!
Daniel
Re: Windows users! Help us test upload performance tuning?
On 2018-08-02 21:33 GMT+02:00, Gisle Vanem wrote:
> Testing with the same file via ftp to my server in Denmark (11 hops) shows a
> bit worse result:
> running vanilla...
> start:0,406000 total:1,219000
> running patched...
> start:0,359000 total:1,531000
>
> Seems ftp is maxing my line here. Maybe little to do with Winsock tuning.

FTP is a tricky beast. If the results are repeatable, I'd love to take another look.

> Besides the 'test' data-file gives the wrong picture under a VPN-connection.
> With PPTP/L2TP compression

I believe VPN operates on a lower level than TCP, so send buffer usage will not be affected by VPN compression. But you could see speeds faster than what your ISP advertises, which is kind of nice.

Thanks for your testing!
Daniel
Re: Windows users! Help us test upload performance tuning?
On 2018-08-03 4:07 GMT+02:00, Jan Ehrhardt wrote:
>> I also have a Windows 8.1 64-bit running in a VirtualBox on the Windows
>> 2008 R2 server. No speed improvement. Most of the time the patched
>> version is a little bit faster, but sometimes even slower. Typical
>> result below.
>
> The same happens with other Windows versions in a VirtualBox (on the
> Win2k8 server).

That one is disturbing. I wonder if this happens because the ideal send buffer query fails under VirtualBox, or if it returns bogus values (like 8 KiB). Which version of VirtualBox is that? How did you set up the network adapter? I'd like to try and reproduce this here.

Thank you,
Daniel
Re: Slow Windows uploads (with patch)
On 2018-07-30 18:09 GMT+02:00, Daniel Stenberg wrote:
> The tool could try upload with and without "the patch" to one/two/three
> places and report the results with the exact Windows version used. We could
> ask curl users to report *their* results and we collect the results in a
> sheet somewhere.

How about a curl build? I built curl binaries with and without the patch, which users could use for testing on their machines. These are available for download here: https://www.dropbox.com/s/5i2a2bth2ea28ys/curl.zip

Running the test is as simple as unpacking and launching testcurl.bat, with the possible extra step of installing the MSVC 2010 redistributable.

My results from a few machines:

1. Windows 2008R2, Europe:
Microsoft Windows [Version 6.1.7601]
generating test file...
running vanilla...
start:0,592000 total:11,263000
running patched...
start:0,593000 total:3,432000

2. Windows 2012R2, USA:
Microsoft Windows [Version 6.3.9600]
generating test file...
running vanilla...
start:0.296000 total:2.921000
running patched...
start:0.297000 total:1.297000

3. Windows 2003, USA:
Microsoft Windows [Version 5.2.3790]
generating test file...
running vanilla...
start:0.281000 total:5.031000
running patched...
start:0.281000 total:5.00

As expected, there's almost no difference on 2003. In both other cases tested, the difference was more than 100%.

Binaries were built with 32-bit VS2010, using the command line:
# nmake /f Makefile.vc mode=dll GEN_PDB=yes

They will likely require the MSVC redistributable, available here: https://www.microsoft.com/en-us/download/details.aspx?id=

I tried linking MSVC statically (RTLIBCFG=static), but the resulting binary kept crashing on -w.

For those who don't feel comfortable running binaries downloaded from the net, the sources are available on git (branches: master & Jelinski/buffer-tuning-under-Windows), and here's the testcurl script:

@echo off
ver
echo generating test file...
echo test >test
for /l %%i in (1,1,17) do (
type test>>test
)
echo running vanilla...
pushd curl_vanilla
curl -w"start:%%{time_starttransfer} total:%%{time_total}\n" http://uploadjp.openspeedtest.com/upload -T ..\test
popd
echo running patched...
pushd curl_patched
curl -w"start:%%{time_starttransfer} total:%%{time_total}\n" http://uploadjp.openspeedtest.com/upload -T ..\test
popd
del test
pause
Re: Slow Windows uploads (with patch)
Hi Christian,

Will you be running any more tests? I'd love to see the patch accepted, but right now it seems that it's only helping me, and that might not be enough to get it into the next release.

Regards,
Daniel
Re: Slow Windows uploads (with patch)
On 2018-07-25 14:07 GMT+02:00, Christian Hägele wrote:
> For me the SO_SNDBUF was 64KiB from beginning on and
> worked out well for my test-case. For me the send-ahead (number of
> unacknowledged bytes on the wire) was also fine.
> In both cases the throughput was similar in my quick tests, but I did not
> test it with high bandwidth and high latency, yet.

Thanks for your testing. It would be nice to know if 64 KiB is the new default in Windows 10, or if the initial buffer size is somehow determined based on other connection properties. Please let me know what you find.

> I still don't think that the SO_SNDBUF is the limiting factor here. Maybe
> that's because in my case the SO_SNDBUF is 64KiB by default and this is
> enough. I am really surprised that this change give you a significant
> improvement in performance. The only thing I can think of is that the
> 16KiB CURL_MAX_WRITE_SIZE and the 8KiB SO_SNDBUF don't work together very
> well.

In my case SO_SNDBUF was the limiting factor; I was able to get 200x faster uploads just by increasing the buffer size to 4 MiB. But then, different Windows versions may have different characteristics here.

Regards,
Daniel
Re: Slow Windows uploads (with patch)
On 2018-07-24 13:47 GMT+02:00, Christian Hägele wrote:
> I think the correct solution would be to not set the socket-option SO_SNDBUF
> or SO_RCVBUF at all on Windows Vista and newer.

That's what curl has done since mid-2013, yet there have been at least 3 issues reporting slow uploads on Windows since then.

Could you please run an experiment of your own? If you decide to use my code, you need to pick a different URL; I used a server from a private network that you will not have access to. I'm very interested in your findings.

Regards,
Daniel
Re: Slow Windows uploads (with patch)
2018-07-20 15:37 GMT+02:00 Daniel Stenberg :
> On Fri, 20 Jul 2018, Daniel Jeliński wrote:
>> So in its current form the patch won't help MinGW. Other MinGW projects
>> worked around that problem by creating that #define in their own code
>
> I wouldn't mind doing that for the sake of increased performance for those
> who build curl with mingw.

I added a patch to the pull request, available here: https://github.com/curl/curl/files/2218943/0002-windows-define-constant-for-compilers-that-do-not-re.zip. Next time I'll fork, I promise :)

Thanks,
Daniel
Re: Slow Windows uploads (with patch)
2018-07-19 18:45 GMT+02:00 Daniel Stenberg :
> https://github.com/curl/curl/pull/2762

Thank you! I see that the test builds succeeded. What's next?

Meanwhile I went to check whether this will benefit MinGW users. I found that:
- MinGW headers do not define SIO_IDEAL_SEND_BACKLOG_QUERY
- MinGW policy [1] suggests that they may be unwilling to accept a patch adding that define, as its value is only available in MS headers and not in the documentation

So in its current form the patch won't help MinGW. Other MinGW projects worked around that problem by creating that #define in their own code [2]. Barring that, the next best option for MinGW users is to determine the optimal buffer size by trial and error, or just set it as large as possible. Should we put that in the documentation?

Thanks,
Daniel

[1] http://www.mingw.org/wiki/SubmitPatches
[2] https://trac.torproject.org/projects/tor/ticket/22798#comment:60
Slow Windows uploads (with patch)
Hello Curl hackers,

I've been working with the Curl library in C for some time, and I love it. Everything just works. Recently I started using Curl to upload files from Windows machines to the Amazon cloud using HTTP POST. The uploads don't perform well. They are limited by the send buffer used by Windows.

On Windows 2008R2 (and probably on more recent versions as well) the system only puts on the wire the number of bytes it has buffered. This is either what is set by SO_SNDBUF (default 8KB) or what was passed to send (CURL_MAX_WRITE_SIZE, 16KB), whichever is higher. In the case of CURL uploads, the system sends 16KB, then waits for an acknowledgement from the other end before asking the application for the next batch. This is readily visible in Wireshark traces.

Starting with Windows 7 / 2008R2, Windows implements send buffer autotuning. This is well described here: https://msdn.microsoft.com/en-us/library/windows/desktop/bb736549(v=vs.85).aspx. Theoretically, on these systems the send buffer should be automatically adjusted to optimize throughput. I ran a couple of experiments to confirm that, and found that the send buffer is only adjusted if the socket is blocking and the application buffer size is reasonably large (experiment1.c), and that the buffer stays at 8192 if the socket is nonblocking, regardless of the application buffer size (experiment2.c).

Since Curl is using nonblocking sockets, and switching to blocking would break existing functionality, an alternative approach would be to periodically run SIO_IDEAL_SEND_BACKLOG_QUERY and pass the result to SO_SNDBUF. This will work on Windows Vista SP1 / 2008 SP1 and newer, and fail silently on older Windows versions. I attached a proof-of-concept patch (autotuning.patch), which requires a reasonably recent SDK. With this patch in place I am no longer seeing upload pauses, and Windows is able to saturate the link even on high-bandwidth high-latency networks.
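For illustration, the core of that approach could look roughly like this. This is a sketch of the idea, not the attached autotuning.patch itself; error handling and how often to call it are left to the caller:

```c
/* Sketch: query Windows' ideal send backlog (ISB) for a connected
 * socket and feed it back into SO_SNDBUF. Intended to be called
 * periodically from the upload loop. Windows-only; the ioctl is
 * supported on Vista SP1 / 2008 SP1 and newer and fails (silently
 * here) on older versions. */
#include <winsock2.h>
#include <mswsock.h>   /* SIO_IDEAL_SEND_BACKLOG_QUERY */

static void adjust_send_buffer(SOCKET s)
{
    ULONG ideal = 0;
    DWORD bytes = 0;

    /* Ask the stack how much unacknowledged data it would ideally
     * keep in flight for this connection. */
    if (WSAIoctl(s, SIO_IDEAL_SEND_BACKLOG_QUERY, NULL, 0,
                 &ideal, sizeof(ideal), &bytes, NULL, NULL) == 0) {
        /* Grow the send buffer to match the ideal backlog. */
        setsockopt(s, SOL_SOCKET, SO_SNDBUF,
                   (const char *)&ideal, sizeof(ideal));
    }
}
```

The point is that this keeps the nonblocking socket model intact while still getting the benefit of the autotuning that Windows otherwise only applies to blocking sockets.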
Disclaimer: I only tested this patch on a 32bit build under Visual Studio 2010 (I had to #define _WIN32_WINNT 0x0601 in order to get access to SIO_IDEAL_SEND_BACKLOG_QUERY). The 64bit build currently fails on my machine (..\builds\libcurl-vc-x64-release-dll-ipv6-sspi-winssl-obj-lib/file.obj : fatal error LNK1112: module machine type 'X86' conflicts with target machine type 'x64'); I don't think this is related to my patch, but I'll keep digging.

Let me know if there is a chance for this patch to make its way into the next release.

Thanks,
Daniel

#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <winsock2.h>
#include <ws2tcpip.h>
#include <stdlib.h>
#include <stdio.h>

// Need to link with Ws2_32.lib, Mswsock.lib, and Advapi32.lib
#pragma comment (lib, "Ws2_32.lib")
#pragma comment (lib, "Mswsock.lib")
#pragma comment (lib, "AdvApi32.lib")

// 81920 is the default buffer size used by .NET 4.5.1;
// with buffers of size 16KB and less autotuning did not work
#define SEND_BUFFER_SIZE 81920

// Output with 80KB buffer:
/*
8192<-current
131072<-current
262144<-current
524288<-current
1048576<-current
2097152<-current
*/

// code taken from https://docs.microsoft.com/en-us/windows/desktop/winsock/complete-client-code
// adapted to showcase automatic buffer tuning
int main(int argc, char* argv[])
{
    static int prev = 0;
    int curval = 0;
    int curlen = sizeof(curval);
    char tmp[11];
    WSADATA wsaData;
    SOCKET ConnectSocket = INVALID_SOCKET;
    struct addrinfo *result = NULL, *ptr = NULL, hints;
    char sendbuf[SEND_BUFFER_SIZE] = "PUT /research_w/en/1 HTTP/1.1\nHost: 10.51.4.147:9200\nAccept: */*\nContent-Length: 185829363\n\n";
    int iResult;

    iResult = WSAStartup(MAKEWORD(2,2), &wsaData);
    if (iResult != 0) {
        printf("WSAStartup failed with error: %d\n", iResult);
        return 1;
    }

    ZeroMemory(&hints, sizeof(hints));
    hints.ai_family = AF_UNSPEC;
    hints.ai_socktype = SOCK_STREAM;
    hints.ai_protocol = IPPROTO_TCP;

    // Resolve the server address and port
    iResult = getaddrinfo("10.51.4.147", "9200", &hints, &result);
    if (iResult != 0) {
        printf("getaddrinfo failed with error: %d\n", iResult);
        WSACleanup();
        return 1;
    }

    // Attempt to connect to an address until one succeeds
    for (ptr = result; ptr != NULL; ptr = ptr->ai_next) {
        // Create a SOCKET for connecting to server
        ConnectSocket = socket(ptr->ai_family, ptr->ai_socktype, ptr->ai_protocol);
        if (ConnectSocket == INVALID_SOCKET) {
            printf("socket failed with error: %ld\n", WSAGetLastError());
            WSACleanup();
            return 1;
        }

        // Connect to server.
        iResult = connect(ConnectSocket, ptr->ai_addr, (int)ptr->ai_addrlen);
        if (iResult == SOCKET_ERROR) {
            closesocket(ConnectSocket);