Re: Retrieve all addresses mapped to specific host, not just one IP

2018-08-14 Thread Richard Gray via curl-library

myLC--- via curl-library wrote:

On Mon, 13 Aug 2018, Richard Gray wrote:

 > I'm confused about what it is you are trying to do with
 > the list of addresses?
...

> If you have a list of IPs/hostnames, this is necessary to
> identify duplicate entries (public VPNs or proxies, for
> instance).


OK, filtering - got it.


 > It's not clear to me why you are trying to get libcurl to
 > return that address list.
...


You still haven't indicated what kind of access(es) you are trying to perform 
with the potentially multiple addresses. Are you trying to just find the first 
one that works? Are you trying to actually access more than one of them? The 
latter case might be something like testing the various hosts behind a load 
leveler.




 > If you are on a modern system, you already have a way to
 > do this: getaddrinfo() or equivalent.

> That would imply doing it twice – libcurl would do it once
> and then you'd do the same afterwards.


No, you would do the resolves then tell libcurl to operate on any returned 
address(es) you are interested in.


If the host(s) don't have a problem with being accessed via a literal IP in 
the URL, just format URLs with literal IPs instead of host names.
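Richard's two-step suggestion (resolve once yourself, then hand libcurl a
literal-IP URL per address) could be sketched like this. The host name
"localhost" and the path "/path" are placeholders, and the example is
restricted to IPv4 to keep it short:

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>

int main(void)
{
    struct addrinfo hints, *res, *ai;
    char ip[INET6_ADDRSTRLEN];
    char url[256];

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;       /* IPv4 only, for a stable example */
    hints.ai_socktype = SOCK_STREAM;

    /* resolve once; "localhost" stands in for the real host name */
    if(getaddrinfo("localhost", NULL, &hints, &res) != 0)
        return 1;

    for(ai = res; ai; ai = ai->ai_next) {
        struct sockaddr_in *sin = (struct sockaddr_in *)ai->ai_addr;
        inet_ntop(AF_INET, &sin->sin_addr, ip, sizeof(ip));
        /* build a URL with the literal IP instead of the host name;
           each of these could then be given to a libcurl handle */
        snprintf(url, sizeof(url), "http://%s/path", ip);
        printf("%s\n", url);
    }
    freeaddrinfo(res);
    return 0;
}
```

Each printed URL would then be set with CURLOPT_URL on whichever handles
you want, in whatever order you prefer.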


If the host(s) need the actual host name you resolved against for TLS or other 
purposes, won't CURLOPT_CONNECT_TO do what you want, by telling libcurl not 
to resolve the host name but instead use the IP you supply?


With either of these options, the host is not redundantly resolved and you are 
in complete control of which IPs are ignored or accessed, whether they are 
accessed sequentially or in parallel, and so on. I think this might be what 
you are after. I don't see how an extension to get the full address list would 
help, because you'd still have to do something like the above for the rest of 
the addresses you were interested in anyway.


Cheers!
Rich


---
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette:   https://curl.haxx.se/mail/etiquette.html

Re: a URL API ?

2018-08-14 Thread Dan Fandrich via curl-library
On Tue, Aug 14, 2018 at 11:17:08AM +0200, Daniel Stenberg wrote:
> Aha... well even if this is so, the effects of this will at least be
> mitigated by the fact that libcurl will still canonicalize them even if it
> wouldn't be perfect.
> 
> I mean a user who wants to compare two URLs should make sure to canonicalize
> *both* of them before the comparison. Then such suble details such as the
> one mentioned above will actually not matter since the end results from both
> those URLs should be the same. Even if another library with more specific
> domain knowledge possibly would end up with a slightly different output.
> 
> Or am I wrong?

You're right in the case of comparing URLs, but if an app is canonicalizing
them for the purpose of displaying them to the user in a nice format, then the
result wouldn't be optimal, although it would still work fine.

Re: Windows users! Help us test upload performance tuning?

2018-08-14 Thread Daniel Stenberg via curl-library

On Tue, 14 Aug 2018, Jan Ehrhardt via curl-library wrote:

> Thanks for the stats. They indicate that my choice of 320 KB in the iOS app
> (using sftp) is quite good.


I would like us to...

1. Change to alloc-on-demand for this buffer. It is used for a few non-upload 
things, but it only needs to be this big for actual uploads, and most 
transfers are not uploads. That'll save (almost) 16KB for all download-only 
handles.


2. Enlarge the default upload buffer size to 64KB.

3. Add a CURLOPT_UPLOADBUFFERSIZE option that allows users to set their 
preferred size, from 16KB up to perhaps a few megabytes.


> I remembered reading something about this, but it doesn't seem related (and
> was resolved anyway). But maybe it gives a clue:
>
> https://github.com/icing/mod_h2/issues/130


Okay! I'm also quite sure we have some optimizations left to do on the libcurl 
side of the HTTP/2 transfers...


--

 / daniel.haxx.se

Re: Retrieve all addresses mapped to specific host, not just one IP

2018-08-14 Thread Gisle Vanem via curl-library

myLC--- wrote:


> Prioritization (which IPs libcurl should favor) might become
> an issue then.


"should favour" how? Based on what? That IPv6 is better/speedier
than IPv4, or that some addresses are best based on geo-location?
libcurl knows zero about this. It would be cool if it did, though.

In fact, IPv6 can be a lot slower than IPv4. My case right now
(with IPv6 over a '6to4' tunnel) is that a:
  curl -6 server-in-Oslo-Norway
goes via California! (3 times slower than with 'curl -4').

So much for this hyped-up IPv6 protocol.

--
--gv

Re: Windows users! Help us test upload performance tuning?

2018-08-14 Thread Jan Ehrhardt via curl-library
Daniel Stenberg via curl-library (Tue, 14 Aug 2018 12:02:46 +0200
(CEST)):
>  Size Seconds  Improvement
>
>  16 KB2.522-
>  64 KB1.281x 1.97
>  128 KB   1.095x 2.30
>  256 KB   0.938x 2.69
>  512 KB   0.860x 2.93

Thanks for the stats. They indicate that my choice of 320 KB in the iOS
app (using sftp) is quite good.

>(Amazingly enough, HTTP/1.1 being 2.33 times faster than HTTP/2 ...)

I remembered reading something about this, but it doesn't seem related
(and was resolved anyway). But maybe it gives a clue:

https://github.com/icing/mod_h2/issues/130
-- 
Jan


Re: Windows users! Help us test upload performance tuning?

2018-08-14 Thread Daniel Stenberg via curl-library

On Tue, 14 Aug 2018, Jan Ehrhardt via curl-library wrote:


> I did not test if there is a difference on *nix. Did you?


Here are the results of the tests I ran just now.

Using Linux kernel 4.17. Upload 4GB over plain HTTP to Apache 2.4.34 on 
localhost - so really 0 RTT.


I ran "time curl -sT 4GB localhost -o /dev/null".

The results clearly show that larger buffers help. The numbers below are 
average times over 4 consecutive runs with each buffer size (using the "real" 
time from the output).


 Size Seconds  Improvement

 16 KB2.522-
 64 KB1.281x 1.97
 128 KB   1.095x 2.30
 256 KB   0.938x 2.69
 512 KB   0.860x 2.93

--

Then I sent 500MB in a PUT to https://daniel.haxx.se, which is really close to 
me RTT-wise (average ping 0.931 ms). I have a 1000 Mbit connection to the 
Internet. This test also used HTTPS.


When using HTTP/2:

This showed no gain at all with a larger buffer; it actually got slightly 
worse. It also shows that HTTP/2 uploads need attention and improvements.


 Size Seconds  Improvement

 16KB 13.682   -
 64KB 14.488   x 0.94
 512KB14.306   x 0.96

When I instead did the same upload over HTTPS to the same host but forced 
HTTP/1.1, the speeds were all remarkably similar. 500MB in 5 seconds should be 
just about the maximum for 1000 Mbit...


 Size Seconds  Improvement

 16KB 5.872-
 64KB 5.838x 1
 512KB5.841x 1

(Amazingly enough, HTTP/1.1 being 2.33 times faster than HTTP/2 ...)

--

 / daniel.haxx.se

Re: Retrieve all addresses mapped to specific host, not just one IP

2018-08-14 Thread myLC--- via curl-library

On Mon, 13 Aug 2018, Richard Gray wrote:

> I'm confused about what it is you are trying to do with
> the list of addresses?
...

If you have a list of IPs/hostnames, this is necessary to
identify duplicate entries (public VPNs or proxies, for
instance).
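The duplicate filtering described above can be sketched without any libcurl
extension: resolve each list entry and check whether the address sets
intersect. A minimal IPv4-only sketch (the host names are placeholders; real
code would also cover IPv6):

```c
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>

/* return 1 if the two host names share at least one IPv4 address */
static int shares_address(const char *host1, const char *host2)
{
    struct addrinfo hints, *r1, *r2, *a, *b;
    int found = 0;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;

    if(getaddrinfo(host1, NULL, &hints, &r1) != 0)
        return 0;
    if(getaddrinfo(host2, NULL, &hints, &r2) != 0) {
        freeaddrinfo(r1);
        return 0;
    }
    /* compare every address from one list against the other */
    for(a = r1; a && !found; a = a->ai_next)
        for(b = r2; b && !found; b = b->ai_next)
            if(((struct sockaddr_in *)a->ai_addr)->sin_addr.s_addr ==
               ((struct sockaddr_in *)b->ai_addr)->sin_addr.s_addr)
                found = 1;
    freeaddrinfo(r1);
    freeaddrinfo(r2);
    return found;
}

int main(void)
{
    /* "localhost" and "127.0.0.1" stand in for two list entries */
    printf("%s\n", shares_address("localhost", "127.0.0.1")
                   ? "duplicate" : "distinct");
    return 0;
}
```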


> It's not clear to me why you are trying to get libcurl to
> return that address list.
...
> If you are on a modern system, you already have a way to
> do this: getaddrinfo() or equivalent.

That would imply doing it twice – libcurl would do it once
and then you'd do the same afterwards.


> I guess I'm wondering if it makes more sense for your
> application to get the list of addresses itself and then
> tell libcurl what to do with them.

Yes, but libcurl already does the same. Of course, you can
do it yourself and hand the list of IPs to libcurl.
Prioritization (which IPs libcurl should favor) might become
an issue then. That way, you'd end up having to mimic just
about everything libcurl normally does. Furthermore, you'd
have to devise a way to hand over that information in a
proper form, which would essentially be the same work in
reverse. It would also result in having the same code twice
in your (static) binaries.



Re: a URL API ?

2018-08-14 Thread Daniel Stenberg via curl-library

On Mon, 13 Aug 2018, Dan Fandrich via curl-library wrote:

>> I'm not sure I see the difference between these two approaches. Can you
>> show them with some example URLs?


> For example, + and ! are reserved characters in RFC 3986 but unreserved in
> RFC 2326 (RTSP), so a generic canonicalization might return
> rtsp://example.com/me%2byou%21 whereas an RTSP-specific canonicalization
> would return rtsp://example.com/me+you!  At least, that's my interpretation
> after a quick reading of the RFCs.


Aha... well even if this is so, the effects of this will at least be mitigated 
by the fact that libcurl will still canonicalize them even if it wouldn't be 
perfect.


I mean, a user who wants to compare two URLs should make sure to canonicalize 
*both* of them before the comparison. Then such subtle details as the one 
mentioned above will actually not matter, since the end results from both 
those URLs should be the same, even if another library with more specific 
domain knowledge would possibly end up with slightly different output.


Or am I wrong?

--

 / daniel.haxx.se

Re: Windows users! Help us test upload performance tuning?

2018-08-14 Thread Jan Ehrhardt via curl-library
Daniel Stenberg via curl-library (Tue, 14 Aug 2018 09:39:21 +0200
(CEST)):
>I think it's time we run some tests in an orderly fashion with different upload 
>buffer sizes and collect some numbers...

I did not test whether there is a difference on *nix. Did you? Anyway, I
agree that some testing has to be done, by more than 2 or
3 people...
-- 
Jan


Re: Windows users! Help us test upload performance tuning?

2018-08-14 Thread Daniel Stenberg via curl-library

On Tue, 14 Aug 2018, Jan Ehrhardt wrote:

> my tests indicated that increasing this value also influenced the FTP upload
> speeds. From 10 to 5 seconds on XP in Daniel Jelinski's testcurl uploads.
> And down to less than a second on Win 7 and Win 10.


Right. So yes, there's certainly a valid reason to consider and work on upping 
the upload buffer size for all protocols. I suppose we simply haven't done 
enough upload speed comparisons and tests recently...


If plain FTP uploads can go faster with a larger buffer, that's an indication 
that *all* TCP protocols can be made faster if given larger buffers. (At least 
on Windows.)


How much larger a buffer is sensible, if we're talking about changing the 
default size? The current buffer size has been used since basically day 1, and 
both the Internet and the devices that run curl have changed a bit since 
then...


It would probably still be valuable to allow applications to set the upload 
buffer size in a similar way to how they can set the download buffer size. Or 
should we even set the upload buffer to the same size as that one?


I think it's time we run some tests in an orderly fashion with different upload 
buffer sizes and collect some numbers...


--

 / daniel.haxx.se