Re: Can Curl round-robin IP addresses for successive connects?

2020-10-01 Thread Rainer Canavan via curl-library
On Thu, Oct 1, 2020 at 6:36 PM Jason Proctor via curl-library
 wrote:
>
> Dear Curl,
>
> Our application pulls resources from CloudFront and we noticed some
> significant bandwidth capping. Turns out that for maximum throughput,
> Amazon recommend requests be spread across the IP addresses returned
> by the DNS call.
>
> However from looking at Curl_connecthost() and related functions, it
> seems that Curl only round-robins through cached addresses when there
> is a connect error.

I think you should set up one curl handle for each address using
CURLOPT_RESOLVE and then re-use the handles round-robin, so that you
can benefit from connection re-use on TCP connections that are already
warmed up, as well as parallelism to multiple servers.

Rainer
---
Unsubscribe: https://cool.haxx.se/list/listinfo/curl-library
Etiquette:   https://curl.haxx.se/mail/etiquette.html

Re: CURLOPT_SSL_VERIFYPEER - multiple paths

2020-07-02 Thread Rainer Canavan via curl-library
[...]
> > investigation shows that the certificate has 2 paths, 1 of which is valid
> > and the other ends in a 'self signed cert'. How can I set up the curl lib
> > with 'VERIFYPEER' so that the connection succeeds if there is still 'a
> > valid path', despite one path having an error?
>
> This sounds like a TLS library problem.

It does indeed sound suspiciously similar to the problems in various TLS
libraries when the "AddTrust External Root CA" expired on May 30th, where
the libraries would fail to construct an alternate, valid trust chain.

rainer

Re: Rare crashes when working with HTTP/2

2019-10-24 Thread Rainer Canavan via curl-library
>> I'm getting rare crashes inside curl but all of them have the same stack 
>> trace. Curl is 7.66.  libnghttp2 is 1.31 (from epel on CentOS 7).

[...]

> Looked through the libnghttp2 release notes and found that the presumed fix
> was done in libnghttp2 version 1.32.1. Just for information for anyone who
> discovers the same problem.
>
> Question to devs: should HTTP/2 be disabled on buggy versions of libnghttp2?
> I guess it can be done at runtime, since curl already gets the libnghttp2
> version info.

I'd say no. The fix may get backported by the EPEL maintainers, but
the version number of libnghttp2 will not change in the process,
therefore curl cannot tell from the outside if it is broken. I'm not
sure what the current process is to get bugfixes into EPEL, but you
should pester the relevant Fedora, CentOS or RedHat maintainers to
backport that fix, if you haven't done so already.

rainer

Re: Curl and SSL in an IBM's OnDemand environment

2019-09-11 Thread Rainer Canavan via curl-library
On Wed, Sep 11, 2019 at 11:22 AM Michael Rellstab via curl-library
 wrote:
[...]
> Do you mean, OnDemand itself has libcurl linked (statically?) into its
> binaries?

If it were statically linked, the symbols of that curl lib would not be visible
to your module when it is loaded.

> And my code uses this binary instead of the libcurl that is
> installed on the Linux?

Your module probably loads the curl library that you have linked it
against, but the symbols (functions) from both the libcurl bundled
with OnDemand and your libcurl are used to resolve the references
in your module. I'm not sure how the runtime linker selects a
symbol if there are multiple candidates.

You could try linking your module with a static libcurl, or link
your libcurl with symbol versioning (see e.g.
https://www.gnu.org/software/gnulib/manual/html_node/LD-Version-Scripts.html
https://www.bottomupcs.com/libraries_and_the_linker.xhtml) and see
to it that your module requires those specific versions of the curl
functions. It may also be necessary to link your libcurl to use those specific
versions to ensure that internal function calls from your libcurl don't
end up using the OnDemand libcurl.
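A GNU ld version script for that could look roughly like the fragment
below (the file name and version node name are made up for the example;
this is a sketch of the idea, not a tested recipe). Giving your private
libcurl build its own symbol version makes the runtime linker bind your
module's curl references to it rather than to the bundled copy:

```
/* curl_private.map -- hypothetical version script for your own
   libcurl build; export the curl API under a private version node
   and hide everything else */
CURL_PRIVATE_1 {
  global:
    curl_*;
  local:
    *;
};
```

You would then pass something like
`-Wl,--version-script=curl_private.map` when linking that libcurl, and
link your module against the result.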

Rainer

Re: Curl and SSL in an IBM's OnDemand environment

2019-09-10 Thread Rainer Canavan via curl-library
[...]
> apparently the curl you're using is compiled with support for dynamic
> ssl backends.
> Try selecting NSS with https://curl.haxx.se/libcurl/c/curl_global_sslset.html

I should have checked before writing. The libcurl that ships with CentOS does
_not_ have support for curl_global_sslset().

rainer

Re: Curl and SSL in an IBM's OnDemand environment

2019-09-10 Thread Rainer Canavan via curl-library
On Tue, Sep 10, 2019 at 5:19 PM Michael Rellstab via curl-library
 wrote:
>
> Hi there!
>
> For several days I've been trying to get my project to work, but without
> any success.
> A short overview:
> I have to implement a UserExit (callback routine) for the IBM's OnDemand 
> Software. Inside this UserExit I'm using CURL (linked as shared library).
> This works perfectly as long as I don't use an SSL secured communication. As 
> soon as I activate SSL (TLS1.2), there is no communication anymore.
>
> I'm running on a CentOS with the NSS SSL framework compiled into CURL. When I 
> use my UserExit without OnDemand (using the same source code, but executed by 
> my main function),
> CURL runs together with NSS without any problems. As soon as my code runs in 
> the context of OnDemand, SSL is not working anymore. I expect this has to do 
> with IBM's OnDemand, because they are using their GSKit as SSL framework.
>
> As you can see on my log output:
>
> 2019-09-10 15:11:07 DEBUGCURL version:7.29.0
[...]
> 2019-09-10 15:11:07 DEBUGCURL ssl version:NSS/3.34
[...]
> 2019-09-10 15:11:07 DEBUG== Info:   Trying 192.168.27.108...
> 2019-09-10 15:11:07 DEBUG== Info: Connected to 192.168.27.108 
> (192.168.27.108) port 8443 (#0)
> 2019-09-10 15:11:07 DEBUG== Info: Curl_gskit_connect_nonblocking in

[...]


apparently the curl you're using is compiled with support for dynamic
ssl backends.
Try selecting NSS with https://curl.haxx.se/libcurl/c/curl_global_sslset.html


rainer

Re: curl crashes when internet switched off

2019-09-04 Thread Rainer Canavan via curl-library
On Wed, Sep 4, 2019 at 2:34 PM Salman Ahmed via curl-library
 wrote:
>
> CURL VER libcurl/7.47.0 OpenSSL/1.0.2g zlib/1.2.8 libidn/1.32 librtmp/2.3
> Linux VirtualBox 4.15.0-58-generic #64~16.04.1-Ubuntu SMP Wed Aug 7 14:10:35 
> UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
>
> I guess it's a 3 year old libcurl? I don't remember manually configuring or
> installing it; I just got it from the Ubuntu repos.
> Do I need to build curl myself and have this symbol defined?

since you're probably using ubuntu packages for curl, the buildlog
here 
https://launchpadlibrarian.net/408958147/buildlog_ubuntu-xenial-amd64.curl_7.47.0-1ubuntu2.12_BUILDING.txt.gz
should be close enough to what you're using. If it's something
different, you can pick the relevant version from
https://launchpad.net/ubuntu/+source/curl/+publishinghistory, then the
architecture and finally the buildlog.  That build has

checking for MSG_NOSIGNAL... yes

rainer

Re:

2019-07-31 Thread Rainer Canavan via curl-library
On Tue, Jul 30, 2019 at 8:30 PM Lincoln via curl-library
 wrote:
>
> Hi!
>
> I am trying to compile Curl 7.63.3 on Solaris 10 Sparc and I am getting those 
> errors.  Note I do not have root and so I have to compile and install in area 
> that I have access. Can someone tell me why this issue happens on Solaris 
> servers and is there a way to compile it correctly?
>
> *** Error code 1
> The following command caused the error:
> fail=; \
[...]

The actual error message would have been somewhere above the "Error
code 1".  In addition to that message, the environment variables and
flags you're passing to configure may be needed to help you. On top
of that: are you using GNU make, Solaris make or something else?


rainer


Re: Edit cookies easily?

2019-05-22 Thread Rainer Canavan via curl-library
On Tue, May 21, 2019 at 10:45 PM Scott Ellentuch via curl-library
 wrote:
>
> Hi,
>
> Using PHP and curl_init/curl_setopt/curl_exec/curl_close.
>
> My need is that I contact A.example.com, perform a login transaction there,
> and then use the cookies returned for contacting B.anotherexample.com  to 
> continue the transaction. Cookies are scoped to the FQDN.
>
> I could do the first bit, close up shop, read in / edit / write the cookie 
> file, then do my second bit, but that seems SO crude. ;)
>
> Is there a way to cleanly either tell the program to ignore cookie domains 
> completely or be able to gracefully edit the cookie domain without all the 
> extra overhead?


There's a user contributed note about using (reading / altering) the
curl in-memory cookie jar in the php documentation for curl_setopt():

https://www.php.net/manual/de/function.curl-setopt.php#118967

If that doesn't work for you, you can bypass curl's cookie handling
entirely by reading the incoming cookies from the Set-Cookie response
headers and setting them again as request headers for the next request.

rainer

Re: curl on docker hub

2019-05-02 Thread Rainer Canavan via curl-library
On Thu, May 2, 2019 at 8:09 AM James Fuller via curl-library
 wrote:
>
> update on curl docker - testing things on my fork here
>   https://github.com/xquery/curl-docker
> and automation with travis
>   https://travis-ci.org/xquery/curl-docker
>
> which will also be responsible for pushing docker images to hub.docker.com
>
> seeking advice/thoughts on a few open questions:
[...]
> * what features to build as default is the biggest burning question
>
> orig poster (Olliver Schinagl) had suggested
>
> ./configure \
> --disable-ldap \
> --enable-ipv6 \
> --enable-unix-sockets \
> --prefix=/usr \
> --with-libssh2 \
> --with-nghttp2 \
> --with-pic \
> --with-ssl \
> --without-libidn \
> --without-libidn2 \
>
> which seems specific to his needs.
>
> put another way - what features do we want enabled by default docker
> image ... we could contemplate additional images that have 'all
> features' enabled.

You could consult the most recent curl survey (at
https://daniel.haxx.se/media/curl-user-survey-2018-analysis.pdf) and
set a simple threshold at e.g. 90% of users and pick the features that
would be sufficient for such a fraction of users. You may need the raw
data, since the graph in the survey result document doesn't specify
whether a significant fraction of users use http/https plus another
protocol such as ftp, sftp etc.

Another interesting question would be which SSL backend you're going
to use and how you keep the CA store up to date.

rainer

Re: Limit connection reuse to the first 120 seconds

2019-04-17 Thread Rainer Canavan via curl-library
On Mon, Apr 15, 2019 at 10:39 PM Daniel Stenberg via curl-library
 wrote:
>
> Hi,
>
> I propose we change the connection reuse logic in curl to only ever consider
> connections that have been kept in the connection pool for shorter than 120
> seconds. Connections that have been kept around for longer than this will
> instead get disconnected [1].
>
> The reason is simply that the longer the connection has been idle, the less
> likely it is to a) be useful again and b) to actually work to reuse. Avoiding
> reuse attempts that have a high risk of failing will improve performance and
> behavior.
>
> My PR for this change is here[2]. The max age (120 seconds) in this code is
> currently "hardcoded" but I'm sure there might be use cases for changing it,
> so I'm open for making it possible to set through the API.

In our use of curl, the most annoying problem with non-reusable connections
is a race condition between the server closing the connection and the client
"successfully" sending a non-retryable / unsafe request, such as an HTTP
POST.  This could still happen if the server has a timeout of 120s,
therefore I would suggest a slightly shorter timeout, for example 118s.

I would advocate for a configuration option to set the specific timeout, since
the useful lifetime of an idle connection depends on the timeout configured
on the specific server.

rainer

Re: CURLOPT_ACCEPT_ENCODING and unknown / unsolicited encodings

2018-08-22 Thread Rainer Canavan via curl-library
On Wed, Aug 22, 2018 at 5:07 PM Patrick Monnerat via curl-library
 wrote:

Thanks for your prompt response.

[...]
> https://github.com/curl/curl/commit/dbcced8e32b50c068ac297106f0502ee200a1ebd#diff-ff9fb98500e598660ec2dcd2d8193aac
> > if I'm not mistaken.
> This would have helped us much to have the curl version rather than the
> Ubuntu's. After grepping Ubuntu's repository, it appears that curl's
> version is 7.58.0.

7.58.0 is indeed correct.

> > In curl versions up to at least 7.56.0, setting
> > CURLOPT_ACCEPT_ENCODING to values other than NULL resulted in curl
> > decoding "gzip" and "deflate" and quietly passing any other Encoding,
> > such as "None", which is mistakenly used by one of our customers.
> "None" is recognized from 7.59.0, containing the commit
> https://github.com/curl/curl/commit/f886cbfe9c3055999d8174b2eedc826d0d9a54f1
> that implements it. Maybe Ubuntu should upgrade ;-)

That would indeed solve the immediate problem. I've opened a bug
report for Ubuntu at https://bugs.launchpad.net/ubuntu/+source/curl/+bug/1788435
although we do have another workaround running already.

thanks,

Rainer

CURLOPT_ACCEPT_ENCODING and unknown / unsolicited encodings

2018-08-22 Thread Rainer Canavan via curl-library
Apologies for dredging up an issue that has apparently been in
published curl versions for about a year, but we've only
just encountered it while upgrading a system from Ubuntu 17.10 to
18.04. The relevant commit is
https://github.com/curl/curl/commit/dbcced8e32b50c068ac297106f0502ee200a1ebd#diff-ff9fb98500e598660ec2dcd2d8193aac
if I'm not mistaken.

In curl versions up to at least 7.56.0, setting
CURLOPT_ACCEPT_ENCODING to values other than NULL resulted in curl
decoding "gzip" and "deflate" and quietly passing any other Encoding,
such as "None", which is mistakenly used by one of our customers.
Newer versions of curl return (61) "Unrecognized content encoding
type...". The new behavior is documented in INTERNALS.md (and its
predecessors) since 019c4088cf from April 2003 (with a minor error,
see patch). https://curl.haxx.se/libcurl/c/CURLOPT_ACCEPT_ENCODING.html
on the other hand does not specify how unknown encodings are handled -
I would suggest copying the relevant sentence from INTERNALS.md in
there.

As far as I can see, there are no options or combinations of options
that can be set to restore the old behavior, which, at least for us,
is desirable in that we can handle unknown encodings ourselves, in
most cases by passing the unaltered response to the requestor, or in
the aforementioned case, ignoring "None". Am I overlooking something,
or is there any chance to get the old behavior back in a future
release, e.g. by requiring a specific value for
CURLOPT_ACCEPT_ENCODING, a new option, maybe
CURLOPT_IGNORE_UNKNOWN_CONTENT_ENCODING, or possibly a somewhat more
sane method?

Rainer
diff --git a/docs/INTERNALS.md b/docs/INTERNALS.md
index ab04fec7e..944f26e06 100644
--- a/docs/INTERNALS.md
+++ b/docs/INTERNALS.md
@@ -678,7 +678,7 @@ Content Encoding
  understands how to process responses that use the "deflate", "gzip" and/or
  "br" content encodings, so the only values for [`CURLOPT_ACCEPT_ENCODING`][5]
  that will work (besides "identity," which does nothing) are "deflate",
- "gzip" and "br". If a response is encoded using the "compress" or methods,
+ "gzip" and "br". If a response is encoded using "compress" or any other unsupported methods,
  libcurl will return an error indicating that the response could
  not be decoded.  If  is NULL no Accept-Encoding header is generated.
  If  is a zero-length string, then an Accept-Encoding header

Re: should curl_multi_timeout() account for CURLOPT_LOW_SPEED_TIME?

2017-04-10 Thread Rainer Canavan via curl-library
On Thu, Apr 6, 2017 at 9:04 PM, Daniel Stenberg  wrote:
> On Thu, 6 Apr 2017, Rainer Canavan wrote:
>
>> it looks like curl_multi_timeout() doesn't always respect
>> CURLOPT_LOW_SPEED_TIME.
>
> I'm not entirely sure why that is so, but I'd like to mention that the
> CURLOPT_LOW_SPEED_TIME handling and its timeouts were just now changed, in
> commit 29147ade0456a which landed just hours ago.

I can't find that commit anywhere, is that 2d5711dc11 in the github repository?

> When CURLOPT_LOW_SPEED_TIME is set, libcurl should now use no longer than
> 1000ms timeouts.

I've just merged 2d5711dc11 into 7.53.1, and I don't really see an improvement.
curl_multi_timeout() still returns the same erratic values, but
docs/examples/multi-app.c
protects itself by limiting the select timeout to 1s, so I suppose I'll have
to do the same.

The other thing is that I would have expected CURLOPT_LOW_SPEED_TIME to
be the actual interval over which the speed is measured, i.e. if a
server does not send any data for 2 * CURLOPT_LOW_SPEED_TIME seconds,
the transfer should always fail. The actual implementation uses
progress.current_speed, which is averaged over longer periods, with the
result that bursty transfers are kept going. That's perfectly fine, and
probably more useful in reality than how I thought it would work, but I
would say the documentation could be clearer.


Rainer