Re: [PATCH 0/1] http: add support selecting http version

2018-11-07 Thread Daniel Stenberg

On Wed, 7 Nov 2018, Force.Charlie-I via GitGitGadget wrote:

Normally, git doesn't need to tell curl which HTTP version to select; it works 
fine without HTTP/2. Adding HTTP/2 support is icing on the cake.


Just an FYI:

Starting with libcurl 7.62.0 (released a week ago), it now defaults to the 
"2TLS" setting unless you tell it otherwise. With 2TLS, libcurl will attempt 
to use HTTP/2 for HTTPS URLs.
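
For reference, an application that wants that behavior explicitly (or on an 
older libcurl) can request it itself. A minimal sketch, assuming a libcurl new 
enough to define CURL_HTTP_VERSION_2TLS (7.47.0 or later); the URL is just an 
example:

#include <curl/curl.h>

int main(void)
{
    CURL *curl;

    curl_global_init(CURL_GLOBAL_ALL);
    curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
        /* attempt HTTP/2 for HTTPS URLs, stay on HTTP/1.1 for plain HTTP */
        curl_easy_setopt(curl, CURLOPT_HTTP_VERSION,
                         (long)CURL_HTTP_VERSION_2TLS);
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}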


--

 / daniel.haxx.se


Re: [PATCH v2 1/2] remote-curl: accept all encodings supported by curl

2018-05-22 Thread Daniel Stenberg

On Wed, 23 May 2018, Junio C Hamano wrote:


-> Accept-Encoding: gzip
+> Accept-Encoding: ENCODINGS


Is the ordering of these headers determined by the user of the cURL library 
(i.e. Git), or by whatever the version of cURL we happened to link with happens 
to produce?


The point is whether the order is expected to be stable, or we are better 
off sorting the actual log before comparing.


The order is not guaranteed by libcurl to be fixed, but it is likely to remain 
stable since we too have test cases and compare outputs with expected outputs! 
=)


Going forward, brotli (br) is going to become more commonly present in that 
header.


--

 / daniel.haxx.se


Re: [curl PATCH 2/2] ignore SIGPIPE during curl_multi_cleanup

2018-05-22 Thread Daniel Stenberg

On Tue, 22 May 2018, Suganthi wrote:

We may not be able to upgrade to 7.60.0 anytime soon. Is the fix present in 
7.45, in this sequence of code? Please let me know.


I don't know.

I can't recall any SIGPIPE problems in recent days in libcurl, which is why I 
believe this problem doesn't exist anymore. libcurl 7.45.0 is 2.5 years and 
1500+ bug fixes old after all. My casual searches for a curl problem like this 
- fixed in 7.45.0 or later - also failed.


--

 / daniel.haxx.se


Re: [curl PATCH 2/2] ignore SIGPIPE during curl_multi_cleanup

2018-05-22 Thread Daniel Stenberg

On Tue, 22 May 2018, curlUser wrote:


Again, SIGPIPE is seen with curl version 7.45.0 with the multi interface.
Backtrace shows:


...

Looks like SIGPIPE_IGNORE needs to be added in prune_dead_connections or in 
disconnect_if_dead? Can anyone comment on this.


I'm pretty sure this issue isn't present in any recent libcurl versions, but 
if you can reproduce it with 7.60.0, I'll be very interested.


--

 / daniel.haxx.se


Re: [PATCH 1/2] remote-curl: accept all encoding supported by curl

2018-05-22 Thread Daniel Stenberg

On Mon, 21 May 2018, Jonathan Nieder wrote:


Looking at the code here, this succeeds if enough memory is available.
There is no check whether the given parameter is part of
Curl_all_content_encodings().


By "this" are you referring to the preimage or the postimage?  Are you 
suggesting a change in git or in libcurl?


Curl_all_content_encodings() is an internal function in libcurl, so I'm 
assuming the latter.


Ack, that certainly isn't the most wonderful API for selecting a compression 
method. In reality, almost everyone sticks to passing a "" to that option to 
let libcurl pick and ask for the compression algos it knows, since both gzip 
and brotli are only present conditionally depending on build options.


I would agree that the libcurl setopt call should probably be made to fail if 
asked to use a compression method not built-in/supported. Then an application 
could in fact try different algos in order until one works or ask to disable 
compression completely.
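
As a sketch of that idea (assuming a hypothetical future libcurl in which the 
setopt call actually rejects encodings it wasn't built with - today it does 
not), an application could do something like:

#include <curl/curl.h>

/* try progressively simpler Accept-Encoding choices; 'curl' is an
 * already-initialized easy handle */
static void select_encoding(CURL *curl)
{
    static const char *candidates[] = { "br, gzip", "gzip", "" };
    size_t i;

    for (i = 0; i < sizeof(candidates) / sizeof(candidates[0]); i++) {
        /* "" lets libcurl advertise every algorithm it was built with */
        if (curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING,
                             candidates[i]) == CURLE_OK)
            return;
    }
    /* NULL disables automatic decompression entirely */
    curl_easy_setopt(curl, CURLOPT_ACCEPT_ENCODING, NULL);
}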


In the generic HTTP case, it usually makes sense to ask for more than one 
algorithm though, since this is asking the server for a compressed version and 
typically an HTTP client doesn't know which compression methods the server 
offers. Not sure this is actually true to the same extent for git.


--

 / daniel.haxx.se


Re: [PATCH v3] Allow use of TLS 1.3

2018-03-26 Thread Daniel Stenberg

On Mon, 26 Mar 2018, Johannes Schindelin wrote:

Can we *please* also add that OpenSSL 1.1.* is required (or that cURL is 
built with NSS or BoringSSL as the TLS backend)?


We might consider adding a way to extract that info from curl to make that 
work really well for you. There are now six TLS libraries that support TLS 1.3 
and it might be hard for git to figure out the exact situation for each 
library and keep track of these moving targets...


--

 / daniel.haxx.se


Re: [PATCH v2] Allow use of TLS 1.3

2018-03-23 Thread Daniel Stenberg

On Fri, 23 Mar 2018, Loganaden Velvindron wrote:


+#ifdef CURL_SSLVERSION_TLSv1_3
+   { "tlsv1.3", CURL_SSLVERSION_TLSv1_3 }
+#endif


Unfortunately, CURL_SSLVERSION_TLSv1_3 is an enum so this construct won't 
work.


Also, let me just point out that 7.52.0 is 0x073400 in hex and not the one 
used for the first version of this patch.
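
A build-time guard that does work is to test the libcurl version number rather 
than the enum value. A rough sketch (the table name and the other entry are 
illustrative, not the exact patch):

#include <curl/curl.h>

static const struct {
    const char *name;
    long ssl_version;
} sslversions[] = {
    { "tlsv1", CURL_SSLVERSION_TLSv1 },
#if LIBCURL_VERSION_NUM >= 0x073400   /* 0x073400 == 7.52.0, which added TLS 1.3 */
    { "tlsv1.3", CURL_SSLVERSION_TLSv1_3 },
#endif
};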


--

 / daniel.haxx.se


Re: imap-send with gmail: curl_easy_perform() failed: URL using bad/illegal format or missing URL

2017-11-30 Thread Daniel Stenberg

On Thu, 30 Nov 2017, Nicolas Morey-Chaisemartin wrote:

It would make sense to have a way to ask libcurl to URI encode for us. I'm 
guessing there's already the code for that somewhere in curl and we would be 
wise to use it. But to work with older versions we'll have to do it 
ourselves anyway.


libcurl only offers curl_easy_escape() which URL encodes a string.

But that's not really usable on an entire existing URL or path, since it would 
then also encode the slashes etc. You want to encode the relevant pieces and 
then put them together appropriately into the final URL...
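
A sketch of that piece-wise approach for a folder name like "[Gmail]/Drafts", 
assuming an already-initialized easy handle (the host name is just an example):

#include <curl/curl.h>
#include <stdio.h>

/* encode "[Gmail]" and "Drafts" separately, then join them so the
 * path separator itself is not percent-encoded */
static void set_drafts_url(CURL *curl)
{
    char *seg1 = curl_easy_escape(curl, "[Gmail]", 0);
    char *seg2 = curl_easy_escape(curl, "Drafts", 0);
    char url[256];

    snprintf(url, sizeof(url), "imaps://imap.gmail.com/%s/%s", seg1, seg2);
    curl_easy_setopt(curl, CURLOPT_URL, url);   /* string is copied since 7.17.0 */

    curl_free(seg1);
    curl_free(seg2);
}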


--

 / daniel.haxx.se


Re: imap-send with gmail: curl_easy_perform() failed: URL using bad/illegal format or missing URL

2017-11-30 Thread Daniel Stenberg

On Thu, 30 Nov 2017, Nicolas Morey-Chaisemartin wrote:


This is due to the weird "[Gmail]" prefix in the folder.
I tried manually replacing it with:
    folder = %5BGmail%5D/Drafts
in .git/config and it works.

curl is doing some fancy handling with brackets and braces. It makes sense 
for multiple FTP downloads like ftp://ftp.numericals.com/file[1-100].txt, but 
not in our case. The curl command line has a --globoff argument to disable 
this "regexp" support and it seems to fix the gmail case. However, I couldn't 
find a way to change this behavior through the API...


That's just a feature of the command line tool; "globbing" isn't a function 
provided by the library. libcurl actually "just" expects a plain old URL.


But at the risk of falling through the cracks into the rathole that is "what 
is a URL" (I've blogged about the topic several times in the past and I will 
surely do it again in the future):


A "legal" URL (as per RFC 3986) does not contain brackets, such symbols should 
be used URL encoded: %5B and %5D.


That said: I don't know exactly why brackets cause a problem in this case. It 
could still be worth digging into to see whether libcurl could deal with them 
better here...


--

 / daniel.haxx.se

Re: [RFC 0/3] imap-send curl tunnelling support

2017-08-24 Thread Daniel Stenberg

On Thu, 24 Aug 2017, Jeff King wrote:

Oh good. While I have you here, have you given any thought to a curl handle 
that has two half-duplex file descriptors, rather than a single full-duplex 
socket? That would let us tunnel over pipes rather than worrying about the 
portability of socketpair().


I suspect it would be quite complicated, because I imagine that lots of 
internal bits of curl assume there's a single descriptor.


Yeah, it would take quite some surgery deep down in the heart of curl to 
implement something like that. I wouldn't call it impossible, but it would 
take a certain level of determination and amount of time. I presume the 
descriptor-pair would be passed in via an API so it wouldn't affect the 
connect phase. We also have decent test coverage, making an overhaul like this 
a less scary thought - if the existing tests say OK we can be fairly certain 
there aren't any major regressions...


(I may also have forgotten some tiny detail for the moment that makes it very 
hard.)



 / daniel.haxx.se (who landed the IMAP PREAUTH fix in curl)


Don't you land most of the fixes in curl? :)


I do, but I don't expect readers of the git list to know that!

--

 / daniel.haxx.se


Re: [RFC 0/3] imap-send curl tunnelling support

2017-08-24 Thread Daniel Stenberg

On Thu, 24 Aug 2017, Jeff King wrote:

I opened a bug upstream and they already fixed this. 
https://github.com/curl/curl/pull/1820


Cool! That's much faster than I had expected. :)


Your wish is our command! =)

--

 / daniel.haxx.se (who landed the IMAP PREAUTH fix in curl)


Re: [PATCH v2] travis-ci: build and test Git on Windows

2017-03-24 Thread Daniel Stenberg

On Fri, 24 Mar 2017, Lars Schneider wrote:


2. run your own buildbot and submit data using the regular github hook and
  have buildbot submit the results back (it has a plugin that can do that).
  We do solaris-builds in the curl project using that method (thanks to
  opencsw.org) and some additional windows-builds thanks to private
  individuals.


We could do that! However, the idea was to have the entire build status for 
all platforms in one place.


As the status is (or at least can be) passed back to github, all CI jobs are 
shown at the same place: below each commit and each pull-request...


--

 / daniel.haxx.se


Re: [PATCH v2] travis-ci: build and test Git on Windows

2017-03-24 Thread Daniel Stenberg

On Fri, 24 Mar 2017, Lars Schneider wrote:

Most Git developers work on Linux and they have no way to know if their 
changes would break the Git for Windows build. Let's fix that by adding a 
job to TravisCI that builds and tests Git on Windows. Unfortunately, 
TravisCI does not support Windows.


Forgive me for bursting in and possibly repeating what you've already 
discussed. I just read about the limitations for doing windows builds via 
travis so I thought I'd at least let you know that you can avoid those 
limitations without too much work:


Two alternative approaches would be:

1. use appveyor.com, as that is a Travis-like service for Windows. We do our
   windows-builds in the curl project using that.

2. run your own buildbot and submit data using the regular github hook and
   have buildbot submit the results back (it has a plugin that can do that).
   We do solaris-builds in the curl project using that method (thanks to
   opencsw.org) and some additional windows-builds thanks to private
   individuals.

--

 / daniel.haxx.se


Re: Default authentication over https?

2016-04-14 Thread Daniel Stenberg

On Wed, 13 Apr 2016, Jeff King wrote:


However, I don't think even that would give you what you want. Because I
think that even if we provide a credential, curl will make an initial
request (presumably to find out which auth type it should use, but that
is just a guess). I don't know if there is a way to convince curl to
stick the credential in the first request


curl supports this, but then you must do exactly that: tell libcurl to use 
that single auth method only. It will of course fail if you select the wrong 
method etc.


The unauthenticated first request is there both to probe for which methods the 
server wants and to handle the case where users provide credentials without 
the server actually ending up asking for them...
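
In libcurl terms, that would look roughly like this (a sketch; Basic is picked 
as the single method here, and the URL and credentials are made up):

#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();

    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL,
                         "https://example.com/repo.git/info/refs");
        /* restrict libcurl to exactly one method; with Basic only, the
         * Authorization header goes out on the very first request */
        curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_BASIC);
        curl_easy_setopt(curl, CURLOPT_USERPWD, "user:secret");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    return 0;
}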


--

 / daniel.haxx.se


git master describe segfault

2016-03-21 Thread Daniel Stenberg

Hello good peeps!

I just ran head-first into a segfault that is fully reproducible for me, but 
I'm not at all fluent in these internals so I'm not the right person to 
offer a fix. Let me instead offer you some fine details:


0. I'm on a Linux box: a reasonably updated Debian unstable.

1. I'm up to date with the latest git master branch of gecko-dev: 
https://github.com/mozilla/gecko-dev (counting a little over 467K commits)


2. I built the current git off the master branch (v2.8.0-rc3-12-g047057b)

3. In the gecko-dev dir, I run 'git describe --contains f495d0cc2'

The outcome is what looks like a fine stack smash due to very very extensive 
recursion:


$ gdb --args ../git/git describe --contains f495d0cc2
(gdb) run
Program received signal SIGSEGV, Segmentation fault.
0x77bccf73 in ?? () from /lib/x86_64-linux-gnu/libz.so.1
#0  0x77bccf73 in ?? () from /lib/x86_64-linux-gnu/libz.so.1
#1  0x77bcc233 in inflate () from /lib/x86_64-linux-gnu/libz.so.1
#2  0x005935d6 in git_inflate (strm=0x7f7ff1d0, flush=4) at 
zlib.c:118
#3  0x0055d2fd in unpack_compressed_entry (p=0x886b00, 
w_curs=0x7f7ff8e8, curpos=83902121, size=242) at sha1_file.c:2087
#4  0x0055dcbb in unpack_entry (p=0x886b00, obj_offset=83902119, 
final_type=0x7f7ffb50, final_size=0x7f7ffb48) at sha1_file.c:2341
#5  0x0055d533 in cache_or_unpack_entry (p=0x886b00, 
base_offset=83902119, base_size=0x7f7ffb48, type=0x7f7ffb50, 
keep_cache=1) at sha1_file.c:2165
#6  0x0055ed88 in read_packed_sha1 (sha1=0x170df24 
"w\367\026,ũD\362\b4\001{\216\b[\255\261", , 
type=0x7f7ffb50, size=0x7f7ffb48) at sha1_file.c:2789
#7  0x0055f01b in read_object (sha1=0x170df24 
"w\367\026,ũD\362\b4\001{\216\b[\255\261", , 
type=0x7f7ffb50, size=0x7f7ffb48) at sha1_file.c:2837
#8  0x0055f0f3 in read_sha1_file_extended (sha1=0x170df24 
"w\367\026,ũD\362\b4\001{\216\b[\255\261", , 
type=0x7f7ffb50, size=0x7f7ffb48, flag=1) at sha1_file.c:2865
#9  0x004b1e2b in read_sha1_file (sha1=0x170df24 
"w\367\026,ũD\362\b4\001{\216\b[\255\261", , 
type=0x7f7ffb50, size=0x7f7ffb48) at cache.h:1008
#10 0x004b2fc9 in parse_commit_gently (item=0x170df20, 
quiet_on_missing=0) at commit.c:383

#11 0x00464627 in parse_commit (item=0x170df20) at ./commit.h:65
#12 0x00464662 in name_rev (commit=0x170df20, tip_name=0x8e9170 
"B2G_1_0_0_20130115070201", generation=87254, distance=87254, deref=0) at 
builtin/name-rev.c:30
#13 0x004647de in name_rev (commit=0x170dee0, tip_name=0x8e9170 
"B2G_1_0_0_20130115070201", generation=87253, distance=87253, deref=0) at 
builtin/name-rev.c:72
#14 0x004647de in name_rev (commit=0x170dea0, tip_name=0x8e9170 
"B2G_1_0_0_20130115070201", generation=87252, distance=87252, deref=0) at 
builtin/name-rev.c:72
#15 0x004647de in name_rev (commit=0x170de60, tip_name=0x8e9170 
"B2G_1_0_0_20130115070201", generation=87251, distance=87251, deref=0) at 
builtin/name-rev.c:72
#16 0x004647de in name_rev (commit=0x170de20, tip_name=0x8e9170 
"B2G_1_0_0_20130115070201", generation=87250, distance=87250, deref=0) at 
builtin/name-rev.c:72
#17 0x004647de in name_rev (commit=0x170dde0, tip_name=0x8e9170 
"B2G_1_0_0_20130115070201", generation=87249, distance=87249, deref=0) at 
builtin/name-rev.c:72


...

#35070 0x004647de in name_rev (commit=0x1172250, tip_name=0x8e9170 
"B2G_1_0_0_20130115070201", generation=52196, distance=52196, deref=0) at 
builtin/name-rev.c:72
#35071 0x004647de in name_rev (commit=0x1172210, tip_name=0x8e9170 
"B2G_1_0_0_20130115070201", generation=52195, distance=52195, deref=0) at 
builtin/name-rev.c:72
#35072 0x004647de in name_rev (commit=0x11721d0, tip_name=0x8e9170 
"B2G_1_0_0_20130115070201", generation=52194, distance=52194, deref=0) at 
builtin/name-rev.c:72


...

and so on. I actually didn't bother to find the end of this, but logically I'd 
guess there were about 52194 more stack frames to go!


(gdb) fr 2
#2  0x005935d6 in git_inflate (strm=0x7f7ff1d0, flush=4) at 
zlib.c:118

118             status = inflate(&strm->z,
(gdb) list
113             int status;
114
115             for (;;) {
116                     zlib_pre_call(strm);
117                     /* Never say Z_FINISH unless we are feeding everything */
118                     status = inflate(&strm->z,
119                                      (strm->z.avail_in != strm->avail_in)
120                                      ? 0 : flush);
121                     if (status == Z_MEM_ERROR)
122                             die("inflate: out of memory");

... and the 'strm' struct looks fine to a layman like me:

(gdb) p *strm
$2 = {z = {
next_in = 0x7fffb234bea9 "x\234\245\216\273\n\003!\020E{\277\302 

Re: [PATCH] Implement https public key pinning

2016-02-11 Thread Daniel Stenberg

On Thu, 11 Feb 2016, Christoph Egger wrote:


+#if LIBCURL_VERSION_NUM >= 0x074400


That should probably be 0x072c00 ...

--

 / daniel.haxx.se


Re: [PATCH] remote-curl: don't fall back to Basic auth if we haven't tried Negotiate

2016-02-06 Thread Daniel Stenberg

On Fri, 5 Feb 2016, Junio C Hamano wrote:

OK, as Brian said, that use case would need to be in the log message, at 
least.  I am curious, though, whether you can give just a random string as the 
username, or whether it must match what the underlying authentication 
mechanism uses.


Brian, I can see how this would work in that use case, but I haven't 
convinced myself that the change would not affect other existing use cases 
that are supported -- can you think of any that would negatively affect the 
user experience?


Even more, there is no other way to let libcurl use GSS-Negotiate 
without a username in the URL.


Asking a libcurl expert about that might not be a bad idea; Cc'ed Daniel 
Stenberg.


It is correct that libcurl needs a username to trigger the use of HTTP 
authentication - any HTTP authentication - due to how we once designed the 
internals for this. But when using GSS-Negotiate the actually provided user 
name isn't used by libcurl for anything, so it could be a fixed string or 
random junk; it doesn't matter as long as a name is provided.
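
So on the application side, something along these lines is enough (a sketch; 
the ":" user/password string is an arbitrary dummy, only there to switch the 
authentication machinery on):

#include <curl/curl.h>

/* 'curl' is an already-initialized easy handle */
static void enable_negotiate(CURL *curl)
{
    /* the credential content is irrelevant for GSS-Negotiate, but a
     * user name must be set for libcurl to engage HTTP auth at all */
    curl_easy_setopt(curl, CURLOPT_HTTPAUTH, (long)CURLAUTH_GSSNEGOTIATE);
    curl_easy_setopt(curl, CURLOPT_USERPWD, ":");
}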


--

 / daniel.haxx.se


Re: Building Git with HTTPS support: avoiding libcurl?

2015-12-23 Thread Daniel Stenberg

On Tue, 22 Dec 2015, Paul Smith wrote:

I grok that Git doesn't want to re-invent the wheel and that libcurl is 
convenient.  I just wonder if anyone knows of another wheel, that doesn't 
come attached to an entire tractor-trailer, that could be used instead :).


But if you would consider another lib, then you could just rebuild your own 
libcurl from source instead, entirely without any dependencies on other libs! 
That would be similar to finding another lib with fewer dependencies. (As 
already mentioned, you'd still need crypto and TLS support, no doubt.)


That huge dependency collection is there largely because your distro decided 
that having a libcurl with all that support is preferable. libcurl itself 
offers lots of customizability at build time, so you can strip out most of 
that if you want to.


But why do the distros build and provide a libcurl that can do all this?

I think you can look at this from a slightly higher altitude. By re-using a 
very widely used, well developed and properly documented library (yeah, I 
claim it is but you don't need to take my word for it) that is available 
everywhere - git benefits. By having many projects use the same lib, even if 
no two projects use the exact same feature set, we get more reliable software 
in the entire ecosystem - with less work.


I would guess that switching out libcurl in git would be a not insignificant 
amount of work, as no libcurl alternative I'm aware of even comes close to its 
API.


--

 / daniel.haxx.se


Re: [RFC] Case insensitive URL rewrite

2015-12-11 Thread Daniel Stenberg

On Fri, 11 Dec 2015, Lars Schneider wrote:

the "url..insteadOf" git config is case sensitive. I understand 
that this makes sense on case sensitive file systems. However, URLs are 
mostly case insensitive:


Minor detail here perhaps, but...

I would object to URLs being "mostly case insensitive", even if github 
apparently seems to work that way. The path part of a URL is rarely case 
insensitive since it tends to map to a *nix file system, while the host name 
part is case insensitive, as section 3.2.2 of RFC 3986 says.


--

 / daniel.haxx.se


Re: [PATCH v2] http.c: use CURLOPT_RANGE for range requests

2015-11-01 Thread Daniel Stenberg

On Fri, 30 Oct 2015, Jeff King wrote:

The goal makes sense. Why weren't we using CURLOPT_RANGE before? Did it not 
exist (or otherwise have limitations) in 2005, and if so, when did it become 
usable? Do we need to protect this with an #ifdef for the curl version?


CURLOPT_RANGE existed already in the first libcurl release: version 7.1, 
released in August 2000.
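
For completeness, the option takes a plain string with the byte range. A 
sketch (assuming an already-initialized easy handle):

#include <curl/curl.h>

/* ask only for the part of the resource from byte offset 1000 onwards,
 * e.g. to resume a previously interrupted download */
static void request_tail(CURL *curl)
{
    curl_easy_setopt(curl, CURLOPT_RANGE, "1000-");
}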


--

 / daniel.haxx.se


Re: [PATCH v2] http.c: use CURLOPT_RANGE for range requests

2015-11-01 Thread Daniel Stenberg

On Sun, 1 Nov 2015, Jeff King wrote:

While I have your attention, Daniel, am I correct in assuming that 
performing a second unrelated request with the same CURL object will need an 
explicit:


 curl_easy_setopt(curl, CURLOPT_RANGE, NULL);

to avoid using the range twice?


Correct, as the options stick until modified.

--

 / daniel.haxx.se


Re: http.c (curl_easy_setopt and CURLAUTH_ANY)

2015-08-28 Thread Daniel Stenberg

On Fri, 28 Aug 2015, Stephen Kazakoff wrote:


From:
curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_ANY);

To:
curl_easy_setopt(result, CURLOPT_PROXYAUTH, CURLAUTH_BASIC | CURLAUTH_NTLM);

I did however find the CURL documentation 
(https://secure.php.net/manual/en/function.curl-setopt.php) slightly 
conflicting. On one hand, CURLAUTH_ANY is effectively the same as passing 
CURLAUTH_BASIC | CURLAUTH_NTLM. But the documentation for 
CURLOPT_PROXYAUTH says that only CURLAUTH_BASIC and CURLAUTH_NTLM are 
currently supported. By that, I'm assuming CURLAUTH_ANY is not supported.


That would rather indicate a problem somewhere else.

CURLAUTH_ANY is just a convenience define that sets a bunch of bits at once, 
and libcurl will discard bits you'd set for auth methods your libcurl hasn't 
been built to deal with anyway. Thus, the above two lines should result in 
(almost) exactly the same behavior from libcurl's point of view.


The fact that they actually make a difference is probably because ANY then 
enables a third authentication method that perhaps your server doesn't like? 
Or is it a libcurl bug?


Hard to tell without more info, including libcurl version. But no, the above 
suggested change doesn't really make much sense for the general population.


--

 / daniel.haxx.se


Re: error: git-remote-https died of signal 13

2014-04-24 Thread Daniel Stenberg

On Thu, 24 Apr 2014, Jeff King wrote:

Thanks, that's very helpful. I wasn't able to reproduce your problem 
locally, but I suspect the curl patch below may fix it:


...

Daniel, I think the similar fix to curl_multi_cleanup in commit a900d45 
missed this code path, and we need something like the above patch. I know 
you were trying to keep the SIGPIPE mess at the entrance-points to the 
library, and this works against that. But we need a SessionHandle to pass to 
sigpipe_ignore to look at its no_signal flag, and of course in the case of 
multi we may have several such handles. If there's a similar flag we can 
check on the multi handle, we could just cover all of curl_multi_cleanup 
with a single ignore/reset pair.


Thanks a lot for this! I've taken this issue over to the libcurl mailing list 
and we'll work out the best way to address it over there... At least we know 
the patch works as intended.


--

 / daniel.haxx.se


Re: [BUG?] git http connection reuse

2014-02-18 Thread Daniel Stenberg

On Tue, 18 Feb 2014, Jeff King wrote:

I think there is still an unrelated issue with curl_multi preventing 
connection reuse, but I'm not sure from what you say below...


I'm not clear whether you mean by this that it is _expected_ in my test 
program for curl not to reuse the connection. Or that curl may simply have 
to do a little more work, and it is still a bug that the connection is not 
reused.


Argh, sorry. I thought we were still referring to the previous problem. I can 
indeed repeat the problem you talk about with your test code. Thanks! I'll get 
back to you.


--

 / daniel.haxx.se


Re: [BUG?] git http connection reuse

2014-02-18 Thread Daniel Stenberg

On Tue, 18 Feb 2014, Jeff King wrote:

I'm not clear whether you mean by this that it is _expected_ in my test 
program for curl not to reuse the connection. Or that curl may simply have 
to do a little more work, and it is still a bug that the connection is not 
reused.


Okay, I checked this closer now and this is the full explanation of what 
happens. It seems to work as intended:


It's all about where the connection cache is held by libcurl. When you create 
a multi handle, it will create a connection cache that will automatically be 
shared by all easy handles that are added to it.


If you create an easy handle and make a curl_easy_perform() on that, it will 
create its own connection cache and keep it associated with this easy handle.


When first using an easy handle within a multi handle it will use the shared 
connection cache in there as long as it is in that multi handle family, but as 
soon as you remove it from there it will be detached from that connection 
cache.


Then, when doing a fresh request with easy_perform using the handle that was 
detached from the multi handle, it will create and use its own private cache 
as it can't re-use the previous connection that is cached within the multi 
handle.


We can certainly teach git to use the multi interface, even when doing a 
single blocking request.


For connection re-use purposes, that may make a lot of sense.
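
A sketch of what driving a single blocking request through the multi interface 
could look like (assuming libcurl 7.28.0 or later for curl_multi_wait; the 
function name here is made up):

#include <curl/curl.h>

/* run one easy handle to completion via the multi interface, so the
 * connection it used stays in the multi handle's shared connection cache */
static void perform_via_multi(CURLM *multi, CURL *easy)
{
    int running = 1;

    curl_multi_add_handle(multi, easy);
    while (running) {
        curl_multi_perform(multi, &running);
        if (running)
            curl_multi_wait(multi, NULL, 0, 1000, NULL);
    }
    curl_multi_remove_handle(multi, easy);
}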

--

 / daniel.haxx.se


Re: [BUG?] git http connection reuse

2014-02-17 Thread Daniel Stenberg

On Mon, 17 Feb 2014, Jeff King wrote:

Right; I'd expect multiple connections for parallel requests, but in this 
case we are completing the first and removing the handle before starting the 
second. Digging further, I was able to reproduce the behavior with a simple 
program:


Yeah, given your description I had no problem repeating it either. Turns out 
we had no decent test case that checked for this, so in our eagerness to fix a 
security problem involving over-reuse we broke this simpler reuse case. Two 
steps forward, one step backward... :-/



The manpage for curl_multi_add_handle does say:

 When an easy handle has been added to a multi stack, you can not
 and you must not use curl_easy_perform(3) on that handle!

Does that apply to the handle after it has finished its transaction and been 
removed from the multi object (in which case git is Doing It Wrong)?


No it doesn't. The man page should probably be clarified to express that 
slightly better. It just means that _while_ a handle is added to a multi 
handle it cannot be used with curl_easy_perform().


So yes, you can indeed remove it from the multi handle and then do 
curl_easy_perform(). It works just fine!


Several internal caches are kept in the multi handle when that's used though, 
so taking the easy handle out after having used it with the multi interface 
and then using it with the easy interface may cause libcurl to do a little 
more work than if you were able to re-add it to the same multi handle and do 
the operation with that...


--

 / daniel.haxx.se


Re: [BUG?] git http connection reuse

2014-02-16 Thread Daniel Stenberg

On Sat, 15 Feb 2014, Kyle J. McKay wrote:

If pipelining is off (the default) and total connections is not 1 it sounds 
to me from the description above that the requests will be executed on 
separate connections until the maximum number of connections is in use and 
then there might be some reuse.


Not exactly. When about to do a request, libcurl will always try to find an 
existing idle but open connection in its connection pool to re-use. With the 
multi interface you can of course easily start N requests at once to the same 
host and then they'll only re-use connections to the extent there are 
connections to pick, otherwise it'll create new connections.



Daniel Stenberg (7 Jan 2014)
- ConnectionExists: fix NTLM check for new connection


Looks like you're just lucky as that above change first appears in 7.35.0. 
But it seems there are some patches for older versions so they might be 
affected as well [2].


Right, the check is there to make sure that an NTLM-auth connection with 
different credentials isn't re-used. NTLM with its connection-oriented 
authentication breaks the traditional HTTP paradigms, and before this change 
there was a risk that libcurl would wrongly re-use an NTLM connection that was 
done with different credentials!


I suspect we introduced a regression here with that fix. I'll dig into this.

--

 / daniel.haxx.se


Re: [BUG?] git http connection reuse

2014-02-16 Thread Daniel Stenberg

On Sun, 16 Feb 2014, Daniel Stenberg wrote:

Right, the check is there to make sure that an NTLM-auth connection with 
different credentials isn't re-used. NTLM with its connection-oriented 
authentication breaks the traditional HTTP paradigms, and before this change 
there was a risk that libcurl would wrongly re-use an NTLM connection that 
was done with different credentials!


I suspect we introduced a regression here with that fix. I'll dig into this.


Thanks for pointing this out!

I could repeat this problem with a curl test case so I wrote up two, and then 
I made a fix to curl:


  https://github.com/bagder/curl/commit/d765099813f5

--

 / daniel.haxx.se


Re: [curl PATCH 2/2] ignore SIGPIPE during curl_multi_cleanup

2013-11-27 Thread Daniel Stenberg

On Mon, 25 Nov 2013, Jeff King wrote:

This is an extension to the fix in 7d80ed64e43515. We may call 
Curl_disconnect() while cleaning up the multi handle, which could lead to 
openssl sending packets, which could get a SIGPIPE.


Thanks a lot. I'll merge these ones in a second and they will be included in 
the coming 7.34.0 release (due to ship in mid December).


--

 / daniel.haxx.se


Re: error: git-remote-https died of signal 13

2013-11-24 Thread Daniel Stenberg

On Sun, 24 Nov 2013, Jeff King wrote:

Hmm. The fix in curl's 7d80ed64e435155 seems to involve strategically placed 
calls to ignore SIGPIPE. I wonder if there is another spot that needs 
similar treatment. It looks like curl_easy_cleanup is covered, though, and 
that's where I would expect problem to come.


Sounds like a plausible reason.

It would be interesting to see a backtrace from remote-curl when we get the 
SIGPIPE. Doing so would be slightly tricky; instrumenting with the patch 
below may be enough.


Another thought is that the curl fix seems to only kick in when built with 
openssl support.  I'm not sure I understand how ubuntu's packaging of curl 
uses gnutls versus openssl for the shared library. That may be related.


I'm only aware of a SIGPIPE problem with openssl that can make it write to the 
socket in some situations when the remote end is no longer there - something 
we can't prevent it from doing.


I *believe* the problem doesn't exist in a similar way when built to use 
gnutls, but I may of course be wrong.


--

 / daniel.haxx.se


Re: error: git-remote-https died of signal 13

2013-11-24 Thread Daniel Stenberg

On Mon, 25 Nov 2013, Jeff King wrote:

Thanks. I'm having trouble reproducing the SIGPIPE locally, but I am able to 
see via strace the write we make in curl_multi_cleanup. The call stack is:


  curl_multi_cleanup
    - close_all_connections
      - Curl_disconnect
        - Curl_ossl_close
  ...

Daniel, does the call to Curl_disconnect need to be wrapped with 
sigpipe_ignore/reset, similar to 7d80ed64e435155?


Yes. It very much looks like that. The SSL closing is what was the problem I 
had to address.


But I then decided that if a 3rd party library has one way to generate SIGPIPE 
it may very well have another in a separate spot, so I decided to do the wrap 
at the top level, immediately in the entry point when getting called by the 
application. Following that reasoning, the SIGPIPE ignore/restore should 
rather be made in curl_multi_cleanup.
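
The ignore/restore pattern being discussed is essentially this (a simplified 
sketch of the general technique, not curl's actual internal code):

#include <signal.h>
#include <string.h>

static struct sigaction old_pipe_act;

/* ignore SIGPIPE around code that may write to an already-dead socket */
static void sigpipe_ignore(void)
{
    struct sigaction act;

    memset(&act, 0, sizeof(act));
    act.sa_handler = SIG_IGN;
    sigaction(SIGPIPE, &act, &old_pipe_act);
}

/* put back whatever handler the application had installed */
static void sigpipe_restore(void)
{
    sigaction(SIGPIPE, &old_pipe_act, NULL);
}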


--

 / daniel.haxx.se


Re: [PATCH] http: enable keepalive on TCP sockets

2013-10-13 Thread Daniel Stenberg

On Sat, 12 Oct 2013, Eric Wong wrote:

This is a follow up to commit e47a8583a20256851e7fc882233e3bd5bf33dc6e 
(enable SO_KEEPALIVE for connected TCP sockets).


Just keep in mind that TCP keep-alive is enabled in awkwardly many different 
ways on different systems and this patch only supports one of them. Feel free 
to take inspiration from libcurl's source code for doing this. See:


  https://github.com/bagder/curl/blob/master/lib/connect.c#L108
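
For a rough idea of the shape of it (a sketch of the common BSD-socket dance, 
not libcurl's actual code; the idle/interval values are arbitrary and the 
option names differ per platform, e.g. macOS uses TCP_KEEPALIVE instead of 
TCP_KEEPIDLE):

#include <sys/socket.h>
#include <netinet/in.h>
#include <netinet/tcp.h>

/* enable keep-alive on 'fd' and, where the options exist, tune the timers */
static void set_keepalive(int fd)
{
    int on = 1;

    setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on));
#ifdef TCP_KEEPIDLE
    {
        int idle = 60;      /* seconds of idle time before probing starts */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle, sizeof(idle));
    }
#endif
#ifdef TCP_KEEPINTVL
    {
        int intvl = 60;     /* seconds between individual probes */
        setsockopt(fd, IPPROTO_TCP, TCP_KEEPINTVL, &intvl, sizeof(intvl));
    }
#endif
}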

--

 / daniel.haxx.se


Re: GSS-Negotiate authentication requires that all data fit into postbuffer

2013-10-06 Thread Daniel Stenberg

On Sun, 6 Oct 2013, Ilari Liusvaara wrote:


GSS-Negotiate authentication always requires a rewind with CURL.


Isn't 'Expect: 100-Continue' meant for stuff like this (not that it is 
always supported properly)?


Yes it is, and libcurl uses 100-Continue by default for that purpose. But the 
harsh reality is that lots of (most?) servers just don't care and aren't set up 
to respond properly, and instead we end up having to send data multiple times 
in vain.
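
(For reference, an application that would rather skip the 100-continue dance 
entirely can strip the header itself - a sketch, assuming an already-initialized 
easy handle:)

#include <curl/curl.h>

/* sending a header with an empty value removes libcurl's automatic
 * "Expect: 100-continue"; free the list after the transfer */
static struct curl_slist *disable_expect(CURL *curl)
{
    struct curl_slist *headers = curl_slist_append(NULL, "Expect:");

    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
    return headers;   /* caller: curl_slist_free_all(headers); */
}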


--

 / daniel.haxx.se


Re: GSS-Negotiate authentication requires that all data fit into postbuffer

2013-10-06 Thread Daniel Stenberg

On Sun, 6 Oct 2013, brian m. carlson wrote:

If there's a way to make Apache with mod_auth_kerb do that with curl, then 
it doesn't require a change to git, and I'm happy to make it on my end. 
But using the curl command line client, I don't see an Expect: 100-continue 
anywhere during the connection using Debian's curl 7.32.0-1.  Do I need to 
send a certain amount of data to see that behavior?


Correct, curl will enable Expect: 100-continue if the post size is larger than 
1024 bytes.


--

 / daniel.haxx.se


Re: [PATCH] http.c: don't rewrite the user:passwd string multiple times

2013-06-19 Thread Daniel Stenberg

On Tue, 18 Jun 2013, Jeff King wrote:

But, I don't know if there is any multi-processing happening within the 
curl library.


I don't think curl does any threading; when we are not inside 
curl_*_perform, there is no curl code running at all (Daniel can correct me 
if I'm wrong on that).


Correct, that's true. The default setup of libcurl never uses any threading at 
all, everything is done using non-blocking calls and state-machines.


There's but a minor exception, so let me describe that case just to be 
perfectly clear:


When you've built libcurl with the threaded resolver backend, libcurl fires 
up a new thread to resolve host names during the name resolving phase of 
a transfer, and that thread can then actually continue to run when 
curl_multi_perform() returns.


That's however very isolated, strictly only for name resolving, and there 
should be no way for an application to mess that up. Nothing of what you've 
discussed in this thread would affect or harm that thread. The biggest impact 
it tends to have on applications (that aren't following the API properly or 
assume a little too much) is that it slightly changes the nature of which file 
descriptors to wait for during the name resolve phase.


Some Linux distros ship their default libcurl builds using the threaded 
resolver.


--

 / daniel.haxx.se


Re: [PATCH] http.c: don't rewrite the user:passwd string multiple times

2013-06-18 Thread Daniel Stenberg

On Tue, 18 Jun 2013, Jeff King wrote:

TL;DR: I'm just confirming what's said here! =)


My understanding of curl's pointer requirements are:

 1. Older versions of curl (and I do not recall which version off-hand,
but it is not important) stored just the pointer. Calling code was
required to manage the string lifetime itself.

 2. Newer versions of curl will strdup the string in curl_easy_setopt.


That's correct. This new behavior in (2) was introduced in libcurl 7.17.0 - 
released in September 2007 - so the old behavior should be fairly rare by now.


I mention this primarily because I think it should be noted that there will 
probably be very little testing by users with such old libcurl versions. It 
may increase the time between a committed change and people noticing breakages 
caused by it. Even Debian old-stable has a much newer version.


For older versions, if we were to grow the strbuf, we might free() the 
pointer provided to an earlier call to curl_easy_setopt. But since we are 
about to call curl_easy_setopt with the new value, I would assume that curl 
will never actually look at the old one (i.e., when replacing an old 
pointer, it would not dereference it, but simply overwrite it with the new 
value).


Another accurate description.
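
In practical terms, with 7.17.0 or later this kind of pattern is safe (a 
sketch; the option and buffer are just illustrative):

#include <curl/curl.h>
#include <stdio.h>

/* since libcurl 7.17.0, curl_easy_setopt() copies string arguments, so a
 * scratch buffer that is rewritten or freed afterwards is fine */
static void set_userpwd(CURL *curl, const char *user, const char *pass)
{
    char buf[256];

    snprintf(buf, sizeof(buf), "%s:%s", user, pass);
    curl_easy_setopt(curl, CURLOPT_USERPWD, buf);
    /* buf may now go out of scope or be reused */
}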

--

 / daniel.haxx.se


Re: DNS issue when cloning over HTTP and HTTPS

2013-06-17 Thread Daniel Stenberg

On Tue, 18 Jun 2013, LCD 47 wrote:


   Cloning with the git protocol works as expected.

A search on the net shows people having the same problem more than a year 
ago, and the solution there seems to imply that Git can't cope with async 
DNS in curl:


http://osdir.com/ml/freebsd-ports-bugs/2012-05/msg00095.html

   Any idea?


It's not a git problem really. When you build libcurl to use c-ares for async 
name resolving you unfortunately don't get a fully feature-complete 
replacement for everything the stock synchronous resolver can do, and I believe 
you (and the person from that link from last year) are experiencing that.


The solution for you is to:

a) rebuild libcurl with another resolving backend (there's a synchronous one 
and a threaded asynchronous one to choose from)

 or

b) fix c-ares to work properly in this scenario as well

--

 / daniel.haxx.se


Re: SNI (SSL virtual hosts)

2013-06-05 Thread Daniel Stenberg

On Tue, 4 Jun 2013, Janusz Harkot wrote:

valid point, but from what you can find on the web, the only solution 
provided everywhere was to disable certificate checking… so maybe that's not 
me, but this is the first time someone spent some time to check what's going 
on :)


I don't disagree with that. You may be right.

But I am the maintainer of libcurl and I have *never* gotten a report about 
this before, and I rather base my actions and assumptions on true reports from 
actual developers with whom I can discuss and delve into details (like you and 
me right now). Basing decisions on vague statements posted elsewhere by 
unknown people is for sure a road into sadness.


Anyway, now I'm off topic. I'm glad you could fix the problem. Thanks for 
flying git + libcurl! =)


--

 / daniel.haxx.se

Re: SNI (SSL virtual hosts)

2013-06-04 Thread Daniel Stenberg

On Tue, 4 Jun 2013, Janusz Harkot wrote:

The strange thing was that the initial communication was OK (http GET), but 
when there was an http POST, git reported an error (incorrect certificate). 
The only workaround was to disable certificate verification.


My question is: does git support SNI over https? If so, are there 
(undocumented) options to make it work?


It does. git uses libcurl for the HTTPS parts and it has supported SNI for a 
long time, assuming you built libcurl with a TLS library that handles it.


Which libcurl version and SSL backend is this? (curl -V usually tells)

If you made it work by disabling certificate verification then it sounds as 
if SNI might still have worked and the problem was rather something else, as 
without SNI you can't do name-based virtual hosting over HTTPS - but perhaps 
you wanted to communicate with the default server on that IP?


--

 / daniel.haxx.se


Re: SNI (SSL virtual hosts)

2013-06-04 Thread Daniel Stenberg

On Tue, 4 Jun 2013, Janusz Harkot wrote:


Which libcurl version and SSL backend is this? (curl -V usually tells)

$ curl -V
curl 7.24.0 (x86_64-apple-darwin12.0) libcurl/7.24.0 OpenSSL/0.9.8r 
zlib/1.2.5


From what I can tell, that OpenSSL version supports SNI fine, and libcurl has 
supported it since 7.18.1.


here is a log (with GIT_CURL_VERBOSE=1)

https://gist.github.com/anonymous/8f6533a755ae5c710c75

Initial connection is correct (line 10 shows that it reads the correct 
certificate), but then a subsequent call to the server (line 68) shows that 
the default server certificate is used.


It looks like the second call was without hostname (?).


What makes you suggest that's what's happening? Sure, if it would've sent no 
or the wrong host name it would probably have that effect.


Any chance you can snoop on the network and the SSL handshake to see who's to 
blame? I can't help but think that this is a very common use case!


--

 / daniel.haxx.se


Re: SNI (SSL virtual hosts)

2013-06-04 Thread Daniel Stenberg

On Tue, 4 Jun 2013, Janusz Harkot wrote:


What makes you suggest that's what's happening? Sure, if it would've sent no
or the wrong host name it would probably have that effect.


line:

[36] * Re-using existing connection! (#0) with host (nil)


Ah that. Yes, that's a stupid line to show (that bug has been fixed since). 
But if you look further down your log you see that the connection which is 
re-used according to that log line gets closed anyway.



it looks like it is working


Awesome!

So, the question is still why it is not working with openssl 0.9.8r - this 
version supports SNI by default. This looks like an error in openssl (maybe: 
Only allow one SGC handshake restart for SSL/TLS.)


Right. As you can see in the libcurl code, it activates SNI for OpenSSL the 
exact same way independently of which version is used.


Now the question is: should this be handled by curl or left alone? (handling 
older versions of openssl, and forcing a new ssl session?)


I'm not even completely convinced this is just an old-OpenSSL-problem. If 
that version you're using is the one Apple has provided, there's the risk that 
the problem is rather caused by their changes!


I'm reluctant to globally switch off session-id caching for OpenSSL 0.9.8 
users since that feature has been used for over 8 years in the code and you're 
the first to have a problem with it! =-/


--

 / daniel.haxx.se


Re: segfault in git-remote-http

2013-04-09 Thread Daniel Stenberg

On Tue, 9 Apr 2013, Jeff King wrote:

git-remote-http does not touch the openssl code itself at all. We only talk 
to curl, which handles all of the ssl (and may even be built on gnutls). So 
if that is the problem, then I think it may be a libcurl bug, not a git one.


... and if/when you do make it a libcurl bug, please include more details, 
including how to repeat the problem and which versions of libcurl and OpenSSL 
you're using.


--

 / daniel.haxx.se


Re: [PATCH/RFC] http_init: only initialize SSL for https

2013-03-17 Thread Daniel Stenberg

On Sun, 17 Mar 2013, Antoine Pelisse wrote:


With redirects taken into account, I can't think of any really good way
around avoiding this init...


Is there any way for curl to initialize SSL on demand?


Yes, but not without drawbacks.

If you don't call curl_global_init() at all, libcurl will notice that on first 
use and then libcurl will call global_init by itself with a default bitmask.


That automatic call of course prevents the application from being able to 
set its own bitmask choice, and also the global_init function is not 
(necessarily) thread safe while all other libcurl functions are, so the 
internal call to global_init from an otherwise thread-safe function is 
unfortunate.
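
The usual way to stay out of that trap is to make the one global call 
explicitly, early and while still single-threaded - a minimal sketch:

#include <curl/curl.h>

int main(void)
{
    /* do the non-thread-safe global setup exactly once, before any threads
     * are started and before any other libcurl call; CURL_GLOBAL_ALL
     * includes the SSL initialization */
    curl_global_init(CURL_GLOBAL_ALL);

    /* ... curl_easy_init(), transfers, etc. ... */

    curl_global_cleanup();
    return 0;
}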


--

 / daniel.haxx.se


Re: [PATCH/RFC] http_init: only initialize SSL for https

2013-03-16 Thread Daniel Stenberg

On Sat, 16 Mar 2013, Jeff King wrote:

But are we correct in assuming that curl will barf if it gets a redirect to 
an ssl-enabled protocol? My testing seems to say yes:


Ah yes. If it switches over to an SSL-based protocol it will pretty much 
require that it had been initialized previously.


With redirects taken into account, I can't think of any really good way around 
avoiding this init...


--

 / daniel.haxx.se


Re: [PATCH/RFC] http_init: only initialize SSL for https

2013-03-15 Thread Daniel Stenberg

On Thu, 14 Mar 2013, Junio C Hamano wrote:

As to ALL vs DEFAULT, given that its manual page is riddled with a scary 
warning:


   This function must be called at least once within a program (a
   program is all the code that shares a memory space) before the
   program calls any other function in libcurl. The environment it sets
   up is constant for the life of the program and is the same for every
   program, so multiple calls have the same effect as one call.  ... In
   normal operation, you must specify CURL_GLOBAL_ALL. Don't use any
   other value unless you are familiar with it and mean to control
   internal operations of libcurl.


(speaking from a libcurl perspective)

The warning is just there to scare people into actually considering what they 
want and understanding that removing bits will change behavior. I would say 
that's exactly what you've done and I don't think people here need to be 
scared anymore! :-)


As for how ALL vs DEFAULT will act or differ in the future, I suspect that we 
will end up having them be the same (even when we add bits), as we've 
encouraged ALL in the documentation like this for quite some time.


--

 / daniel.haxx.se


Re: [PATCH/RFC] http_init: only initialize SSL for https

2013-03-15 Thread Daniel Stenberg

On Fri, 15 Mar 2013, Junio C Hamano wrote:

As for how ALL vs DEFAULT will act or differ in the future, I suspect that 
we will end up having them being the same (even when we add bits) as we've 
encouraged ALL in the documentation like this for quite some time.


Thanks, then we should stick to starting from ALL like everybody else who 
followed the suggestion in the documentation.  Do you have recommendations 
on the conditional dropping of SSL?


Not really, no.

SSL initing is, as has been mentioned already, only relevant with libcurl if 
an SSL-powered protocol is going to be used, so if checking the URL for the 
protocol is enough to figure this out then sure, that should work fine.


--

 / daniel.haxx.se


Re: Using socks proxy with git for http(s) transport

2013-03-06 Thread Daniel Stenberg

On Wed, 6 Mar 2013, Yves Blusseau wrote:

I have tried with an old version of curl, 7.15.5, and with the latest 
in-development curl, 7.29.1-DEV. But it seems that git-remote-http is compiled 
with the old one.


libcurl 7.15.5 is over 6 years old.

The support for socks[*]:// prefixes in proxy names was added to libcurl 
7.21.7 (June 23 2011).


--

 / daniel.haxx.se


Re: https_proxy and libcurl

2013-02-22 Thread Daniel Stenberg

On Fri, 22 Feb 2013, Junio C Hamano wrote:


http_proxy=http://proxy.myco.com
https_proxy=https://proxy.myco.com

The problem is that libcurl ignores the protocol part of the proxy
url, and it defaults to port 1080. wget honors the protocol specifier,
but it defaults to port 80 if none is given.


IIRC, the historical norm is to set these to host:port.

So many people mistakenly write them with method:// that some tools over 
time learned to strip and ignore that prefix, though.


You're right, but also what exactly is the https:// _to_ the proxy supposed to 
mean? The standard procedure to connect to a proxy when communicating with 
either HTTP or HTTPS is using plain HTTP.


If you want port 443 for the connection to the proxy used for HTTPS, you'd use:

  https_proxy=http://proxy.myco.com:443

Or, without the (ignored) protocol prefix:

  https_proxy=proxy.myco.com:443

--

 / daniel.haxx.se


Re: Fix potential hang in https handshake.

2012-10-19 Thread Daniel Stenberg

On Fri, 19 Oct 2012, Shawn Pearce wrote:

The issue with the current code is sometimes when libcurl is opening a 
CONNECT style connection through an HTTP proxy it returns a crazy high 
timeout (240 seconds) and no fds. In this case Git waits forever.


Is this repeatable with a recent libcurl? It certainly sounds like a bug to 
me, and I might be interested in having a go at tracking it down...


--

 / daniel.haxx.se