On Mon, Oct 1, 2012 at 3:18 PM, Jeff King <p...@peff.net> wrote:
> On Mon, Oct 01, 2012 at 02:23:06PM -0700, Shawn O. Pearce wrote:
>> When libcurl fails to connect to an SSL server always retry the
>> request once. Since the connection failed before the HTTP headers
>> can be sent, no data has changed hands, so the remote side has
>> not learned of the request and will not perform it twice.
> I find this a little distasteful just because we haven't figured out the
> actual _reason_ for the failure. That is, I'm not convinced this isn't
> something that curl or the ssl library could handle internally if we
> would only configure them correctly. Did you ever follow up on tweaking
> the session caching options for curl?
No. I didn't try because I reproduced the issue on the initial "GET
/.../info/refs?service=git-upload-pack" request with no authentication
required. So the very first thing the remote-https process did was
fail on an SSL error. During this run I was using a patched Git that
had a different version of the retry logic, but it immediately retried
and the retry was successful. At that point I decided the SSL session
cache couldn't possibly be relevant, since the first request failed and
the immediate retry was OK.
> Have you tried running your fails-after-a-few-hours request with other
> clients that don't have the problem and seeing what they do
This is harder to reproduce than you think. It took me about 5 days of
continuous polling to reproduce the error. And I have thus far only
reproduced it against our production servers. This makes it very hard
to test anything. Or to prove that any given patch is better than
another.
> [...] thinking a small webkit harness or something would be the most
> [...]
So I suspect the contrib/persistent-https proxy thing in Go actually
papers over this problem by having the Go SSL client handle the
connection. But this is only based on a test I ran through that proxy
for several days without reproducing the bug. That doesn't mean the bug
can't reproduce with the proxy; it just means _I_ didn't get lucky and
hit an error in a ~48 hour run.
> which means it shouldn't really be affecting the general populace. So
> even though it feels like a dirty hack, at least it is self-contained,
> and it does fix a real-world problem. If your answer to the above
> questions is "hunting this further is just not worth the effort", I can
> live with that.
I am sort of at that point, but the hack is so ugly... yeah, we
shouldn't have to do this, or pollute our code with it. I'm willing to
go back and iterate on this further, but it's going to be a while
before I can provide any more information.
>> diff --git a/remote-curl.c b/remote-curl.c
>> index a269608..04a379c 100644
>> --- a/remote-curl.c
>> +++ b/remote-curl.c
>> @@ -353,6 +353,8 @@ static int run_slot(struct active_request_slot *slot)
>> slot->results = &results;
>> slot->curl_result = curl_easy_perform(slot->curl);
>> + if (slot->curl_result == CURLE_SSL_CONNECT_ERROR)
>> + slot->curl_result = curl_easy_perform(slot->curl);
> How come the first hunk gets a nice for-loop and this one doesn't?
Both hunks retry exactly once after an SSL connect error. I just tried
to pick something reasonably clean to implement. This hunk seemed
simplest with the if; for the other, a loop seemed the simplest way to
get a retry in there.
> Also, are these hunks the only two spots where this error can come up?
> The first one does http_request, which handles smart-http GET requests.
The second does run_slot, which handles smart-http POST requests.
Grrr. I thought I caught all of the curl perform calls but I guess I
missed the dumb transport.
> Some of the dumb http fetches will go through http_request. But some
> will not. And I think almost none of dumb http push will.
Well, don't use those? :-)
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html