On Mon, Oct 1, 2012 at 10:38 PM, Shawn Pearce <spea...@spearce.org> wrote:
> On Mon, Oct 1, 2012 at 3:18 PM, Jeff King <p...@peff.net> wrote:
>> On Mon, Oct 01, 2012 at 02:23:06PM -0700, Shawn O. Pearce wrote:
>>> When libcurl fails to connect to an SSL server, always retry the
>>> request once. Since the connection failed before the HTTP headers
>>> could be sent, no data has changed hands, so the remote side has
>>> not learned of the request and will not perform it twice.
>> I find this a little distasteful just because we haven't figured out the
>> actual _reason_ for the failure.
> No. I didn't try because I reproduced the issue on the initial "GET
> /.../info/refs?service=git-upload-pack" request with no authentication
> required. So the very first thing the remote-https process did was
> fail on an SSL error. During this run I was using a patched Git that
> had a different version of the retry logic, but it immediately retried
> and the retry was successful. At that point I decided the SSL session
> cache couldn't possibly be relevant, since the first request failed and
> the immediate retry was OK.
>> Have you tried running your fails-after-a-few-hours request with other
>> clients that don't have the problem and seeing what they do?
> This is harder to reproduce than you think. It took me about 5 days of
> continuous polling to reproduce the error. And I have thus far only
> reproduced it against our production servers. This makes it very hard
> to test anything. Or to prove that any given patch is better than a
> different version.
The only sure way to verify your patch works is to get your load
balancers Slashdotted first (for the reason noted in my previous mail
on this subject). For the sake of your relationship with your
networking crew, I'd advise against doing that intentionally.
>> which means it shouldn't really be affecting the general populace. So
>> even though it feels like a dirty hack, at least it is self-contained,
>> and it does fix a real-world problem. If your answer to the above
>> questions is "hunting this further is just not worth the effort", I can
>> live with that.
> I am sort of at that point, but the hack is so ugly... yea, we
> shouldn't have to do this. Or pollute our code with it. I'm willing to
> go back and iterate on this further, but it's going to be a while
> before I can provide any more information.
>> How come the first hunk gets a nice for-loop and this one doesn't?
> Both hunks retry exactly once after an SSL connect error. I just tried
> to pick something reasonably clean to implement. This hunk seemed
> simple with the if; the other was uglier, and a loop seemed the
> simplest way to get a retry in there.
If the problem you are having is indeed with a load-balanced setup,
then applying TCP/IP-like back-off semantics is the right way to go.
The only reason the network stack isn't doing it for you is that the
load balancers wait for the SSL/TLS handshake to start before dumping
the "excess" SSL connections (those exceeding the license limit).