On Fri, May 02, 2014 at 01:32:30PM -0400, Patrick Hemmer wrote:
> >> This only applies if the "Expect: 100-continue" was sent. "Expect:
> >> 100-continue" was meant to solve the issue where the client has a large
> >> body, and wants to make sure that the server will accept the body before
> >> sending it (and wasting bandwidth). Meaning that without sending
> >> "Expect: 100-continue", it is expected that the server will not send a
> >> response until the body has been sent.
> > No, it is expected that it will need to consume all the data before the
> > connection may be reused for sending another request. That is the point
> > of 100. And the problem is that if the server closes the connection when
> > responding early (typically a 302) and doesn't drain the client's data,
> > there's a high risk that the TCP stack will send an RST that can arrive
> > before the actual response, making the client unaware of the response.
> > That's why the server must consume the data even if it responds before
> > the end.
> " A 100-continue expectation informs recipients that the client is
>    about to send a (presumably large) message body in this request and
>    wishes to receive a 100 (Continue) interim response if the request-
>    line and header fields are not sufficient to cause an immediate
>    success, redirect, or error response.  This allows the client to wait
>    for an indication that it is worthwhile to send the message body
>    before actually doing so, which can improve efficiency when the
>    message body is huge or when the client anticipates that an error is
>    likely

Yes exactly. Since there's no way to stop in the middle of a sent body,
when you start you need to complete and the other side needs to drain.
I think we're saying the same thing from two different angles :-)
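To make the drain requirement concrete, here's a minimal sketch (Python, purely illustrative — haproxy itself is C, and the function name is made up) of a server answering early but still consuming the rest of the client's body before closing, so that a pending RST cannot overtake the response:

```python
import socket

def respond_early_and_drain(conn: socket.socket, response: bytes,
                            drain_timeout: float = 5.0) -> None:
    """Send an early response (e.g. a 302) on `conn`, then drain
    whatever request body the client is still sending before closing.
    Closing with unread data pending would make the TCP stack emit an
    RST, which can reach the client before the response does."""
    conn.sendall(response)
    conn.settimeout(drain_timeout)
    try:
        while conn.recv(65536):  # consume leftover body bytes
            pass                 # until EOF
    except socket.timeout:
        pass                     # client stalled; give up draining
    conn.close()
```

The timeout is only there so a stalled client can't hold the drain loop open forever.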

> While I strongly disagree with your interpretation of "Expect:
> 100-continue", I also don't much care about 100-continue. Hardly anyone
> uses it.

100% of web services I've seen use it in order to maintain connection pools :-)
And that's stupid BTW, because they keep the connections open in order to save
a connect round trip, which is replaced with a longer roundtrip involving half
of the request in the first packet, and keeping large amounts of memory in use!

> I was just using it as documentation that the server should not
> be expected to respond before the entire request has been sent.

I know that you used it for this, but I disagree with your conclusion,
based on what I see in the field and even on what the spec says.

> The main thing I care about is not responding with 504 if the client
> freezes while sending the body. This has been a thorn in our side for
> quite some time now, and why I am interested in this patch.

I can easily understand. I've seen a place where web services were used a
lot, and in those environments they used "500" to return "not found"! Quite
a mess when you want to set up some monitoring and alerts to report servers
going sick!!!

> I've set up a test scenario, and the only time haproxy responds with 408
> is if the client times out in the middle of request headers. If the
> client has sent all headers, but no body, or partial body, it times out
> after the configured 'timeout server' value, and responds with 504.

OK that's really useful. I'll try to reproduce that case. Could you please
test again with a shorter client timeout than server timeout, just to ensure
that it's not just a sequencing issue ?
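A reproducer along those lines could look like the following hypothetical Python sketch (the host, port, path, and Content-Length are all made up): it sends complete headers announcing a body, then stalls without ever sending it, and reports which status line the proxy eventually answers with.

```python
import socket

def stalled_post(content_length: int = 1000) -> bytes:
    """Complete request headers promising a body we will never send."""
    return (b"POST /upload HTTP/1.1\r\n"
            b"Host: example\r\n"
            b"Content-Length: %d\r\n"
            b"\r\n" % content_length)

def reproduce(host: str = "127.0.0.1", port: int = 8080,
              wait: float = 120.0) -> bytes:
    """Send the headers, stall, and return the status line the proxy
    eventually answers with (408 vs 504 is the question)."""
    with socket.create_connection((host, port), timeout=wait) as s:
        s.sendall(stalled_post())
        return s.recv(4096).split(b"\r\n", 1)[0]
```

Running it once with 'timeout client' shorter than 'timeout server' and once the other way around should show whether the 504 is just a sequencing issue between the two timeouts.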

> Applying the patch solves this behavior. But my test scenario is very
> simple, and I'm not sure if it has any other consequences.

It definitely has, which is why I'm trying to find the *exact* problem in
order to fix it.

Thanks,
Willy
