Hello Dmitry,

On Mon, Jan 11, 2010 at 03:37:36PM +0300, Dmitry Sivachenko wrote:
> On Mon, Jan 04, 2010 at 12:13:49AM +0100, Willy Tarreau wrote:
> > Hi all,
> > 
> > Yes that's it, it's not a joke !
> > 
> >  -- Keep-alive support is now functional on the client side. --
> > 
> 
> Hello!
> 
> Are there any plans to implement server-side HTTP keep-alive?

yes, along the full path. Maybe a first version of it could be
ready within a few weeks with some limitations.

> I mean I want client connecting to haproxy NOT to use keep-alive,
> but to utilize keep-alive between haproxy and backend servers.

Hmmm, that's different. There are issues with the HTTP protocol
itself which make this extremely difficult. When you keep a
connection alive in order to send a second request, you never
know whether the server will suddenly close it. If it does, the
client must retransmit the request, because only the client
knows whether it is safe to resend. An intermediary such as a
proxy is not allowed to do so, because it might end up sending
two orders for a single request.

The problem is, clients are already aware of this and will happily
replay a request following the first one in case of an unexpected
session termination. But they never do this if the session
terminates during the first request.
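To make the retry rule concrete, here is a small illustrative sketch (hypothetical helper, not haproxy code) of the decision only a client can make: HTTP/1.1 treats GET, HEAD, PUT, DELETE, OPTIONS and TRACE as idempotent, while POST is not, so replaying a POST could submit two orders.

```python
# Hypothetical sketch: the retry decision that only the client can make.
# RFC 2616 defines GET/HEAD/PUT/DELETE/OPTIONS/TRACE as idempotent; POST is not.
IDEMPOTENT_METHODS = {"GET", "HEAD", "PUT", "DELETE", "OPTIONS", "TRACE"}

def client_may_retry(method, response_bytes_received):
    """A client may replay a request when the server closes an idle
    kept-alive connection, but only if the method is idempotent and
    no response data has arrived yet (otherwise the server may
    already have acted on the request)."""
    return method in IDEMPOTENT_METHODS and response_bytes_received == 0

# An intermediary cannot apply this rule safely: it does not know the
# application's semantics, so replaying a POST could duplicate an order.
```

This is exactly why the retransmission must stay on the client side rather than be done by a proxy in the middle.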

So by doing what you describe, your clients would regularly get some
random server errors when a server closes a connection it does not
want to sustain anymore before haproxy has a chance to detect it.

Another issue is that there are (still) some buggy applications which
believe that all the requests from the same session were initiated by
the same client. So such a feature must be used with extreme care.

Last, I'd say there is, in my opinion, little benefit in doing that.
Most of the time is spent between the client and haproxy. Haproxy
and the server are on the same LAN, so a connection setup/teardown
there is extremely cheap; it's where we manage to run at more than
40000 connections per second (including connection setup, sending the
request, receiving the response and closing). That means only 25
microseconds for the whole process, which isn't measurable at all by
the client and is extremely cheap for the server.
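As a quick sanity check of that figure, the per-connection cost follows directly from the rate:

```python
# Back-of-the-envelope check: at 40000 full connection cycles per
# second, each cycle (setup + request + response + close) costs 1/40000 s.
rate = 40000                        # connections per second on the LAN side
cost_us = 1_000_000 / rate          # microseconds per complete cycle
print(cost_us)                      # 25.0
```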

The only case where it could help would be if we were able to use HTTP
pipelining on the server side, meaning that we aggregate multiple
requests into one single batch, wait for all the responses at once
and dispatch them to their respective owners. But that does not work
with most servers, requires a higher degree of complexity on the
proxy, and emphasizes the issues described above even more.
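For illustration, pipelining amounts to writing several requests back-to-back on one kept-alive connection before reading any response. A minimal sketch (hypothetical helper, not haproxy internals) of building such a batch:

```python
# Illustrative sketch of HTTP pipelining: several GET requests are
# concatenated into one batch of bytes, to be written in a single
# send on a kept-alive connection. Hypothetical helper, not haproxy code.

def build_pipelined_batch(host, paths):
    """Build one byte string containing back-to-back HTTP/1.1 GET
    requests for the given paths, all on the same connection."""
    batch = b""
    for path in paths:
        batch += (
            f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: keep-alive\r\n"
            f"\r\n"
        ).encode("ascii")
    return batch

# The proxy would then have to read the responses strictly in order
# and dispatch each one back to its respective owner -- which is where
# the extra complexity and the replay issues described above come in.
```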

In fact, I'd say that for an application to really benefit from that,
it would have to be specifically written for it. In this case, better
write it for normal users :-)

Best regards,
Willy
