Hi Willy and everyone,

> Subject: [ANNOUNCE] haproxy-1.5-dev20
>
> Hi all,
>
> here is probably the largest update we ever had, it's composed of 345
> patches!

Wow, that's one hell of a -dev release, nice work :)



> - keep-alive: the dynamic allocation of the connection and applet in the
> session now allows to reuse or kill a connection that was previously
> associated with the session. Thus we now have a very basic support for
> keep-alive to the servers. There is even an option to relax the load
> balancing to try to keep the same connection. Right now we don't do
> any connection sharing so the main use is for static servers and for
> far remote servers or those which require the broken NTLM auth. That
> said, the performance tests I have run show an increase from 71000
> connections per second to 150000 keep-alive requests per second running
> on one core of a Xeon E5 3.6 GHz. This doubled to 300k requests per
> second with two cores. I didn't test above, I lacked injection tools :-)

Very nice.

A few questions about this:

I keep reading that server-side keep-alive is not a big win on low-latency
LAN networks (the docs also suggest this to some extent), but I am really
not so sure about that, and your tests seem to contradict it as well.

Did you run your test in a low-latency LAN environment (I suspect you
did), and what was the object size?


How does end-to-end keep-alive work in conjunction with:
 timeout http-keep-alive <timeout>

When does the server side connection get closed? When the original client
connection closes?
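For reference, my understanding of the knobs involved (a minimal sketch only;
I'm assuming the new `option http-keep-alive` mode together with the
relaxed-balancing option from the announcement, and that `timeout
http-keep-alive` bounds the idle wait for the next request):

```
defaults
    mode http
    option http-keep-alive        # keep-alive on both the client and server side
    option prefer-last-server     # relax balancing to try to reuse the previous
                                  # server connection (the option from the announce)
    timeout http-keep-alive 10s   # max idle time waiting for a new request
    timeout client 30s
    timeout server 30s
```

Please correct me if the server-side idle connection is governed by something
else entirely.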

I guess once we have connection sharing, it will make sense to have
independent keep-alive timeouts on the client and server sides, and
perhaps even "connection pools" to backends, with min/max values for
idle keep-alive connections. But that is 1.6 material, I suspect ;)

IIRC one of the Amazon cloud load-balancer solutions (or was it Cloudflare?)
always maintains at least one TCP session to every backend.


Shouldn't server-side TCP Fast Open have a similar effect on performance, btw?



> I still have to perform difficult changes on the health checks system to
> adapt the agent-check after identifying some limitations caused by the
> original design we decided on a few months ago.
>
> Another set of pending changes concerns the polling. Yes I know, each time
> I touch the pollers I break things. But I need to do them, as now with
> keep-alive it becomes obvious that we waste too much time enabling and
> disabling polling because we miss one flag ("this FD last returned EAGAIN").
> The good point is that it will simplify the connection management and checks.
>
> If these points are done quick enough, I'll see if I can implement a very
> basic support for server connection sharing connection (without which I
> still consider keep-alive as not a huge improvement).

It would be great to have this in 1.5!



> - optimizations (splicing, polling, etc...) : a few percent CPU could be
> saved ;
>
> - memory : the connections and applets are now allocated only when needed.
> Additionally, some structures were reorganized to avoid fragmentation on
> 64-bit systems. In practice, an idle session size has dropped from 1936
> bytes to 1296 bytes (-640 bytes, or -33%).

I also like that :)




Keep up the good work!



Regards,

Lukas