Hi Dan,
>> Instead, for your environment you should use http-server-close:
>> http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#option%20http-server-close
>
> Does this actually close the HTTP session, close the sockets, etc?

Yes.

> Because of the large number of requests per second (~150k over our
> whole cluster) I'm afraid that the overhead of connection
> establishment might be problematic for us if so.

Are the backend servers local (in the same datacenter) or are they on the
other side of the planet? What's the latency between the proxy and the
backend server? If it's just a low-latency local LAN, this will work
just fine.

>> This way, the frontend keeps doing keepalive with your clients,
>> maintaining long-lived sessions, while your backend closes the
>> connection after the response; thus, leastconn works for every single
>> request.
>
> Is there a hybrid mode, where connections (in the TCP sense) are kept
> alive between HAProxy and the backends, but where individual HTTP
> requests are potentially balanced to any of the existing backend
> connections?

That would be keepalive with connection multiplexing/pooling, and it is
not currently supported (but will probably come in 1.6).

> You mentioned pre-1.5 (we're on 1.4.x), has something
> changed in 1.5 that might be a fit for our use case?

You can use 1.4 with http-server-close just fine, but you may want to
give 1.5 with keep-alive mode + leastconn a try. Check it out:
http://cbonte.github.io/haproxy-dconv/configuration-1.5.html#4.2-option%20http-keep-alive

*If the client request has to go to another backend or another server due
to content switching or the load balancing algorithm, the idle connection
will immediately be closed and a new one re-opened.*

So this should work in your case. Just remember that because of your
leastconn configuration you will still have a lot of TCP session churn,
because backend --> server TCP sessions are not multiplexed/pooled, but
are dedicated to a single frontend TCP session.
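For reference, a minimal sketch of the setup described above (frontend/backend names and server addresses are placeholders, not from your environment):

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s
    # Close the server-side connection after each response; the
    # client-side connection stays in keep-alive, so leastconn
    # applies to every single request.
    option http-server-close

frontend fe_web
    bind :80
    default_backend be_app

backend be_app
    balance leastconn
    server app1 10.0.0.11:8080 check
    server app2 10.0.0.12:8080 check
```

On 1.5 you could swap "option http-server-close" for "option
http-keep-alive" in the defaults section to try server-side keep-alive as
well, keeping in mind the re-open behavior quoted above.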
However, I don't think this is a big problem if the proxy -> server
latency is low.

Regards,
Lukas

