Hi again David,

On Wed, Sep 12, 2012 at 11:40:04PM -0600, David Torgerson wrote:
> I have been benchmarking the haproxy SSL implementation for the past few
> days and the results are very impressive. We have multiple ssl terminators
> set up for redundancy and capacity. I took one of our terminators and was
> able to get around 9500 new terminations a second with a 2048 bit cert
> using stunnel + haproxy with accept-proxy. Doing the same test with haproxy
> I was able to get around 11500 new TPS with the same 2048 bit cert!
>
> I know that there is a shared cache across multiple processes on the same
> box. Are there any plans on implementing/consuming a shared SSL cache
> across multiple systems? Stunnel has a sessiond implementation:
> http://www.stunnel.org/sessiond.html
As you know, we don't like to have any SPOF in the chain, so we're used to
replicating in multi-master mode in haproxy. When we implement SSL session
sharing, we'll do it this way.

> We have a set of hardware load balancers operating in layer 4 which balance
> traffic across our multiple ssl terminator servers. Currently we use
> stunnel's sessiond so that they can all share the same ssl session cache.
> One option would be to configure our layer 4 load balancer to keep ip
> sessions sticky so that the same ip address lands on the same ssl
> terminator every time, but this causes uneven load across the multiple ssl
> terminators.

I'm realizing that at such a load, a single (redundant) haproxy LB in the
first layer would be enough, so you could take my example from the previous
mail and even remove the peers and the L4 LB if you want to simplify it.
Well, the peers at least provide redundancy. Anyway, you could have:

  1) vrrp + haproxy + send-proxy + stick-table + peers
  2) haproxy + accept-proxy + ssl

This is simpler and gets rid of the L4 LB. A rough sketch of both layers
follows at the end of this mail.

> Is this feature something that is feasible?

Yes it is. What's your schedule? Do you think the architecture described
above is not fine for you, or is it just that you didn't think about it?

Regards,
Willy
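
P.S. Here is a rough, untested sketch of the first layer, in case it helps.
All addresses, ports, peer names and server names are made up, so adapt
them to your environment. It runs in pure TCP mode, sticks on the SSL
session ID announced in the hellos (the payload_lv example from the
configuration manual), shares that table with the backup LB through peers,
and passes the client's address to the terminators with send-proxy :

peers sslpeers
    # made-up names/addresses; one entry per first-layer LB (VRRP pair)
    peer lb1 10.0.0.1:1024
    peer lb2 10.0.0.2:1024

listen ssl-l4
    bind :443
    mode tcp
    balance roundrobin

    # the SSL session ID length is coded on 1 byte at offset 43 of the
    # hello messages, and its value (at most 32 bytes) follows at 44
    stick-table type binary len 32 size 30k expire 30m peers sslpeers

    acl clienthello req_ssl_hello_type 1
    acl serverhello rep_ssl_hello_type 2

    # wait for the client hello before deciding where to stick
    tcp-request inspect-delay 5s
    tcp-request content accept if clienthello
    tcp-response content accept if serverhello

    # match an existing session ID, and learn the one the server assigns
    stick on payload_lv(43,1) if clienthello
    stick store-response payload_lv(43,1) if serverhello

    # the SSL terminators (second layer below)
    server term1 10.0.1.1:443 send-proxy
    server term2 10.0.1.2:443 send-proxy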
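
And on each terminator, the second layer simply decrypts and recovers the
original client address from the proxy protocol (the certificate path and
backend are placeholders) :

frontend ssl-term
    bind :443 accept-proxy ssl crt /etc/haproxy/site.pem
    mode http
    default_backend app

backend app
    mode http
    server app1 10.0.2.1:80

That way a resumed session always lands on the terminator which created it,
whichever first-layer LB it entered through.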

