On Fri, Nov 25, 2016 at 02:44:35PM +0100, Christian Ruppert wrote:
> I have a default bind for process 1 which is basically the http frontend and
> the actual backend, RSA is bound to another, single process and ECC is bound
> to all the rest. So in this case SSL (in particular ECC) is the problem. The
> connections/handshakes should be *actually* using CPU+2 till NCPU.
That's exactly what I'm talking about, look, you have this :
    frontend ECC
        bind-process 3-36
        bind :65420 ssl crt /etc/haproxy/test.pem-ECC
        mode http
        default_backend bk_ram
It creates a single socket (hence a single queue) shared between all
the processes. Each incoming connection thus wakes up every idle
process, and the first one able to grab it takes it, along with a few
of the following connections if any are queued. You end up with a very
unbalanced load that makes it hard to scale.
Instead you can do this :
    frontend ECC
        bind :65420 ssl crt /etc/haproxy/test.pem-ECC process 3
        bind :65420 ssl crt /etc/haproxy/test.pem-ECC process 4
        bind :65420 ssl crt /etc/haproxy/test.pem-ECC process 5
        bind :65420 ssl crt /etc/haproxy/test.pem-ECC process 6
        ...
        bind :65420 ssl crt /etc/haproxy/test.pem-ECC process 36
        mode http
        default_backend bk_ram
This way you'll really have 34 listening sockets, each with its own
queue, and the load will be fairly balanced between them. You can
generally achieve higher loads this way, with a lower average latency.
Also, I tend to bind network IRQs to the same cores as those doing SSL,
because you hardly ever have both at once: SSL cannot process traffic
fast enough to saturate a NIC driver. So when SSL saturates the CPU
there is little traffic, and when the NIC needs all the CPU for high
traffic, you know there's little SSL going on.
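For the IRQ binding, a minimal sketch could look like the following.
The interface name (eth0), the way IRQs are matched in
/proc/interrupts and the core range 2-35 (matching processes 3-36
above, with cores numbered from 0) are all assumptions to adapt to the
actual machine; it only prints the intended writes so they can be
reviewed before applying them as root :

```shell
#!/bin/sh
# Sketch: spread eth0 queue IRQs round-robin over the SSL cores
# (2-35 here). Prints the writes instead of performing them.

# /proc/irq/<n>/smp_affinity takes a hex CPU bitmask; this helper
# builds the mask for a single CPU number (e.g. CPU 2 -> "4").
cpu_mask() { printf '%x' $((1 << $1)); }

core=2
for irq in $(awk '/eth0/ { sub(":", "", $1); print $1 }' /proc/interrupts); do
    printf 'echo %s > /proc/irq/%s/smp_affinity\n' "$(cpu_mask "$core")" "$irq"
    core=$(( core == 35 ? 2 : core + 1 ))
done
```

Note that irqbalance must be stopped first, otherwise it will happily
move the IRQs back.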
Cheers,
Willy