On 28/11/2014 01:19 PM, Baptiste wrote:
> On Wed, Nov 26, 2014 at 9:48 PM, Pavlos Parissis
> <pavlos.paris...@gmail.com> wrote:
>> On 25/11/2014 07:08 PM, Lukas Tribus wrote:
>>>> Hi,
>>>>
>>>> Thanks for your reply. We have tried this approach and while it gives
>>>> some benefit, the haproxy process itself remains cpu-bound, with no
>>>> idle time at all - with both pidstat and perf reporting that it uses
>>>> close to 100% of available cpu while running.
>>>
>>> I think SSL/TLS termination is the only use case where HAProxy
>>> saturates a CPU core of a current generation 3.4GHz+ CPU, which is why
>>> scaling SSL/TLS is more complex, requiring nbproc > 1.
>>>
>>> Lukas
>>
>> I am experiencing the same 'expected' behavior, where SSL computation
>> drives HAProxy's user-space CPU usage very high.
>>
>> Using SSL tweaks like ECDSA/ECDH algorithms, TLS session IDs and
>> ticketing helps, but it is not the ultimate solution. The HAProxy guys
>> had a webinar about HAProxy and SSL a few weeks ago, and they mentioned
>> using multiple processes. They also mentioned that the SSL session
>> cache is shared between all these processes, which is very efficient.
>>
>> Cheers,
>> Pavlos
>>
> 
> Hi Pavlos,
> 
> you're right.
> If you need to scale your SSL processing capacity in HAProxy *a lot*,
> you must use multiple processes.
> That said, the multi-process model has some downsides (stats, server
> status and health checks are local to each process, stick-tables can't
> be synchronized, etc.).
> With HAProxy 1.5, we can now start multiple stats sockets and stats
> pages and bind them to each process, lowering the impact.
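
For anyone reading this in the archives, that boils down to something
like the following in 1.5 (the socket paths, ports and process count
here are just an example, not the exact settings from the webinar):

global
    nbproc 4
    # one admin-level stats socket per process
    stats socket /var/run/haproxy-1.sock process 1 level admin
    stats socket /var/run/haproxy-2.sock process 2 level admin
    stats socket /var/run/haproxy-3.sock process 3 level admin
    stats socket /var/run/haproxy-4.sock process 4 level admin

# one stats page per process, each on its own port
listen stats-proc1
    bind :8001 process 1
    stats enable
    stats uri /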

I don't see having multiple stats sources as a problem. I have written a
Python lib (which I need to dig up, polish a bit and upload to GitHub)
which aggregates stats from multiple stats sockets (show info,
enable/disable/weight change commands); a rough sketch of the idea is
below. But it could get tricky when you have a complex mapping between
CPUs and frontends/backends with one-to-many or even many-to-many
relationships.
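
The core of it is just sending a command to each UNIX stats socket and
merging the replies. A stripped-down sketch (socket paths are examples,
and it naively sums every numeric "show info" field, including ones like
Pid where adding makes no sense):

import socket

# One stats socket per HAProxy process (adjust to your setup).
SOCKETS = ["/var/run/haproxy-1.sock", "/var/run/haproxy-2.sock"]

def stats_cmd(path, cmd):
    """Send one command to a HAProxy stats socket and return the raw reply."""
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(path)
    s.sendall(cmd.encode("ascii") + b"\n")
    chunks = []
    while True:
        data = s.recv(4096)
        if not data:
            break  # HAProxy closes the connection after one reply
        chunks.append(data)
    s.close()
    return b"".join(chunks).decode("ascii")

def aggregated_show_info():
    """Sum the numeric 'show info' fields across all processes."""
    totals = {}
    for path in SOCKETS:
        for line in stats_cmd(path, "show info").splitlines():
            if ":" not in line:
                continue
            key, value = line.split(":", 1)
            try:
                totals[key.strip()] = totals.get(key.strip(), 0) + int(value)
            except ValueError:
                pass  # skip non-numeric fields (Name, Version, ...)
    return totals

if __name__ == "__main__":
    for key, value in sorted(aggregated_show_info().items()):
        print("%s: %s" % (key, value))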

> That said, if stats, peers, etc. matter and you still need huge SSL
> processing capacity, then the best way is to use a first layer of
> multi-process HAProxy to decrypt the traffic and point it to a second
> layer of HAProxy running in single-process mode.
> 

This is a bit of a complex setup.
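
If I understand it correctly, the two layers would be two separate
instances, roughly like this (addresses, certificate path and backend
servers are made up; defaults/timeouts trimmed):

# layer1.cfg - multi-process, SSL termination only
global
    nbproc 4

frontend fe-ssl
    bind :443 ssl crt /etc/haproxy/site.pem
    mode http
    default_backend be-layer2

backend be-layer2
    mode http
    # pass the client address to the second layer via the PROXY protocol
    server layer2 127.0.0.1:8080 send-proxy

# layer2.cfg - single process: stats, peers, stick-tables, L7 logic
frontend fe-clear
    bind 127.0.0.1:8080 accept-proxy
    mode http
    default_backend be-app

backend be-app
    mode http
    server app1 10.0.0.11:80 check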

Pavlos


