On Dec 24, 2014, at 3:19 PM, Willy Tarreau <[email protected]> wrote:

> Hi guys,
> 
> On Mon, Dec 22, 2014 at 10:55:13AM -0800, Sergei Kononov wrote:
>> 
>> On Dec 22, 2014, at 10:47 AM, Lukas Tribus <[email protected]> wrote:
>> 
>>> Hi Sergei,
>>> 
>>> 
>>>>> What about if you run with 5 processes instead of 10? Are you still
>>>>> maxing out at 200k sessions (which would increase the per-process
>>>>> sessions to 40k), or are you maxing out at 100k (maintaining a max of
>>>>> 20k per process)?
>>>> 
>>>> I've tried to decrease the number of processes - it caused a decrease
>>>> in stot as well.
>>>> 
>>>>> 
>>>>> How are you benchmarking this? Are you sure the limit is not on the
>>>>> client (benchmark) side?
>>>> 
>>>> I thought so, but I'm using about ~10 virtual servers, each running
>>>> multiple copies of a testing app (Python code). Increasing the number of
>>>> virtual servers doesn't lead to an increase in connections, unfortunately.
>>> 
>>> Ok, what about bandwidth? Are you sure there is no bandwidth chokepoint
>>> somewhere?
>>> 
>> 
>> Bandwidth looks ok. I've been running multiple 'ab' instances, but its
>> handling of concurrency is odd: you can't set the number of requests lower
>> than the concurrency - so with ~100k connections, it was doing about 1M
>> requests, and it loaded bandwidth up to 1.5 Gbit.
> 
> Also keep in mind that ab cannot reliably cope with more than 1024 concurrent
> conns. Segfaults, freezes and random results are to be expected... You should
> use a fake server like httpterm which is able to wait before responding. That
> will definitely save your bandwidth and allow you to test for concurrency. In
> fact I wrote it for that exact purpose when benchmarking Apache 10 years ago!
> 
>> So after that I wrote a small Python app (with gevent) which opens many
>> connections and issues only 1 request each - so now the numbers of
>> connections and requests are about the same, bandwidth ~500-600 Mbit.
> 
> Perfect!
> 
>>> Anyway, please bump maxconn (both global and default), just to make sure
>>> you don't hit a limit there.
>> 
>> Will do, thanks!
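[Editor's note: bumping maxconn in both scopes, as suggested above, looks like this in haproxy.cfg; the 300000 value is purely illustrative, not from the thread.]

```
global
    maxconn 300000    # process-wide connection ceiling

defaults
    maxconn 300000    # per-frontend default, also bumped as suggested
```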
> 
> I suspect that you're facing some concurrency limits in openssl. Openssl
> uses dynamic buffers that are allocated/released on the fly, and it's very
> possible that there's a limit somewhere that prevents new entries from
> being allocated. Maybe it's even something we can tune and we're not aware
> of. Also, you need to be very careful when pushing concurrency too far with
> SSL. We measured a peak of memory usage reaching 92kB per connection with
> openssl during handshakes. We managed to get that down to 44kB (we have not
> yet sent our patch upstream, let's blame the lack of time). But still that's
> huge when you add this to the buffers. 200k concurrent conns *can* mean
> 18 GB of RAM just for openssl itself if all handshakes are started in
> parallel (or you're under attack with incomplete handshakes, 3 bytes are
> enough to trigger them).
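[Editor's note: a quick back-of-the-envelope check of the figures quoted above - 92 kB and 44 kB per-connection peaks, 200k concurrent connections:]

```python
# Peak OpenSSL memory per connection during handshake, as measured in the mail.
per_conn_kb = 92
conns = 200_000           # concurrent connections, all handshaking at once

total_gb = per_conn_kb * conns / (1024 * 1024)   # kB -> GB (binary)
print(round(total_gb, 1))    # ~17.5 GB, i.e. roughly the "18 GB" cited

patched_gb = 44 * conns / (1024 * 1024)          # with the 44 kB patch
print(round(patched_gb, 1))  # ~8.4 GB
```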
> 

I think you’re right about concurrency limits, and I think my tests aren’t quite 
right: they feel more like a DoS attack than a gradual increase in traffic 
(which would be the real-world scenario). I was able to get up to 300k 
connections, occasionally.

Anyway, I’ve increased the number of SSL termination processes to 16 (in case 
that’s a per-process limit), so far so good. I’ll let you know how it goes once 
the holidays are over :)

Thank you for such a nice piece of software as haproxy, and happy holidays!

> Hoping this helps,
> Willy
