Hi Marco,

I see you enabled compression in your HAProxy configuration. Maybe you want
to disable it and re-run a test just to see (though I don't expect any
improvement since you seem to have some free CPU cycles on the machine).
Maybe you can also run "top" showing per-CPU usage, so we can see how much
time is spent in softirq (si) and in userland.
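For example (pressing "1" inside top toggles the per-CPU view; mpstat from
the sysstat package works too, if you have it installed):

  top              # then press "1" to get one line per CPU: watch %us, %sy, %si
  mpstat -P ALL 1  # per-CPU usage every second; compare %usr and %soft
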
I saw you're doing http-server-close. Is there any good reason for that?
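If not, you could let HAProxy keep the server-side connections alive instead;
a rough sketch (assuming an HTTP defaults section, adjust to your config):

  defaults
    mode http
    option http-keep-alive
    # http-reuse safe   # optionally lets idle server connections be reused

With http-server-close, HAProxy closes the connection to nginx after every
response, so the private side sees one new connection per request.
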
The maxconn on your frontend also seems too low compared to your target
traffic (even though the 5000 will apply to each process).
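Something along these lines, for example (the name and the numbers here are
just placeholders, size them to the traffic you expect, and keep the global
maxconn at least as high as the frontend one):

  global
    maxconn 100000

  frontend http-in          # use your actual frontend name
    maxconn 50000
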
Last, I would create 4 bind lines, one per process, like this in your
frontend:
  bind :80 process 1
  bind :80 process 2
  bind :80 process 3
  bind :80 process 4

Maybe one of your processes is being saturated and you don't see it. The
configuration above will ensure an even distribution of the incoming
connections across the HAProxy processes.
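If you want to check that, one option is to declare one stats socket per
process in the global section, for example (the paths are just examples,
assuming 4 processes):

  global
    stats socket /var/run/haproxy-1.sock process 1
    stats socket /var/run/haproxy-2.sock process 2
    stats socket /var/run/haproxy-3.sock process 3
    stats socket /var/run/haproxy-4.sock process 4

Each socket then reports the counters of its own process, so a saturated
process becomes visible instead of being hidden behind the others.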

Baptiste


On Fri, May 11, 2018 at 4:29 PM, Marco Colli <collimarc...@gmail.com> wrote:

> > how many connections you have opened on the private side
>
>
> Thanks for the reply! What should I do exactly? Can you see it from
> HAProxy stats? I have taken two screenshots (see attachments) during the
> load test (30s, 2,000 clients/s)
>
> > they are not closing fast enough and you are reaching the limit.
>
>
> What can I do to improve that?
>
>
>
>
> On Fri, May 11, 2018 at 3:30 PM, Mihai Vintila <uni...@gmail.com> wrote:
>
>> Check how many connections you have opened on the private side (i.e.
>> between haproxy and nginx); I'm thinking that they are not closing fast
>> enough and you are reaching the limit.
>>
>> Best regards,
>> Mihai
>>
>> On 5/11/2018 4:26 PM, Marco Colli wrote:
>>
>> Another note: each nginx server in the backend can handle 8,000 new
>> clients/s: http://bit.ly/2Kh86j9 (tested with keep-alive disabled and
>> with the same HTTP request)
>>
>> On Fri, May 11, 2018 at 2:02 PM, Marco Colli <collimarc...@gmail.com>
>> wrote:
>>
>>> Hello!
>>>
>>> Hope that this is the right place to ask.
>>>
>>> We have a website that uses HAProxy as a load balancer and nginx in the
>>> backend. The website is hosted on DigitalOcean (AMS2).
>>>
>>> The problem is that - no matter the configuration or the server size -
>>> we cannot achieve a connection rate higher than 1,000 new connections/s.
>>> We are testing with loader.io and these are the results:
>>> - for a session rate of 1,000 clients per second we get exactly 1,000
>>> responses per second
>>> - for session rates higher than that, we get long response times (e.g.
>>> 3s) and only a few hundred responses per second (so there is a
>>> bottleneck) https://ldr.io/2I5hry9
>>>
>>> Note that if we use a long HTTP keep-alive in HAProxy and the same
>>> browser makes multiple requests, we get much better results: however, the
>>> problem is that in reality we need to handle many different clients
>>> (which make 1 or 2 requests on average), not many requests from the same
>>> client.
>>>
>>> Currently we have this configuration:
>>> - 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the result
>>> is the same)
>>> - system / process limits and HAProxy configuration:
>>> https://gist.github.com/collimarco/347fa757b1bd1b3f1de536bf1e90f195
>>> - 10x nginx backend servers with 2 vCPU each
>>>
>>> What can we improve in order to handle more than 1,000 different new
>>> clients per second?
>>>
>>> Any suggestion would be extremely helpful.
>>>
>>> Have a nice day
>>> Marco Colli
>>>
>>>
>>
>
