>
> Solution is to have more than one ip on the backend and a round robin when
> sending to the backends.


What do you mean exactly? I already use round robin (as you can see in the
config file linked previously), and in the backend I have 10 different
servers with 10 different IPs.
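
If it means giving each backend machine more than one IP and adding a
server line per IP, I imagine something like this (a sketch; the addresses
and server names are made up):

```
backend nginx_pool
    balance roundrobin
    # two IPs on the same nginx box double the usable (src, dst) 4-tuples,
    # so haproxy gets a fresh ephemeral port range toward each address
    server web1a 10.0.0.11:80
    server web1b 10.0.0.21:80
```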

sysctl net.ipv4.ip_local_port_range


Currently I have ~30,000 ports available... they should be enough for 2,000
clients / s. Note that the number of clients during the test is kept
constant at 2,000 (the number of connected clients is not cumulative / does
not increase during the test).
In any case I have also tested increasing the number of ports to 64k and
ran a load test, but nothing changed.
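
One detail that may matter here: a closed connection keeps its local port
reserved in TIME_WAIT (about 60 s by default on Linux), so the port pool
drains with the connection *rate*, not with the number of concurrently
connected clients. A back-of-envelope sketch (the 60 s figure is an
assumption about the default, not something I measured):

```python
# Rough port-exhaustion estimate per (source IP, destination IP) pair.
ephemeral_ports = 30_000     # from net.ipv4.ip_local_port_range
time_wait_seconds = 60       # how long a closed connection holds its port
sustainable_rate = ephemeral_ports / time_wait_seconds
print(sustainable_rate)      # 500.0 new connections/s toward one backend IP
```

With 10 backend IPs the 4-tuple space is 10x larger, but running `ss -s`
during the test would show how many sockets actually sit in TIME_WAIT.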

You are probably keeping it opened for around 60 seconds and thus the limit


No, on the backend side I use http-server-close. On the client side the
number is constant at 2k clients during the test, and in any case I have
the HTTP keep-alive timeout set to 500ms.
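
For reference, the relevant part of my configuration looks roughly like
this (a sketch; the full config is in the gist linked earlier in the
thread):

```
defaults
    mode http
    option http-server-close        # haproxy closes the server-side connection after each response
    timeout http-keep-alive 500ms   # client-side keep-alive window
```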


On Fri, May 11, 2018 at 4:51 PM, Mihai Vintila <uni...@gmail.com> wrote:

> There is a limit on open ports. Once a new connection comes to haproxy,
> it initiates a new connection to nginx on the backend. Each new
> connection opens a local port, and ports are limited by sysctl
> net.ipv4.ip_local_port_range. So even if you set it to 1024 65535, you
> still have only ~64,000 sessions. The solution is to have more than one
> IP on the backend and round robin when sending to the backends. This way,
> for each backend IP, haproxy will have 64,000 sessions. Alternatively,
> make sure that you are not keeping the connections open for too long. You
> are probably keeping them open for around 60 seconds, hence the limit. As
> you can see, you have 61,565 sessions in the screenshots provided.
> Another limit could be the file descriptors, but that seems to be set to
> 200k.
>
> Best regards,
> Mihai Vintila
>
> On 5/11/2018 5:29 PM, Marco Colli wrote:
>
> how many connections you have opened on the private side
>
>
> Thanks for the reply! What should I do exactly? Can you see it from the
> HAProxy stats? I have taken two screenshots (see attachments) during the
> load test (30s, 2,000 clients/s).
>
> they are not closing fast enough and you are reaching the limit.
>
>
> What can I do to improve that?
>
>
>
>
> On Fri, May 11, 2018 at 3:30 PM, Mihai Vintila <uni...@gmail.com> wrote:
>
>> Check how many connections you have opened on the private side (i.e.
>> between haproxy and nginx); I'm thinking that they are not closing fast
>> enough and you are reaching the limit.
>>
>> Best regards,
>> Mihai
>>
>> On 5/11/2018 4:26 PM, Marco Colli wrote:
>>
>> Another note: each nginx server in the backend can handle 8,000 new
>> clients/s: http://bit.ly/2Kh86j9 (tested with keep-alive disabled and
>> with the same http request)
>>
>> On Fri, May 11, 2018 at 2:02 PM, Marco Colli <collimarc...@gmail.com>
>> wrote:
>>
>>> Hello!
>>>
>>> Hope that this is the right place to ask.
>>>
>>> We have a website that uses HAProxy as a load balancer and nginx in the
>>> backend. The website is hosted on DigitalOcean (AMS2).
>>>
>>> The problem is that - no matter the configuration or the server size -
>>> we cannot achieve a connection rate higher than 1,000 new connections / s.
>>> Indeed we are testing using loader.io and these are the results:
>>> - for a session rate of 1,000 clients per second we get exactly 1,000
>>> responses per second
>>> - for session rates higher than that, we get long response times (e.g.
>>> 3s) and only a few hundred responses per second (so there is a
>>> bottleneck): https://ldr.io/2I5hry9
>>>
>>> Note that if we use a long HTTP keep-alive in HAProxy and the same
>>> browser makes multiple requests, we get much better results; however,
>>> the problem is that in reality we need to handle many different clients
>>> (which make 1 or 2 requests on average), not many requests from the
>>> same client.
>>>
>>> Currently we have this configuration:
>>> - 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the result
>>> is the same)
>>> - system / process limits and HAProxy configuration:
>>> https://gist.github.com/collimarco/347fa757b1bd1b3f1de536bf1e90f195
>>> - 10x nginx backend servers with 2 vCPU each
>>>
>>> What can we improve in order to handle more than 1,000 different new
>>> clients per second?
>>>
>>> Any suggestion would be extremely helpful.
>>>
>>> Have a nice day
>>> Marco Colli
>>>
>>>
>>
>
