Check how many connections you have opened on the private side (i.e.
between HAProxy and nginx); I'm thinking that they are not closing fast
enough and you are reaching a limit.
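A quick way to check this on the HAProxy host (assuming Linux with
iproute2; the backend port 80 in the filter is an assumption about your
setup):

```shell
# Summary of socket states -- look for a large TIME-WAIT count
ss -s

# Count established connections to the nginx backends
# (replace :80 with the actual backend port)
ss -tan state established '( dport = :80 )' | wc -l

# Count sockets stuck in TIME-WAIT on the private side
ss -tan state time-wait | wc -l
```

If TIME-WAIT sockets pile up into the tens of thousands, you are likely
exhausting ephemeral ports between HAProxy and the backends.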
Best regards,
Mihai
On 5/11/2018 4:26 PM, Marco Colli wrote:
Another note: each nginx server in the backend can handle 8,000 new
clients/s: http://bit.ly/2Kh86j9 (tested with keep-alive disabled and
with the same HTTP request)
On Fri, May 11, 2018 at 2:02 PM, Marco Colli <[email protected]> wrote:
Hello!
Hope that this is the right place to ask.
We have a website that uses HAProxy as a load balancer and nginx
in the backend. The website is hosted on DigitalOcean (AMS2).
The problem is that - no matter the configuration or the server
size - we cannot achieve a connection rate higher than 1,000 new
connections / s. Indeed we are testing using loader.io
and these are the results:
- for a session rate of 1,000 clients per second we get exactly
1,000 responses per second
- for session rates higher than that, we get long response times
(e.g. 3s) and only some hundreds of responses per second (so there
is a bottleneck): https://ldr.io/2I5hry9
Note that if we use a long HTTP keep-alive in HAProxy and the same
browser makes multiple requests, we get much better results.
However, the problem is that in reality we need to handle many
different clients (which make 1 or 2 requests on average), not
many requests from the same client.
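For reference, that keep-alive behavior is set in the HAProxy defaults
or frontend section; a minimal sketch, with illustrative timeout values
that are assumptions rather than values from our actual config:

```
defaults
    mode http
    # Reuse each client connection for multiple requests
    option http-keep-alive
    timeout http-keep-alive 10s
    # Let HAProxy share idle server-side connections between clients,
    # which helps when clients themselves are short-lived
    http-reuse safe
```

With mostly one-shot clients, `http-reuse` matters more than the
client-side keep-alive timeout, since it decouples backend connection
churn from client connection churn.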
Currently we have this configuration:
- 1x HAProxy with 4 vCPU (we have also tested with 12 vCPU... the
result is the same)
- system / process limits and HAProxy configuration:
https://gist.github.com/collimarco/347fa757b1bd1b3f1de536bf1e90f195
- 10x nginx backend servers with 2 vCPU each
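The gist may already cover these, but the kernel settings that most
often cap the new-connection rate on a load balancer are worth
double-checking; a sketch of the usual sysctl knobs (the values are
illustrative assumptions, not a recommendation for this exact setup):

```
# /etc/sysctl.d/99-haproxy.conf
# Widen the ephemeral port range used for haproxy -> nginx connections
net.ipv4.ip_local_port_range = 1024 65535
# Allow reusing TIME-WAIT sockets for new outgoing connections
net.ipv4.tcp_tw_reuse = 1
# Raise the accept/SYN queue limits
net.core.somaxconn = 65535
net.ipv4.tcp_max_syn_backlog = 65535
```

Apply with `sysctl --system` and verify with `sysctl net.ipv4.ip_local_port_range`.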
What can we improve in order to handle more than 1,000 different
new clients per second?
Any suggestion would be extremely helpful.
Have a nice day
Marco Colli