Hi Pavlos,

On Wed, May 6, 2015 at 1:24 AM, Pavlos Parissis <pavlos.paris...@gmail.com>
wrote:

Shall I assume that you have run the same tests without iptables and got
> the same results?
>

Yes, I had tried it yesterday and saw no measurable difference.
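
For completeness, by "without iptables" I mean flushing the rules and
setting default-accept policies before the run, roughly like this (a
sketch; the exact steps depend on the distro):

# iptables-save > /tmp/iptables.backup      # keep a copy of the current rules
# iptables -F && iptables -t nat -F         # flush the filter and nat tables
# iptables -P INPUT ACCEPT                  # default-accept so nothing is dropped
(re-run ab/wrk, then restore with "iptables-restore < /tmp/iptables.backup")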

May I suggest to try also httpress and wrk tool?
>

I tried wrk today; my results are posted below, after your quoted result.


> Have you compared 'sysctl -a' between haproxy and nginx server?


Yes, the difference is very little:
11c11
< fs.dentry-state = 266125      130939  45      0       0       0
---
> fs.dentry-state = 19119       0       45      0       0       0
13,17c13,17
< fs.epoll.max_user_watches = 27046277
< fs.file-max = 1048576
< fs.file-nr = 1536     0       1048576
< fs.inode-nr = 262766  98714
< fs.inode-state = 262766       98714   0       0       0       0       0
---
> fs.epoll.max_user_watches = 27046297
> fs.file-max = 262144
> fs.file-nr = 1536     0       262144
> fs.inode-nr = 27290   8946
> fs.inode-state = 27290        8946    0       0       0       0       0

134c134
< kernel.sched_domain.cpu0.domain0.max_newidle_lb_cost = 2305
---
> kernel.sched_domain.cpu0.domain0.max_newidle_lb_cost = 3820

(and a similar lb_cost difference for each CPU)
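
In case it is useful, the diff above was produced roughly like this
(hostnames are placeholders), and if the fs.file-max gap matters it is
easy to align before re-testing:

# ssh <haproxy-host> 'sysctl -a 2>/dev/null | sort' > /tmp/haproxy.sysctl
# ssh <nginx-host>   'sysctl -a 2>/dev/null | sort' > /tmp/nginx.sysctl
# diff /tmp/haproxy.sysctl /tmp/nginx.sysctl

# sysctl -w fs.file-max=1048576   # bring the lower box up to the higher value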

Have you checked if you got all backends reported down at the same time?
>

Yes, I checked, and that has not happened. After applying Baptiste's
suggestion of adding the port number, the issue disappeared completely.
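
To be concrete, the change amounts to putting an explicit port on the
server lines, so both traffic and the L4 health checks always target a
fixed port; the address and name below are placeholders, not my real
config:

backend www-backend
    # before: no port on the server line
    # server nginx-3 10.0.0.3 check
    # after: explicit port, which the health checks also use by default
    server nginx-3 10.0.0.3:80 check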

How many workers do you use on your Nginx which acts as LB?
>

I was using the default of 4. Increasing it to 16 seems to improve the
numbers by 10-20%.
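
For reference, that is just this change in nginx.conf on the LB (the
worker_connections value is an assumption, shown only to note it may
need raising alongside the worker count):

worker_processes 16;            # was 4 in my setup; 'auto' uses one worker per core
events {
    worker_connections 10240;   # assumed value; low stock values can become the limit first
}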


> >
> > www-backend,nginx-3,0,0,0,10,30000,184,23843,96517588,,0,,27,0,0,180,DOWN
> > 1/2,1,1,0,7,3,6,39,,7,3,1,,220,,2,0,,37,L4CON,,0,0,184,0,0,0,0,0,,,,0,0,,,,,6,Out
> > of local source ports on the system,,0,2,3,92,
> >
>
> Hold on a second, what is this 'Out  of local source ports on the
> system' message? ab reports 'Concurrency Level:      500' and you said
> that HAProxy runs in keepalive mode(default on 1.5 releases) which means
> there will be only 500 TCP connections opened from HAProxy towards the
> backends, which it isn't that high and you shouldn't get that message
> unless net.ipv4.ip_local_port_range is very small( I don't think so).
>

It was set to "net.ipv4.ip_local_port_range = 32768    61000". I have not
seen
this issue after making the change Baptiste suggested. Though I could
increase
the range above and check too.
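
If I do widen it, it would just be (picking 1024-65535 as an example,
which roughly doubles the ~28k ephemeral ports):

# sysctl -w net.ipv4.ip_local_port_range="1024 65535"
# echo 'net.ipv4.ip_local_port_range = 1024 65535' >> /etc/sysctl.conf   # persist across reboots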


> # wrk --timeout 3s --latency -c 1000 -d 5m -t 24 http://a.b.c.d
> Running 5m test @ http://a.b.c.d
>   24 threads and 1000 connections
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    87.07ms  593.84ms   7.85s    95.63%
>     Req/Sec    16.45k     7.43k   60.89k    74.25%
>   Latency Distribution
>      50%    1.75ms
>      75%    2.40ms
>      90%    3.57ms
>      99%    3.27s
>   111452585 requests in 5.00m, 15.98GB read
>   Socket errors: connect 0, read 0, write 0, timeout 33520
> Requests/sec: 371504.85
> Transfer/sec:     54.56MB
>

I get a very strange result:

# wrk --timeout 3s --latency -c 1000 -d 1m -t 24 http://<haproxy>
Running 1m test @ http://<haproxy>
  24 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.40ms   26.64ms   1.02s    99.28%
    Req/Sec     8.77k     8.20k   26.98k    62.39%
  Latency Distribution
     50%    1.14ms
     75%    1.68ms
     90%    2.40ms
     99%    6.14ms
  98400 requests in 1.00m, 34.06MB read
Requests/sec:   1637.26
Transfer/sec:    580.36KB

# wrk --timeout 3s --latency -c 1000 -d 1m -t 24 http://<nginx>
Running 1m test @ http://<nginx>
  24 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.56ms   12.01ms 444.71ms   99.41%
    Req/Sec     8.53k   825.80    18.50k    90.91%
  Latency Distribution
     50%    4.81ms
     75%    6.80ms
     90%    8.58ms
     99%   11.92ms
  12175205 requests in 1.00m, 4.31GB read
Requests/sec: 202584.48
Transfer/sec:     73.41MB

Thank you,

Regards,
- Krishna Kumar
