Have you checked the socket level and watched the kernel log on all 3 servers (both nginx servers and the load balancer) while running the test? It could be that for some reason you hit a limit very quickly (we had an issue where we reached the nf_conntrack limit at 600 concurrent users because we had around 170 requests per page load). A rough sketch of what to watch is below.
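
For example, something like this on each box during the run (a minimal sketch, assuming Linux with the netfilter conntrack module loaded; the exact /proc paths can vary by kernel/distro):

  # follow the kernel log for "table full, dropping packet" messages
  dmesg -w | grep -i conntrack

  # compare tracked connections against the configured limit
  cat /proc/sys/net/netfilter/nf_conntrack_count
  cat /proc/sys/net/netfilter/nf_conntrack_max

  # socket summary, to spot TIME_WAIT / SYN backlog buildup
  ss -s

If the count approaches the max during the test, that alone can explain the socket failures siege reports.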

halozen wrote:
2 nginx 1.4.6 web servers - OCFS cluster, web root inside a mounted LUN
from SAN storage
2 MariaDB 5.5 servers - Galera cluster, on a different network segment
than the nginx web servers

nginx servers: two quad-core Xeon sockets each, 128 GB RAM
Load balanced via an F5 load balancer (round-robin, HTTP performance)

Based on my setup above, what options should I use with siege to
perform a load test with at least 5000 concurrent users?

There are times when thousands of students storm the university's web
application at once.

Below is the result for 300 concurrent users.

# siege -c 300 -q -t 1m domain.com

siege aborted due to excessive socket failure; you
can change the failure threshold in $HOME/.siegerc

Transactions:                 370 hits
Availability:               25.38 %
Elapsed time:               47.06 secs
Data transferred:            4.84 MB
Response time:               20.09 secs
Transaction rate:            7.86 trans/sec
Throughput:                0.10 MB/sec
Concurrency:              157.98
Successful transactions:         370
Failed transactions:            1088
Longest transaction:           30.06
Shortest transaction:            0.00

Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,257373,257373#msg-257373
