Folks, I'm sharing processing load across 3 remote servers, and am having a terrible time getting it balanced.
Here's the config:
upstream backend {
    server 192.168.162.218:9000 fail_timeout=30 max_fails=3 weight=1; # Engine 1
    server 192.168.175.5:9000   fail_timeout=30 max_fails=3 weight=1; # Engine 2
    server 192.168.175.213:9000 fail_timeout=30 max_fails=3 weight=1; # Engine 3
}
When things get busy, almost all of the load seems to land on the final
entry (Engine 3), which is seeing load averages in the 70s, whereas the
first two stay below 5.
This is causing serious performance issues. How on earth can we force a
more even spread of the load across the three backends?
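
One thing I'm considering (just a sketch, assuming our nginx build is new
enough to have least_conn, 1.3.1+ as far as I know) is switching the
upstream to least-connections balancing instead of the default round robin,
something like:

upstream backend {
    least_conn;                 # send each request to the backend with the fewest active connections
    server 192.168.162.218:9000 fail_timeout=30 max_fails=3 weight=1; # Engine 1
    server 192.168.175.5:9000   fail_timeout=30 max_fails=3 weight=1; # Engine 2
    server 192.168.175.213:9000 fail_timeout=30 max_fails=3 weight=1; # Engine 3
}

Is that the right direction, or is there something else that would explain
round robin piling everything onto one backend?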
Cheers,
Steve
--
Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Skype: sholdowa
