On Thu, Apr 15, 2010 at 4:38 PM, Cosimo Streppone <cos...@streppone.it> wrote:
> On 15 April 2010 at 05:11:15, Brad Van Sickle
> <bvs7...@gmail.com> wrote:
>
>> LVS does sound interesting, but in your infrastructure layout aren't your
>> single LVS load balancers single points of failure?
>
> I simplified a bit too much :)
>
> Every LVS machine has a hot spare, and you can perform
> manual or automated failover.
>
> Automated failover is said to keep your connections running
> while migrating them over to the backup LVS. We have never
> had a failure, just manual failovers due to upgrades, etc...
We use LVS to load balance our reverse proxies as well as our app servers:

- 2 LVS servers using heartbeat for automatic failover (we are looking to
  switch from heartbeat to keepalived in the future)
- 3 nginx servers which do content compression, SSL offloading, and caching
  (we don't need 3 of them, but we like the redundancy and the ability to
  drop one without impacting performance)
- 5 app servers running Apache and mod_perl

We switched from squid to nginx in the last few months and have been very
happy with it. nginx can also serve static content directly or act as a
FastCGI frontend (relaying requests to backend app servers), among many
other things. But our main reason for switching was the ability to offload
SSL requests and remove that complexity from the app servers (squid, which
we previously used as our reverse proxy, can't do SSL offloading). nginx can
do its own load balancing as well, but we preferred to let our existing LVS
infrastructure handle that for us. As an added bonus, LVS also load balances
our mail cluster...

Cheers,

Cees Hek
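
P.S. In case it's useful, here is a rough sketch of what the SSL offloading
part of an nginx config can look like. The hostnames, IPs, and certificate
paths below are placeholders rather than our actual setup, and the backend
address is assumed to be the LVS virtual IP in front of the app servers:

    # Terminate SSL at nginx and hand plain HTTP to the app tier.
    # All names, IPs, and paths are placeholders.
    server {
        listen 443 ssl;
        server_name www.example.com;

        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        gzip on;    # content compression at the proxy layer

        location / {
            # 10.0.0.100 stands in for the LVS virtual IP that balances
            # the Apache/mod_perl app servers
            proxy_pass http://10.0.0.100;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto https;
        }
    }

The app servers then only ever see plain HTTP and don't need any SSL
configuration at all.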