On 28-1-2011 14:38, Caunter, Stefan wrote:
>
> On 2011-01-28, at 6:26 AM, "Stewart Robinson" <[email protected]> wrote:
>
>> Other people have configured two Varnish servers to be backends for
>> each other. When you see the other Varnish cache as your remote IP,
>> you then point the request to the real backend. This duplicates your
>> cache items in each cache.
>>
>> Be aware of http://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy
>>
>> Stew
>>
>> On 28 January 2011 10:46, Siju George <[email protected]> wrote:
>>> Hi,
>>>
>>> I understand that Varnish does not support cache peering like Squid.
>>> My planned setup is something like:
>>>
>>>       ---- Webserver1 ---        ---- Cache ---        ---- API
>>> LB --|                  |-- LB --|            |-- LB --|
>>>       ---- Webserver2 ---        ---- Cache ---        ---- API
>>>
>>> So if I am using Varnish as the cache, what is the best way to
>>> configure them so that there is redundancy and the setup can continue
>>> even if one cache fails?
>>>
>>> Thanks
>>>
>>> --Siju
>
> Put two behind the LB. Cache peering is cooler, but this way you get
> high availability, and it's easy to do maintenance.
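The cross-caching setup Stewart describes can be sketched in VCL roughly like this. This is a hypothetical example, not tested configuration: the backend names and IP addresses are made up, and the syntax is for the Varnish 2.x VCL current at the time of this thread (later versions use req.backend_hint). Each of the two caches would carry a mirror-image version of this file.

```vcl
# The real backend behind both caches.
backend origin {
    .host = "10.0.0.10";
    .port = "80";
}

# The *other* Varnish cache (its address differs per node).
backend partner {
    .host = "10.0.0.3";
    .port = "80";
}

acl partner_cache {
    "10.0.0.3";
}

sub vcl_recv {
    if (client.ip ~ partner_cache) {
        # The request came from the other cache: send it to the real
        # backend, otherwise the two caches would loop it between them.
        set req.backend = origin;
    } else {
        # Normal client traffic: try the partner cache first, so both
        # caches end up holding a copy of each object.
        set req.backend = partner;
    }
}
```

Note that, as Stewart points out, the HashIgnoreBusy wiki example matters here: without it, a request waiting on the partner's busy object can deadlock between the two caches.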
We use Varnish on CentOS machines. We use Pacemaker for high availability
(multiple virtual IPs) and DNS round-robin for balancing end users across
the caches. See
http://blog.hongens.nl/guides/setting-up-a-pacemaker-cluster-on-centosrhel/
for the Pacemaker part.

--
With kind regards,

Angelo Höngens
systems administrator

MCSE on Windows 2003
MCSE on Windows 2000
MS Small Business Specialist
------------------------------------------
NetMatch
tourism internet software solutions

Ringbaan Oost 2b
5013 CA Tilburg
+31 (0)13 5811088
+31 (0)13 5821239

[email protected]
www.netmatch.nl
------------------------------------------

_______________________________________________
varnish-misc mailing list
[email protected]
http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
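For the Pacemaker side, the idea is one floating virtual IP per cache, with DNS round-robin publishing all the VIPs under a single name; if a cache dies, Pacemaker moves its VIP to a surviving node. A rough sketch using the crm shell's IPaddr2 resource agent — node names, resource names and addresses here are invented, not from Angelo's actual cluster:

```
# One virtual IP per cache node, monitored by the cluster.
crm configure primitive vip1 ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.11 cidr_netmask=24 \
    op monitor interval=30s
crm configure primitive vip2 ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.12 cidr_netmask=24 \
    op monitor interval=30s

# Prefer each VIP on its own node, so load stays split while both are up,
# but either node can carry both VIPs during a failure or maintenance.
crm configure location vip1-pref vip1 100: cache1
crm configure location vip2-pref vip2 100: cache2
```

The DNS zone then simply lists both 192.0.2.11 and 192.0.2.12 as A records for the cache hostname.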
