In my opinion, the problem with partitioning the cache by URL is that,
when something fails, the secondary/failover server has an empty cache
for the rest of the URLs, which can hurt throughput.

Our architecture is the following:

[1. F5 LB] => [2. Varnish]  => [3. Tomcat]

1) F5 Big IP Hardware Load Balancer
2) Four Varnish caches on different machines
3) Four Tomcat servers on different machines

We don't mind the redundant caching because:
  -  we don't have resource constraints
  -  in case of problems, every Varnish instance already has its cache populated
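For reference, the "every cache talks to every backend" part of our setup boils down to a round-robin director in each Varnish instance. A rough VCL 2.x sketch (hostnames are made up, and only two of the four Tomcats are shown for brevity; all four Varnish boxes would carry the same file):

```vcl
# Hypothetical hostnames -- there are four Tomcats in reality,
# only two are shown here.
backend tomcat1 { .host = "tomcat1.example.com"; .port = "8080"; }
backend tomcat2 { .host = "tomcat2.example.com"; .port = "8080"; }

director tomcats round-robin {
    { .backend = tomcat1; }
    { .backend = tomcat2; }
}

sub vcl_recv {
    set req.backend = tomcats;
}
```

Since every Varnish serves the full URL space, any one of them can take over the whole load with an already-warm cache.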


2011/1/28 Gresens, August:
> We have two varnish servers behind the load balancer (nginx). Each varnish 
> server has an identical configuration and load balances the actual backends 
> (web servers).
>
> Traffic for particular url patterns is routed to one of the varnish servers 
> by the load balancer. For each url pattern the secondary source is the 
> alternate varnish server. In this way we can partition traffic between the 
> two varnish servers and avoid redundant caching, but the second one will act 
> as a failover if the primary goes down.
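The partition-with-failover scheme described above could be sketched in the nginx load balancer roughly like this (addresses, ports and URL patterns are purely illustrative; the `backup` marker on an upstream server is what makes the alternate Varnish a failover-only target):

```nginx
# Hypothetical addresses. /api/ traffic prefers varnish A,
# everything else prefers varnish B; each falls back to the other.
upstream varnish_a {
    server 192.0.2.11:6081;
    server 192.0.2.12:6081 backup;
}

upstream varnish_b {
    server 192.0.2.12:6081;
    server 192.0.2.11:6081 backup;
}

server {
    listen 80;

    location /api/ {
        proxy_pass http://varnish_a;
    }
    location / {
        proxy_pass http://varnish_b;
    }
}
```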
>
> Best,
>
> A
>
> -----Original Message-----
> From: [email protected] 
> [mailto:[email protected]] On Behalf Of Angelo Höngens
> Sent: Friday, January 28, 2011 8:42 AM
> To: [email protected]
> Subject: Re: How to set up varnish not be a single point of failure
>
> On 28-1-2011 14:38, Caunter, Stefan wrote:
>>
>>
>>
>> On 2011-01-28, at 6:26 AM, "Stewart Robinson" <[email protected]> wrote:
>>
>>> Other people have configured two Varnish servers to be backends for
>>> each other. When you see the other Varnish cache as your remote IP you
>>> then point the request to the real backend. This duplicates your cache
>>> items in each cache.
>>>
>>> Be aware of http://www.varnish-cache.org/trac/wiki/VCLExampleHashIgnoreBusy
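The mutual-backend trick Stew mentions might look something like this in VCL 2.x on one of the two boxes (addresses are hypothetical; the other Varnish carries the mirror image). The client.ip check is what breaks the loop:

```vcl
# On varnish1. varnish2 has the same config with the roles swapped.
backend real    { .host = "10.0.0.20"; .port = "8080"; }  # the web server
backend partner { .host = "10.0.0.12"; .port = "80"; }    # the other Varnish

acl partner_ip { "10.0.0.12"; }

sub vcl_recv {
    if (client.ip ~ partner_ip) {
        # Request already came through the other cache:
        # fetch from the real backend, never loop back to the peer.
        set req.backend = real;
    } else {
        set req.backend = partner;
    }
}
```

On a miss, each Varnish fetches through its peer, so every object ends up cached on both, which is exactly the duplication Stew describes.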
>>>
>>> Stew
>>>
>>> On 28 January 2011 10:46, Siju George <[email protected]> wrote:
>>>> Hi,
>>>>
>>>> I understand that varnish does not support cache peering like Squid.
>>>> My planned set up is something like
>>>>
>>>>
>>>>        ---- Webserver1 ---            ---- Cache ---
>>>> LB ---|                   |--- LB ---|              |--- LB ---- API
>>>>        ---- Webserver2 ---            ---- Cache ---
>>>> So if I am using Varnish as the cache, what is the best way to configure
>>>> them so that there is redundancy and the setup can continue even if one
>>>> cache fails?
>>>>
>>>> Thanks
>>>>
>>>> --Siju
>>
>>
>> Put two behind the LB. The caches run cooler (each holds duplicate
>> objects), but you get high availability. Easy to do maintenance this way.
>
>
> We use Varnish on CentOS machines. We use Pacemaker for
> high-availability (multiple virtual ip's) and DNSRR for balancing
> end-users to the caches.
>
> see
> http://blog.hongens.nl/guides/setting-up-a-pacemaker-cluster-on-centosrhel/
> for the pacemaker part..
>
> --
>
>
> With kind regards,
>
>
> Angelo Höngens
> systems administrator
>
> MCSE on Windows 2003
> MCSE on Windows 2000
> MS Small Business Specialist
> ------------------------------------------
> NetMatch
> tourism internet software solutions
>
> Ringbaan Oost 2b
> 5013 CA Tilburg
> +31 (0)13 5811088
> +31 (0)13 5821239
>
> [email protected]
> www.netmatch.nl
> ------------------------------------------
>
>
>
> _______________________________________________
> varnish-misc mailing list
> [email protected]
> http://www.varnish-cache.org/lists/mailman/listinfo/varnish-misc
>
> SCHOLASTIC
> Read Every Day.
> Lead a Better Life.
>
>
