On Thu, Jun 17, 2010 at 3:31 PM, Don Faulkner <[email protected]> wrote:
> I would like to hear more about how you're combining varnish and haproxy, and 
> what you're trying to achieve.
>
> I'm just getting started with varnish, but I've used haproxy before.
>
> I'm trying to construct a cluster of caching, load balancing, and ssl 
> termination to sit in front of my web infrastructure. In thinking about this, 
> I seem to be caught in an infinite loop.
>
> I've seen several threads suggesting that the "right" way to build the web 
> pipeline is this:
>
> web server -> cache -> load balancer -> ssl endpoint -> (internet & clients)
>
> But, in this case, all I have the load balancer doing is balancing between 
> the various caches.
Is there something you don't like about this setup?
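
The balancer tier doesn't need to be elaborate for that to be worthwhile. A minimal haproxy fragment (server names and addresses are hypothetical) could be as small as:

    listen caches
        bind :80
        balance roundrobin
        # 'check' makes haproxy health-check each varnish and
        # take a dead one out of rotation automatically
        server varnish1 10.0.0.11:6081 check
        server varnish2 10.0.0.12:6081 check

Even if "all" it does is balance between the caches, it's also the layer that notices a dead cache and routes around it.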

> On the other hand, if I reverse this and put the cache in front, then I'm 
> caching the output of the load balancers, and there's no load balancing for 
> the caches.
>
> I obviously haven't thought this through enough. Could someone pry me out of 
> my loop?
> --
> Don Faulkner, KB5WPM
> All that is gold does not glitter. Not all those who wander are lost.
>
> On Jun 17, 2010, at 1:50 PM, Ken Brownfield wrote:
>
>> Seems like that will do the job.
>>
>> You might also want to look into haproxy's consistent hashing, which 
>> should provide cache "distribution" over an arbitrary pool.  Doing it in 
>> varnish would get pretty complicated as you add more varnishes, and the 
>> infinite loop potential is a little unnerving (to me anyway :)
>>
>> We wanted redundant caches in a similar way (but for boxes with ~1T of 
>> cache) and set up a test config with haproxy that seems to work, but we 
>> haven't put real-world load on it yet.
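>>
>> For reference, a rough sketch of the consistent-hash setup I mean
>> (hostnames, ports and addresses are made up, and as I said we haven't
>> put real load on this kind of config yet):
>>
>>     listen varnish_pool
>>         bind :80
>>         # hash on the request URI so each URL maps to one cache,
>>         # and keep most mappings stable when a server joins or leaves
>>         balance uri
>>         hash-type consistent
>>         server varnish1 10.0.0.11:6081 check
>>         server varnish2 10.0.0.12:6081 check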
>> --
>> Ken
>>
>> On Jun 17, 2010, at 6:54 AM, Martin Boer wrote:
>>
>>> Hello all,
>>>
>>> I want to have 2 servers running varnish in parallel so that if one fails 
>>> the other still contains all cacheable data and the backend servers won't 
>>> be overloaded.
>>> Could someone check to see if I'm on the right track?
>>>
>>> This is how I figure it should be working.
>>> I don't know how large 'weight' may be, but with varnish serving > 90% of 
>>> requests from cache, the extra hop should be affordable.
>>> Regards,
>>> Martin Boer
>>>
>>>
>>> director via_other_varnish random {
>>>   .retries = 5;
>>>   {
>>>     .backend = other_server;
>>>     .weight = 9;
>>>   }
>>>   # use the regular backends if the other varnish instance fails
>>>   {
>>>     .backend = backend_1;
>>>     .weight = 1;
>>>   }
>>>   {
>>>     .backend = backend_2;
>>>     .weight = 1;
>>>   }
>>>   {
>>>     .backend = backend_3;
>>>     .weight = 1;
>>>   }
>>> }
>>>
>>> director via_backends random {
>>>   {
>>>     .backend = backend_1;
>>>     .weight = 1;
>>>   }
>>>   {
>>>     .backend = backend_2;
>>>     .weight = 1;
>>>   }
>>>   {
>>>     .backend = backend_3;
>>>     .weight = 1;
>>>   }
>>> }
>>>
>>>
>>> sub vcl_recv {
>>>   if (req.http.X-through-varnish) {
>>>     # the other varnish already forwarded this request,
>>>     # so go straight to the real backends
>>>     set req.backend = via_backends;
>>>     remove req.http.X-through-varnish;
>>>   } else {
>>>     # first hop: try the other varnish instance
>>>     set req.http.X-through-varnish = "1";
>>>     set req.backend = via_other_varnish;
>>>   }
>>> }
>>> ..
>>>
>>>
>>> _______________________________________________
>>> varnish-misc mailing list
>>> [email protected]
>>> http://lists.varnish-cache.org/mailman/listinfo/varnish-misc
>>
>>
>
>
>

