Dag-Erling Smørgrav wrote:
>> One good way to do this is to put a pass-only Varnish instance (i.e.,
>> a content switch) in front of a set of intermediate backends (Varnish
>> caching proxies), each of which is assigned to cache a subset of the
>> possible URI namespace.
>>
>> However, in order to do this, the content switch must make consistent
>> decisions about which cache to direct the incoming requests to. One
>> good way of doing that is implementing a hash function H(U) -> V,
>> where U is the request URI, and V is the intermediate-level proxy.
>
> That's actually a pretty good idea... Could you open a ticket for it?
>
> DES

This is called CARP ("Cache Array Routing Protocol") in squid land. Here's a link to some info on it:
http://docs.huihoo.com/gnu_linux/squid/html/x2398.html

It works quite well for reducing the number of globally duplicated objects in a multilayer accelerator setup. You can add additional machines in the interstitial space between the frontline caches and the origin as a cheap and easy way to increase the overall RAM available to hot objects, without needing a front-end load balancer like perlbal, BIG-IP, or whatever to direct individual clients to specific frontlines to accomplish the same thing (though you usually still keep a load balancer for fault tolerance). There are some bugs in squid's implementation, though...

--DHF

_______________________________________________
varnish-misc mailing list
[email protected]
http://projects.linpro.no/mailman/listinfo/varnish-misc
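For what it's worth, the H(U) -> V mapping DES describes is easy to sketch with highest-random-weight ("rendezvous") hashing, which is essentially what CARP boils down to: hash the URL combined with each member cache's name and route to the highest score. This is just an illustration, not squid's actual code; the proxy names are made up, and real CARP also folds per-member load factors into the score, which this skips:

```python
import hashlib

def carp_pick(url, proxies):
    """Map a URL to one proxy, CARP-style: for each proxy, hash
    (proxy name + URL) and return the proxy with the highest score."""
    def score(proxy):
        digest = hashlib.md5((proxy + url).encode("utf-8")).digest()
        # Use the first 8 bytes of the digest as an unsigned integer score.
        return int.from_bytes(digest[:8], "big")
    return max(proxies, key=score)

# Hypothetical intermediate caches sitting behind the content switch.
proxies = ["cache-a", "cache-b", "cache-c"]
print(carp_pick("/img/logo.png", proxies))
```

The useful property (versus plain hash-mod-N) is that adding or removing a member only remaps the objects that scored highest on that member; everything else keeps hitting the same cache, so you don't flush the whole array when the membership changes.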
