Sounds like you've defined your parent proxies as "default" instead of "round-robin", but that's just a guess. To enable load balancing, you can use the "round-robin" cache_peer option:
cache_peer parent1.mydomain.net parent 3128 3130 round-robin
cache_peer parent2.mydomain.net parent 3128 3130 round-robin

Or "carp-load-factor" (which you have to compile into the child):

cache_peer parent1.mydomain.net parent 3128 3130 carp-load-factor=0.7
cache_peer parent2.mydomain.net parent 3128 3130 carp-load-factor=0.3

Both of the above methods can cause problems with sites that expect a one-client, one-IP-address relationship. You might want to look into WCCP:

http://www.sublime.com.au/squid-wccp/
http://www.squid-cache.org/Doc/FAQ/FAQ-17.html#ss17.13

Chris

-----Original Message-----
From: Tobias Reckhard [mailto:[EMAIL PROTECTED]
Sent: Sunday, November 21, 2004 10:17 PM
To: [EMAIL PROTECTED]
Subject: [squid-users] Failover and/or load sharing possible with cache_peer?

Hi

Short question: is it possible to implement failover and/or load sharing with Squid configuration parameters alone when one Squid has two upstream Squids available as cache_peers (but no direct access to anything else) and, if so, how?

Longer question: I've got an internal Squid server and two in a DMZ. I'd like the internal Squid to distribute its requests across the two in the DMZ when both are available, and to stick to one of them when the other fails to respond. I managed to implement failover, but practically all requests go to only one of the DMZ Squids until it fails (e.g. by my stopping the daemon), at which point the internal Squid switches over to the other one, where it then stays until that one also goes away.

Is it possible to do what I want with Squid alone? If so, how?

Cheers,
Tobias
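[Follow-up note] A minimal sketch of what the round-robin approach might look like for the setup described in the quoted message, where the internal Squid has no direct access and must use the two DMZ parents. The hostnames are the placeholder names from the examples above; the never_direct rule is an assumption based on the stated "no direct access" topology, not something confirmed by the poster's config:

# Two DMZ parents, selected alternately; Squid skips a peer
# that stops responding, giving failover for free.
cache_peer parent1.mydomain.net parent 3128 3130 round-robin
cache_peer parent2.mydomain.net parent 3128 3130 round-robin

# Assumed: force all requests through the parents, since the
# internal Squid has no direct route to the Internet.
never_direct allow all

With this in place, requests should alternate between the two parents while both are up, and collapse onto the surviving one when a parent fails, which is the behaviour Tobias asked for.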
