Hi,



We're running Squid 2.5.STABLE9 on Red Hat ES3.

Our Squid is configured with two upstream parents (main & backup)
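For reference, a setup like ours would look roughly like this in squid.conf (the hostnames and ports below are made-up examples, not our real config):

```
# Hypothetical excerpt -- hostnames/ports are examples only
cache_peer main-proxy.example.com   parent 3128 0 no-query default
cache_peer backup-proxy.example.com parent 3128 0 no-query
never_direct allow all    # force all traffic through the parents
```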



If we make the main parent unavailable, we observe:

- With HTTP:

- On the first client request, Squid tries the main parent twice, then the backup parent

-> so the failover is almost transparent to the end user

-> we also saw that after about 10 client connections the main parent is declared DEAD

-> once it is DEAD, client requests are forwarded directly to the backup parent
(avoiding a probe of the main parent on every request)


- With HTTPS (CONNECT method):

- Assume the main parent is considered ALIVE by Squid, but is actually down

- On the first request, Squid tries the main parent once, gets an error, and does not try the backup peer

- If only HTTPS requests reach the Squid, failover to the backup parent never occurs

- If enough HTTP requests reach the Squid, it eventually declares the main parent DEAD

=> and only then does HTTPS work again, since Squid no longer tries the main parent (because it's dead)



Looking deeper in the code (up to STABLE12, and also in Squid3), I've found a clear difference between the handling of HTTP and HTTPS requests.



HTTP requests are handled by "forward.c", whose callback "fwdServerClosed" manages retries and moves through all available and allowed peers (the ones found by the peerSelect function).



HTTPS requests are handled by "ssl.c", whose callback "sslServerClosed" seems to be lazier than the other: it gives up at the first error encountered.



Before thinking about patching the ssl.c module, I was wondering whether this behaviour is intentional or not.



Regards



Tafit
