Hi
The cache takes a considerable amount of memory, which our system cannot afford, hence the choice of disabling the cache.
It is probably a better idea to get one or more cheap Intel-based machines; 1GB of RAM should be affordable on such machines.
The redirector I am using is a self-made one for content filtering. It communicates with a remote server to obtain details about the content of the URL. This communication alone can take up to 2000 ms.
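For reference, the helper is roughly along the lines of the sketch below (Python used only for illustration; the lookup service URL, its "block"/"ok" reply format and the block page are made-up placeholders, not our real setup):

#!/usr/bin/env python3
# Rough sketch of a content-filtering redirector speaking the classic Squid
# helper protocol: one request per line on stdin ("URL client_ip/fqdn user
# method"), one answer per line on stdout (a rewritten URL, or an empty line
# for "no change").
import sys
import urllib.parse
import urllib.request

FILTER_SERVICE = "http://filter.example.com/lookup?url="   # hypothetical service
BLOCK_PAGE = "http://proxy.example.com/blocked.html"        # hypothetical block page

def classify(url, timeout=2.0):
    # One remote round trip per request - this is where the ~2000 ms goes.
    try:
        query = FILTER_SERVICE + urllib.parse.quote(url, safe="")
        with urllib.request.urlopen(query, timeout=timeout) as resp:
            return resp.read().decode("ascii", "replace").strip()
    except Exception:
        return "ok"     # fail open if the filter server is slow or unreachable

def main():
    for line in sys.stdin:
        parts = line.split()
        if not parts:
            continue
        url = parts[0]
        if classify(url) == "block":
            sys.stdout.write(BLOCK_PAGE + "\n")
        else:
            sys.stdout.write("\n")
        sys.stdout.flush()      # Squid expects one answer per request line

if __name__ == "__main__":
    main()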
We have service times of something like 300 ms under load; hit service time is about 5 ms. Adam Aube told me:
"Anything under ~ 1 seconds is probably fine for misses, and even up to 2 seconds depending on congestion and latency on your link."
In your case the service time will be more than 2 seconds, as you still need to fetch the object after passing through the redirector. That is awfully long.
I cannot avoid this though. What is the average time for a redirector?
Not sure about that, as squidGuard's processing time is not measurable on my machine: 0 ms.
I would say that less than 10 ms is very good and less than 100 ms acceptable - just a wild guess.
What is the maximum queue length for the redirectors? Is this a configurable parameter?
No idea.
I have seen FATAL errors being thrown at a queue of 22 requests with 10 redirectors. Isn't this a very small number?
It depends on the number of requests, not the number of clients. eBay is a good example of a site that uses a lot of small objects, causing clients to issue many small requests in a short amount of time.
Please tell me if I can change this parameter somewhere
and correct the problem. Regards and TIA,
Deepa
No idea either, but I think that would be turning the wrong knob anyway.
IMHO you need to improve that processing time. Can you implement some sort of caching?
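Something minimal like the sketch below is what I have in mind, assuming your helper has some classify(url) function doing the remote round trip (the names here are made up): keep a small in-memory cache per helper process so repeated lookups are answered locally instead of costing another 2000 ms.

import functools
from urllib.parse import urlsplit

# classify(url) is assumed to be the slow remote lookup (see the sketch
# earlier in this thread); only the caching wrapper is shown here.

@functools.lru_cache(maxsize=50000)
def cached_verdict(host):
    # One slow remote round trip per host; every further request to the
    # same host is answered from memory.
    return classify("http://" + host + "/")

def verdict(url):
    host = urlsplit(url).hostname or ""
    return cached_verdict(host)

Whether per-host caching is acceptable depends on how fine-grained your filter has to be; caching per full URL is safer but will hit far less often on sites like eBay.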
I don't know how perfect your remote content filter is or how critical bypassed requests are, but this may be another approach: enable bypass and increase the number of redirectors to find the minimum percentage of bypassed requests.
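Roughly like this, if I remember the directive names right (they differ between Squid versions, so check your squid.conf documentation; Squid 2.6 and later use the url_rewrite_* names):

# squid.conf sketch (Squid 2.5-style names; 2.6+ uses url_rewrite_program,
# url_rewrite_children and url_rewrite_bypass instead)
redirect_program /usr/local/bin/filter-redirector   # path is an example
redirect_children 20      # raise until bypassed requests become rare
redirector_bypass on      # pass requests unfiltered when all helpers are
                          # busy instead of dying with the FATAL error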
Regards, Hendrik Voigtländer
