Mon 2007-03-26 at 11:40 +0200, Guillaume Smet wrote:
> On 3/26/07, Henrik Nordstrom <[EMAIL PROTECTED]> wrote:
> > One way is to set up a separate set of cache_peer for these robots,
> > using the no-cache cache_peer option to avoid having that traffic
> > cached. Then use cache_peer_access with suitable acls to route the robot
> > requests via these peers and deny them from the other normal set of
> > peers.
> 
> AFAICS, it won't solve the problem as the robots won't be able to
> access the "global" cache read-only.

It does.

Squid's request processing sequence is roughly:

1. accept request
2. http_access and the other access controls
3. cache lookup, send response if a cache hit.
4. if cache miss, look for a cache_peer
5. cache response if allowed

The cache lookup in step 3 happens before any peer selection, so the
robots still get hits from the "global" cache, which is the read-only
access you are after; their misses are forwarded via the separate
peers, where the peer option mentioned above keeps the fetched
responses from being stored.

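Roughly the kind of configuration I have in mind (the robots acl, peer
host names and ports below are only placeholders for whatever matches
your setup; proxy-only is the cache_peer flag documented in squid.conf
for not storing objects fetched via that peer locally):

  # acl matching the robot requests, here on User-Agent purely
  # as an example
  acl robots browser -i Googlebot Slurp msnbot

  # peer used only by the robots; proxy-only keeps responses
  # fetched via this peer out of the local cache
  cache_peer robot-parent.example.com parent 3128 0 no-query proxy-only

  # the normal parent, cached as usual
  cache_peer normal-parent.example.com parent 3128 0 no-query

  # route robot requests via the robot peer only, everything
  # else via the normal peer
  cache_peer_access robot-parent.example.com allow robots
  cache_peer_access robot-parent.example.com deny all
  cache_peer_access normal-parent.example.com deny robots
  cache_peer_access normal-parent.example.com allow all

cache_peer_access rules are evaluated in order with the first match
winning, just like http_access, so keep the robot rules before the
catch-all ones.
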
Regards
Henrik
