My comments are interspersed below for readability...

On Sunday, February 25, 2001, at 07:10 PM, Graham Leggett wrote:
>  
> This is what squid does - yes - this would be a very useful feature 
> indeed. 
>  
> Some would argue that putting a squid cache in front of Apache will 
> solve this problem, BUT - the idea falls down due to request logging. 
> The only sane place to collect hit statistics is at the topmost level 
> in the cache hierarchy. If you were using squid, then squid would have 
> been doing all the logging, and handling different virtual hosts in 
> different files isn't something squid is any good at. So - a 100% Apache 
> solution would be great. 
>  

I've tried the Squid solution before, in front of a cluster with a couple of 
dozen virtual hosts configured, and agree that logging, and trying to correlate 
the logs afterwards, is a problem with Squid. 8^( That's really the reason I 
decided not to use that approach on Elibrary and friends.

> Squid already uses a number of protocols to implement parent and sibling 
> caches - Apache's new "any protocol" support should be ideal for this. 
>  

Hmm, yes. I hadn't thought that far ahead, but that's an excellent use for it.
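
A parent/sibling query protocol could probably ride on that as an ordinary 
protocol module. Here's a rough sketch, assuming the 2.0-style connection 
hooks and module boilerplate; the icp_* names are made up for illustration:

#include "httpd.h"
#include "http_config.h"
#include "http_connection.h"
#include "apr_pools.h"

static int icp_process_connection(conn_rec *c)
{
    /* Decide whether this connection is ours (e.g. by listener port),
     * speak the inter-cache protocol on it, and return OK so the HTTP
     * protocol handler doesn't run; otherwise decline. */
    return DECLINED;
}

static void icp_register_hooks(apr_pool_t *p)
{
    ap_hook_process_connection(icp_process_connection, NULL, NULL,
                               APR_HOOK_MIDDLE);
}

module AP_MODULE_DECLARE_DATA icp_protocol_module = {
    STANDARD20_MODULE_STUFF,
    NULL, NULL, NULL, NULL, NULL,
    icp_register_hooks
};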

> One thing I wouldn't do is build the capability into mod_cache - I'd 
> build the capability directly into mod_proxy itself, with the ability to 
> query the objects in the storage manager directly. This way mod_cache 
> doesn't get the ability to talk to other servers out there - that's 
> mod_proxy's job. 
>

And it fits nicely into the proxy idea I sketched onto your design: just another 
protocol content generator, but hooked in a little differently (to the storage 
manager).
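
To make that concrete, an inter-cache query arriving over such a protocol could 
be answered straight out of the storage manager, roughly like this. None of the 
cache_* calls below exist yet; they just stand in for whatever interface the 
storage manager ends up exposing:

#include <string.h>
#include "httpd.h"
#include "http_protocol.h"

typedef struct cache_object_t cache_object_t;               /* assumed opaque type */
cache_object_t *cache_storage_lookup(request_rec *r,
                                     const char *key);      /* assumed */
int cache_object_serve(cache_object_t *obj, request_rec *r);/* assumed */

static int proxy_sibling_query_handler(request_rec *r)
{
    cache_object_t *obj;

    if (strcmp(r->handler, "proxy-sibling-query") != 0) {
        return DECLINED;
    }

    /* Ask the storage manager whether we hold a copy: a HIT can be
     * answered locally, a MISS gets reported back to the querying sibling. */
    obj = cache_storage_lookup(r, r->unparsed_uri);
    if (obj == NULL) {
        return HTTP_NOT_FOUND;              /* MISS */
    }
    return cache_object_serve(obj, r);      /* HIT: stream it from storage */
}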
  
> In fact this might be a good reason to split out the storage manager 
> into its own module: mod_storage. This module could be used to load and 
> save any object to any kind of storage, be it RAM, disk, database, LDAP, 
> whatever. 
>  

True. Something for post-mod_cache 1.0?
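
If it did get split out, I could imagine mod_storage presenting something like 
a table of function pointers that each backend fills in. Purely hypothetical 
names below, just to show the shape of it:

#include "apr_pools.h"

typedef struct storage_handle_t storage_handle_t;   /* opaque, per-backend */

typedef struct storage_provider_t {
    const char *name;   /* "ram", "disk", "dbm", "ldap", ... */
    apr_status_t (*open)  (storage_handle_t **h, const char *key,
                           apr_pool_t *p);
    apr_status_t (*read)  (storage_handle_t *h, void *buf, apr_size_t *len);
    apr_status_t (*write) (storage_handle_t *h, const void *buf,
                           apr_size_t len);
    apr_status_t (*remove)(storage_handle_t *h);
    apr_status_t (*close) (storage_handle_t *h);
} storage_provider_t;

/* mod_cache and mod_proxy would then call through whichever provider is
 * configured, without caring where the object actually lives. */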

Chuck

Chuck Murcko
Topsail Group
http://www.topsail.org/
