Forwarding my reply to axkit-users... On Wed, 9 Jan 2002, Tod Harter wrote:
> Why would it matter? Any arbitrary filter chain SHOULD have the same
> caching behaviour. The results of each stage in the filter chain need
> to be cached, and each stage needs to have a way to tell the caching
> system what aspects of a request are significant to it so that proper
> cache invalidation can take place.

I'm not entirely convinced by this architecture (which, IIRC, is what
Cocoon uses). If caching were a zero-cost operation, it would be fine.
But it's not: it has a lot of overhead. So we officially cache at the
end of the request (we unofficially cache everywhere, but that's a
different story). I think Cocoon2 has proved that while some ideas may
sound like they will give good performance, they are not instant wins.

> In other words the caching architecture should involve something like
> each processing stage being able to supply a hash to the caching
> system.

Why bother? All we would need to know is where to start. Each stage
doesn't need a new hash... The use cases for being able to restart at
arbitrary stages on a per-request basis are probably non-existent.

> Every time a request is processed the system needs to go through the
> filter chain and request output. Each stage would then be free to
> analyze the request and the environment and either tell the cache to
> supply the next stage with existing data or else regenerate its output
> along with a new hash for that particular request that can be used by
> the cache later to recover it if required.
>
> I suspect that really accurate caching behaviour along those lines is
> going to require some major changes to the processor and cache module
> APIs.

Most likely. But I don't see the win. Cocoon's certainly no better off
for it, except for a slightly cleaner design. Of course, I'd be willing
to eat my words for some code...
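For concreteness, here's a rough sketch (in Python rather than Perl, and with invented names - `StageCache`, `Stage`, `run_chain` are all hypothetical, not AxKit or Cocoon code) of the per-stage scheme being proposed: each stage derives a cache key from the aspects of the request it cares about, chained off the upstream stage's key, and the pipeline either reuses cached output for that stage or regenerates it.

```python
import hashlib


class StageCache:
    """Toy cache: as noted below, little more than a persistent hash."""

    def __init__(self):
        self._store = {}

    def fetch(self, key):
        return self._store.get(key)

    def save(self, key, content):
        self._store[key] = content


class Stage:
    """One filter-chain stage. cache_key() encodes which aspects of the
    request are significant to this stage; transform() does the work."""

    def __init__(self, name, transform, significant):
        self.name = name
        self.transform = transform
        self.significant = significant  # request fields this stage cares about

    def cache_key(self, request, upstream_key):
        parts = [self.name, upstream_key] + [
            "%s=%s" % (f, request.get(f, "")) for f in self.significant
        ]
        return hashlib.sha1("|".join(parts).encode()).hexdigest()


def run_chain(stages, request, cache):
    """Walk the chain; each stage reuses cached output when its key matches."""
    content, key = request["source"], ""
    for stage in stages:
        key = stage.cache_key(request, key)
        cached = cache.fetch(key)
        if cached is not None:
            content = cached          # hit: skip this stage's work
        else:
            content = stage.transform(content, request)
            cache.save(key, content)  # miss: regenerate and store
    return content
```

Note that even this toy version has to hash and probe the cache at every stage of every request, which is exactly the overhead argument above: the bookkeeping isn't free, win or lose.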
:-)

> I'd suggest something along the lines of a caching API that allows the
> core to ask each stage to "provide output"; then the stages themselves
> would ask the caching subsystem if it already has the required data,
> and output it. If not, then that stage would generate the output and a
> hash key for the cache to identify it by. So essentially the cache
> would just need a couple of functions: one to recover content by hash
> key, and one to save content on a particular hash key. In point of
> fact the cache itself is little more than a persistent perl hash...

One area where we lose with this style is that our current caching
architecture allows us to use DECLINED to deliver the resulting cache.
I think that's something to consider. It might not lose us much
performance-wise, but it's very cool as far as getting all the right
headers sent for us.

Another thing to consider is that we "cache" other things too, like
stylesheets and compiled perl code. That's likely the actual problem
with the code below, because the XPS is cached perl code, and so is
the XSP.

In all, I think there's a much simpler solution to this particular
problem, though I'd be willing to investigate more complete caching
architectures, for certain. Maybe AxKitB2B is the best place to try
this out though, since it has very little code in the way at the
moment, and we also have a fairly good plan for caching SAX events.

-- 
<!-- Matt -->
<:->Get a smart net</:->
