I haven't had time yet to read your essay on caching (I will), but a few thoughts...
Stefano Mazzocchi wrote:

> <skip/>
>
> Every time a request comes, I have the information on the 'estimated'
> time the resource will need to be generated on the different routes.
> Once the decision is taken, I have the information on how much it took
> to get it and I can compare it with the "assumed" time that would have
> taken on the other path. Then I know how much time I *saved* with this
> choice.

I really like the idea of having two options to choose from based on a
single variable (the time it takes to produce the resource). It might
seem overly simplified, but it actually encapsulates the problem well.
In fact, with one extra level of abstraction stating that "the cache is
just another place to get the same thing", this could be simplified
further to something like:

    Source soc = new SourceOfChoice(a, b);

Here, SourceOfChoice (or SoC, if you will ;) is responsible only for
tracking the time it takes to finish a certain operation (such as the
retrieval of some resource), performing the load-balancing
calculations, and delegating the requests accordingly. This design
enforces SoC (in its real meaning, separation of concerns) and allows
SourceOfChoice to focus on a very small problem. ('Source' here, of
course, isn't referring to Cocoon or TrAX Sources.)

In complex applications this could result in a tree that automatically
optimizes the lookup paths (on a node level). I really don't know
Cocoon that well, but for 'some server application' the structure might
look something like:

                    <Pipeline>
                   /          \
    [completely cached]     [dynamic]
                            /       \
                   <Generator>   <Transformers>
                   /        \
             [cached]    [dynamic]
                /             \
            <File>         <DBQuery>

Of course, the cost of making the <decisions> must be justified by the
overall performance gain from adaptability. The idea might also be
better applied to more complex applications (can't think of many,
though ;)

> But we don't stop here: we also have a way to measure the efficiency
> of the cache itself between cache hits and cache misses.
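To make the delegation idea a bit more concrete, here is a rough Java sketch of what I have in mind. The `Source` interface, the running-average bookkeeping, and all the names are my own assumptions for illustration, not Cocoon's actual API:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Hypothetical minimal interface; not Cocoon's or TrAX's Source.
interface Source {
    InputStream getInputStream() throws IOException;
}

// Delegates each request to whichever of the two underlying sources
// has been faster on average so far, and updates its statistics with
// the time the chosen route actually took.
class SourceOfChoice implements Source {
    private final Source a, b;
    private double avgA = 0, avgB = 0;   // running average retrieval time, ms
    private long countA = 0, countB = 0;

    SourceOfChoice(Source a, Source b) {
        this.a = a;
        this.b = b;
    }

    public InputStream getInputStream() throws IOException {
        boolean pickA = avgA <= avgB;     // choose the historically faster route
        Source chosen = pickA ? a : b;
        long start = System.nanoTime();
        InputStream in = chosen.getInputStream();
        double elapsedMs = (System.nanoTime() - start) / 1e6;
        // Update the running average for the route we actually used.
        if (pickA) {
            avgA = (avgA * countA + elapsedMs) / ++countA;
        } else {
            avgB = (avgB * countB + elapsedMs) / ++countB;
        }
        return in;
    }
}
```

Because a SourceOfChoice is itself a Source, these can be nested to form exactly the kind of tree described above, with each node making its own local decision.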
I've used an extremely simple but quite powerful adaptive cache
implementation that periodically adjusts its size according to the
number of cache hits and misses. It supports pluggable
capacity-adjustment algorithms, and the default one works as follows:

    if cacheMisses > maxMisses and currentCapacity < maxCapacity
        increase capacity by cacheMisses / maxMisses * capacityIncrement
    else if cacheMisses < minMisses and currentCapacity > minCapacity
        decrease capacity by capacityDecrement

This simple approach is quite effective, as it allows the capacity to
grow rapidly under high-load conditions.

I'm not sure if any of this is really useful for Cocoon, but you never
know ;)

(: Anrie ;)
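That default adjustment rule can be sketched in Java as below. The class and parameter names, and the assumption that `adjust` is called once per measurement period with that period's miss count, are mine for illustration:

```java
// Sketch of the periodic capacity-adjustment rule described above.
class AdaptiveCapacity {
    int currentCapacity;
    final int minCapacity, maxCapacity;
    final int minMisses, maxMisses;
    final int capacityIncrement, capacityDecrement;

    AdaptiveCapacity(int current, int minCap, int maxCap,
                     int minMisses, int maxMisses, int inc, int dec) {
        this.currentCapacity = current;
        this.minCapacity = minCap;
        this.maxCapacity = maxCap;
        this.minMisses = minMisses;
        this.maxMisses = maxMisses;
        this.capacityIncrement = inc;
        this.capacityDecrement = dec;
    }

    // Called once per adjustment period with the misses seen in it.
    void adjust(int cacheMisses) {
        if (cacheMisses > maxMisses && currentCapacity < maxCapacity) {
            // Grows faster the further misses exceed the threshold,
            // which is what lets the cache expand rapidly under load.
            currentCapacity = Math.min(maxCapacity,
                currentCapacity + cacheMisses / maxMisses * capacityIncrement);
        } else if (cacheMisses < minMisses && currentCapacity > minCapacity) {
            currentCapacity = Math.max(minCapacity,
                currentCapacity - capacityDecrement);
        }
    }
}
```

Note the asymmetry: growth is proportional to how badly the cache is missing, while shrinking happens by a fixed step, so the cache backs off slowly once load drops.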