Antti Koivunen wrote:

> > Every time a request comes in, I have the information on the 'estimated'
> > time the resource will need to be generated on the different routes.
> > Once the decision is taken, I have the information on how much it took
> > to get it, and I can compare it with the "assumed" time it would have
> > taken on the other path. Then I know how much time I *saved* with this
> > choice.
>
> I really like the idea of having two options to choose from based on
> a single variable (the time it takes to produce the resource).

Well, that's one possibility. In general, the choice goes to the 'least
expensive' path, where you can define 'cost' as you like (normally, the
time it takes to produce the resource).

> It might seem overly simplified, but it actually encapsulates the
> problem well.
>
> In fact, with one extra level of abstraction stating that "cache is just
> another place to get the same thing", this could be further simplified
> to something like:
>
>   Source soc = new SourceOfChoice(Source a, Source b);
>
> Here, SourceOfChoice (or SoC, if you will ;) is only responsible for
> tracking the time it takes to finish a certain operation (such as the
> retrieval of some resource), performing the load balancing calculations,
> and delegating the requests accordingly.

Uh, that's a good suggestion, I didn't think about adding load balancing
to the picture... (a rough sketch of what I have in mind is at the end of
this mail)

> This design enforces SoC (in its real meaning) and allows SourceOfChoice
> to focus on a very small problem. ('Source' here, of course, isn't
> referring to Cocoon or TrAX Sources.)
>
> In complex applications this could result in a tree that automatically
> optimizes the lookup paths (on a node level).

Yes, that's the idea.

> I really don't know Cocoon that well, but for 'some server application'
> the structure might look something like:
>
>                <Pipeline>
>                /        \
>   [completely cached]  [dynamic]
>                        /       \
>               <Generator>   <Transformers>
>                /       \
>           [cached]   [dynamic]
>                       /     \
>                   <File>  <DBQuery>
>
> Of course, the cost of making the <decisions> must be justified by the
> overall performance gain from adaptability. The idea might also be
> better applied to more complex applications (can't think of many,
> though ;)

Well, one example would be a transformer stylesheet dynamically generated
out of aggregated dynamic pipelines. It might be big-time FS, I know, but
it's possible with today's architecture and the cache adapts to this very
well.

> > But we don't stop here: we also have a way to measure the efficiency of
> > the cache itself between cache hits and cache misses.
>
> I've used an extremely simple, but quite powerful adaptive cache
> implementation that periodically adjusts its size according to the
> number of cache hits and misses. It supports pluggable capacity
> adjustment algorithms, and the default one works as follows:
>
>   if cacheMisses > maxMisses and currentCapacity < maxCapacity
>     increase capacity by cacheMisses / maxMisses * capacityIncrement
>   else if cacheMisses < minMisses and currentCapacity > minCapacity
>     decrease capacity by capacityDecrement
>
> This simple approach is quite effective, as it allows the capacity to
> grow rapidly under high load conditions. I'm not sure if any of this is
> really useful for Cocoon, but you never know ;)

Ah, BTW, in my caching algorithms there is no notion of 'floating storage
capacity', since I assume a given fixed amount of memory. This will need
to be fixed (I didn't have time to complete the survey back then and
forgot about it until now that I saw your pseudocode). Thanks for
bringing this up.
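Just to check that I read your default algorithm right, here is roughly
how I picture it in Java. This is only a quick, untested sketch: the names
(CapacityPolicy, DefaultCapacityPolicy, adjust) are made up for
illustration and don't correspond to anything that exists in Cocoon or
Avalon today.

  // Pluggable capacity adjustment policy, following the pseudocode above.
  // All names are invented for illustration only.
  interface CapacityPolicy {
      // Returns the new capacity, given the misses seen in the last period.
      int adjust(int currentCapacity, int cacheMisses);
  }

  public class DefaultCapacityPolicy implements CapacityPolicy {

      private final int minCapacity, maxCapacity;
      private final int minMisses, maxMisses;
      private final int capacityIncrement, capacityDecrement;

      public DefaultCapacityPolicy(int minCapacity, int maxCapacity,
                                   int minMisses, int maxMisses,
                                   int capacityIncrement, int capacityDecrement) {
          this.minCapacity = minCapacity;
          this.maxCapacity = maxCapacity;
          this.minMisses = minMisses;
          this.maxMisses = maxMisses;
          this.capacityIncrement = capacityIncrement;
          this.capacityDecrement = capacityDecrement;
      }

      public int adjust(int currentCapacity, int cacheMisses) {
          if (cacheMisses > maxMisses && currentCapacity < maxCapacity) {
              // grow quickly under high load: the more misses, the bigger the step
              int increment = cacheMisses / maxMisses * capacityIncrement;
              return Math.min(maxCapacity, currentCapacity + increment);
          } else if (cacheMisses < minMisses && currentCapacity > minCapacity) {
              // shrink gently when the cache is mostly hitting
              return Math.max(minCapacity, currentCapacity - capacityDecrement);
          }
          return currentCapacity;
      }
  }

The cache itself would then call adjust() once per measurement period and
resize its store accordingly; the min/max bounds keep the adaptation from
running away in either direction.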
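And going back to the SourceOfChoice idea above, this is the sketch I
promised. Again, rough and untested, with made-up names; the 'Source'
interface below is not the Cocoon or TrAX Source.

  // Minimal 'Source' abstraction: anything that can produce a resource.
  interface Source {
      byte[] get(String key) throws Exception;
  }

  // Delegates each request to whichever of the two sources has been
  // cheaper (faster) on average so far, and keeps measuring.
  public class SourceOfChoice implements Source {

      private final Source a;
      private final Source b;
      private long totalTimeA = 0, callsA = 0;
      private long totalTimeB = 0, callsB = 0;

      public SourceOfChoice(Source a, Source b) {
          this.a = a;
          this.b = b;
      }

      public byte[] get(String key) throws Exception {
          // pick the source with the lower average cost so far
          boolean useA = average(totalTimeA, callsA) <= average(totalTimeB, callsB);
          Source chosen = useA ? a : b;

          long start = System.currentTimeMillis();
          byte[] result = chosen.get(key);
          long elapsed = System.currentTimeMillis() - start;

          // record the measured cost so future choices can adapt
          if (useA) { totalTimeA += elapsed; callsA++; }
          else      { totalTimeB += elapsed; callsB++; }

          return result;
      }

      private static double average(long total, long calls) {
          // an untried source looks 'free', so both get sampled eventually
          return (calls == 0) ? 0.0 : (double) total / calls;
      }
  }

A cached source and a 'live' source would then be composed as

  Source soc = new SourceOfChoice(cachedSource, liveSource);

(variable names invented, of course), and the tree you drew above would
simply be SourceOfChoice nodes composed recursively, each one optimizing
its own little decision.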
-- 
Stefano Mazzocchi                  One must still have chaos in oneself to
                                   be able to give birth to a dancing star.
<[EMAIL PROTECTED]>                                     Friedrich Nietzsche
--------------------------------------------------------------------