G'day all. The key here is the pattern in which the data in the tree are accessed -- and it seems that whether or not that pattern changes over time is not terribly significant. If there's a degree of locality of reference, i.e. accessing a certain value raises the probability that values `near' it (in terms of whatever ordering is being used) will be accessed in the short term, splaying is likely a big win. Otherwise the constant factors likely clobber you, as each access `drags along' the neighbors.
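For anyone who wants to see the restructuring that does the `dragging', here's a rough Haskell sketch of the splay operation -- my own toy version with the usual zig/zig-zig/zig-zag cases, nothing from the thread; Tree, splay and lookupSplay are made-up names, and I haven't been careful about the amortised analysis:

data Tree k v = Leaf | Node (Tree k v) k v (Tree k v) deriving Show

-- splay x t rebuilds the tree so that the node with key x -- or the last
-- node on the search path, if x is absent -- becomes the root.
splay :: Ord k => k -> Tree k v -> Tree k v
splay _ Leaf = Leaf
splay x t@(Node l k v r)
  | x == k = t
  -- x is (or would be) somewhere in the left subtree
  | x < k = case l of
      Leaf -> t                                        -- search bottomed out; x absent
      Node ll lk lv lr
        | x == lk   -> Node ll lk lv (Node lr k v r)   -- zig (single rotation)
        | x <  lk   -> case splay x ll of              -- zig-zig
            Leaf           -> Node ll lk lv (Node lr k v r)
            Node a ak av b -> Node a ak av (Node b lk lv (Node lr k v r))
        | otherwise -> case splay x lr of              -- zig-zag
            Leaf           -> Node ll lk lv (Node lr k v r)
            Node a ak av b -> Node (Node ll lk lv a) ak av (Node b k v r)
  -- mirror image for the right subtree
  | otherwise = case r of
      Leaf -> t
      Node rl rk rv rr
        | x == rk   -> Node (Node l k v rl) rk rv rr   -- zig
        | x >  rk   -> case splay x rr of              -- zig-zig
            Leaf           -> Node (Node l k v rl) rk rv rr
            Node a ak av b -> Node (Node (Node l k v rl) rk rv a) ak av b
        | otherwise -> case splay x rl of              -- zig-zag
            Leaf           -> Node (Node l k v rl) rk rv rr
            Node a ak av b -> Node (Node l k v a) ak av (Node b rk rv rr)

-- Lookup that also hands back the restructured tree, so the caller keeps
-- using it; the just-touched key is now at the root, with part of its
-- neighbourhood in the ordering pulled up along with it.
lookupSplay :: Ord k => k -> Tree k v -> (Maybe v, Tree k v)
lookupSplay x t = case splay x t of
  t'@(Node _ k v _) | k == x -> (Just v, t')
  t'                         -> (Nothing, t')

A run of lookups clustered around the same key keeps that neighbourhood near the top of the tree -- which is the win if there's locality, and pure wasted restructuring if there isn't.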
Tom Pledger wrote:
How about adapting splay trees so that their pointers become weak after a certain depth? The advantage for caching is that the more frequently used elements move closer to the root, so you wouldn't have to add much code for tracking recent use, just a depth threshold.
Nice idea, but I think it wouldn't be generally useful. Splay trees optimise themselves for static frequency distributions. Caches, on the other hand, are generally used more for changing working sets (i.e. dynamic frequency distributions). It seems to me that splay trees are therefore not necessarily a good fit.
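That said, just to pin down what the suggestion would look like, here's a crude sketch of the depth-threshold part, reusing the Tree type from the splay sketch above. Actual weak pointers would need System.Mem.Weak (and IO) so the GC could reclaim the deep subtrees lazily; here they're simply cut off at the threshold, which is the effect the cache would eventually observe anyway. trimBelow is an invented name:

-- Drop everything deeper than d levels: an eager stand-in for letting the
-- sub-threshold links go weak and be collected.
trimBelow :: Int -> Tree k v -> Tree k v
trimBelow _ Leaf = Leaf
trimBelow d (Node l k v r)
  | d <= 0    = Leaf
  | otherwise = Node (trimBelow (d - 1) l) k v (trimBelow (d - 1) r)

Calling it every so often after a splay -- say, trimBelow 10 (splay x t) -- gives the cheap eviction flavour described above: recently splayed keys survive, cold ones near the fringe don't. But, as argued, that only pays off if the access pattern has the right kind of locality.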
As far as taking a modified LRU approach is concerned (since, as mentioned elsethread, when LRU goes bad, it goes *really* bad), you might want to look at:
http://www.cc.gatech.edu/~yannis/eelru.ps
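For comparison, here's a bare-bones version of the plain LRU policy that the paper generalises -- just a toy of mine, nothing from the paper; Cache, cacheLookup and cacheInsert are invented names, and the capacity is assumed to be positive:

import qualified Data.Map as M
import Data.List (minimumBy)
import Data.Ord (comparing)

-- Each entry remembers the value of a monotonically increasing counter at
-- its last access; when the cache is full, the entry with the smallest
-- (oldest) timestamp is evicted.
data Cache k v = Cache
  { capacity :: Int
  , clock    :: Integer                -- bumped on every access
  , entries  :: M.Map k (Integer, v)   -- key -> (last-used time, value)
  }

emptyCache :: Int -> Cache k v
emptyCache cap = Cache cap 0 M.empty

-- A hit refreshes the entry's timestamp; that's the whole of the LRU policy.
cacheLookup :: Ord k => k -> Cache k v -> (Maybe v, Cache k v)
cacheLookup x c = case M.lookup x (entries c) of
  Nothing     -> (Nothing, c)
  Just (_, v) ->
    let now = clock c + 1
    in  (Just v, c { clock = now, entries = M.insert x (now, v) (entries c) })

-- Insert a binding, evicting the least recently used entry if full.
cacheInsert :: Ord k => k -> v -> Cache k v -> Cache k v
cacheInsert x v c = c { clock = now, entries = M.insert x (now, v) kept }
  where
    now  = clock c + 1
    full = M.size (entries c) >= capacity c && not (M.member x (entries c))
    kept | full      = M.delete oldest (entries c)
         | otherwise = entries c
    -- the key whose last access lies furthest in the past
    oldest = fst (minimumBy (comparing (fst . snd)) (M.toList (entries c)))

The failure mode is easy to reproduce with this: loop over capacity+1 distinct keys and every single lookup misses, which is (as I understand it) exactly the kind of pathology the paper's adaptive scheme is aimed at.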
Cheers, (with hopes I'm not out of my league)
--ag
--
Artie Gold -- Austin, Texas
Oh, for the good old days of regular old SPAM.