Mike Matrigali <[EMAIL PROTECTED]> writes:

> thanks, that makes sense - I just wasn't thinking about the
> startup costs and that growing the cache created invalid pages in the
> same way as "shrinking" does in the case of drop table.
> I tend to ignore the startup and wait for the
> steady state and concentrate on performance there, great
> you found this issue.
The problem was more like the steady state was never reached. Not sure
"steady" is the right word for the state being reached, though...

> The clock algorithm is an area that may be ripe for improvements
> (or probably better a complete new cache factory),
> especially when dealing with very large caches. Also the cache
> may be the first place to look to use new more concurrent
> data structures provided by java. I would expect the current
> design to scale reasonably well on 1, 2 and maybe 4 processors -
> but it may see problems after that. I would expect the most gain
> to be first the buffer manager, next the lock manager, and then
> the various other caches (statement cache, open file cache).

I imagine it would be relatively easy to implement and test prototypes
of new cache managers in Derby. Implementing the CacheFactory and
CacheManager interfaces should be enough, I think. Experimenting with
concurrent hash tables, multiple (prioritized) LRUs and other caching
strategies would certainly be an interesting task, and I too believe
there's a lot to gain in this area.

-- 
Knut Anders
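To make the idea of a more concurrent cache a little more concrete, here is a
rough, standalone sketch of a clock-style cache built on
java.util.concurrent.ConcurrentHashMap and AtomicBoolean. The ClockCacheSketch
class and its find/insert/evictOne methods are invented purely for
illustration; an actual Derby prototype would instead implement the
CacheFactory and CacheManager interfaces and plug into the existing cache
service, and a production replacement policy would need far more careful
bookkeeping than this.

import java.util.Iterator;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;

public class ClockCacheSketch<K, V> {

    /** Cache entry carrying the clock algorithm's "recently used" bit. */
    private static final class Entry<V> {
        final V value;
        final AtomicBoolean referenced = new AtomicBoolean(true);
        Entry(V value) { this.value = value; }
    }

    private final int maxSize;
    private final ConcurrentHashMap<K, Entry<V>> map =
        new ConcurrentHashMap<K, Entry<V>>();

    public ClockCacheSketch(int maxSize) {
        this.maxSize = maxSize;
    }

    /** Look up a key, setting its reference bit on a hit. */
    public V find(K key) {
        Entry<V> e = map.get(key);
        if (e == null) {
            return null;
        }
        e.referenced.set(true);
        return e.value;
    }

    /** Insert a value, evicting with a clock-like sweep if the cache is full. */
    public void insert(K key, V value) {
        if (map.size() >= maxSize) {
            evictOne();
        }
        map.put(key, new Entry<V>(value));
    }

    /**
     * One sweep of the "clock hand": clear reference bits until an entry
     * whose bit was already clear is found, then evict it. The hash map's
     * iteration order stands in for the clock's circular order.
     */
    private void evictOne() {
        for (int pass = 0; pass < 2; pass++) {
            Iterator<Map.Entry<K, Entry<V>>> it = map.entrySet().iterator();
            while (it.hasNext()) {
                Map.Entry<K, Entry<V>> me = it.next();
                // getAndSet(false) clears the bit; if it was already false,
                // this entry has not been touched since the last sweep.
                if (!me.getValue().referenced.getAndSet(false)) {
                    it.remove();
                    return;
                }
            }
        }
    }
}

The point of the sketch is only that ConcurrentHashMap's weakly consistent
iterators let the eviction sweep run without blocking concurrent lookups,
which is the sort of property that might help the buffer manager scale past
a few processors; whether that actually beats the current clock
implementation would have to be measured.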
