> > thought out way of predicting/limiting their size.  (2) How the heck do
> > you get rid of obsoleted cached plans, if the things stick around in
> > shared memory even after you start a new backend?  (3) A shared cache
> > requires locking; contention among multiple backends to access that
> > shared resource could negate whatever performance benefit you might hope
> > to realize from it.
I don't understand why the locking should be such a problem. Surely the only lock a transaction needs on a stored query is one that prevents the cache invalidation mechanism from deleting it out from under it? That would mean tonnes of readers on the cache, none of them blocking each other, plus the odd invalidation event that needs an exclusive lock.

As for invalidation, there are probably just two reasons to invalidate a query in the cache: (1) the cache is running out of space and you use LRU or something to evict old queries, or (2) someone runs ANALYZE, in which case all cached queries should just be flushed. If an actual table is specified for ANALYZE, then just drop the queries that touch that table. (A sketch of this locking and invalidation scheme follows below.)

Could this cache mechanism be used to make views fast as well? You could cache the queries that back each view on first use, and then they would follow the same flushing rules...
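For illustration, here is a minimal sketch of that read-mostly scheme. It is ordinary C with a pthread read/write lock standing in for PostgreSQL's shared-memory locking (a real shared cache would need process-shared locks and refcounting), and all the names here (CachedPlan, plan_cache_lookup, and so on) are hypothetical: any number of backends hold the read lock for lookups, and only a store or an ANALYZE-driven flush takes the exclusive lock.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* One cached plan; in reality this would live in shared memory. */
typedef struct CachedPlan {
    char table[64];             /* table the plan depends on */
    char sql[256];              /* normalized query text */
    struct CachedPlan *next;
} CachedPlan;

static CachedPlan *cache_head = NULL;
static pthread_rwlock_t cache_lock = PTHREAD_RWLOCK_INITIALIZER;

/* Store a plan: writers are rare, so the exclusive lock is cheap. */
void plan_cache_store(const char *table, const char *sql)
{
    CachedPlan *p = calloc(1, sizeof(CachedPlan));

    snprintf(p->table, sizeof(p->table), "%s", table);
    snprintf(p->sql, sizeof(p->sql), "%s", sql);

    pthread_rwlock_wrlock(&cache_lock);
    p->next = cache_head;
    cache_head = p;
    pthread_rwlock_unlock(&cache_lock);
}

/* Look up a plan: any number of readers may hold the lock at once,
 * so lookups never block each other. */
CachedPlan *plan_cache_lookup(const char *sql)
{
    CachedPlan *p, *found = NULL;

    pthread_rwlock_rdlock(&cache_lock);
    for (p = cache_head; p != NULL; p = p->next) {
        if (strcmp(p->sql, sql) == 0) {
            found = p;
            break;
        }
    }
    pthread_rwlock_unlock(&cache_lock);
    /* A real cache would pin/refcount the entry before unlocking so
     * an invalidation could not free it out from under the caller. */
    return found;
}

/* Invalidate: ANALYZE on a named table drops only that table's plans;
 * table == NULL (ANALYZE with no argument) flushes everything. */
void plan_cache_invalidate(const char *table)
{
    CachedPlan **pp = &cache_head, *p;

    pthread_rwlock_wrlock(&cache_lock);
    while ((p = *pp) != NULL) {
        if (table == NULL || strcmp(p->table, table) == 0) {
            *pp = p->next;
            free(p);
        } else {
            pp = &p->next;
        }
    }
    pthread_rwlock_unlock(&cache_lock);
}

int main(void)
{
    plan_cache_store("foo", "SELECT * FROM foo WHERE id = $1");
    plan_cache_store("bar", "SELECT count(*) FROM bar");

    printf("before ANALYZE foo: %s\n",
           plan_cache_lookup("SELECT * FROM foo WHERE id = $1") ? "hit" : "miss");

    plan_cache_invalidate("foo");   /* as if someone ran ANALYZE foo */

    printf("after  ANALYZE foo: %s\n",
           plan_cache_lookup("SELECT * FROM foo WHERE id = $1") ? "hit" : "miss");
    return 0;
}

The point of the sketch is only that the common path (lookup) never contends with other lookups; the exclusive lock shows up solely on the rare store/flush events, which is why the contention worry above may be overstated.

Chris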