Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> writes:
> On 12.08.2011 21:49, Robert Haas wrote:
>> I don't think it really matters whether we occasionally blow away an
>> entry unnecessarily due to a hash-value collision.  IIUC, we'd only
>> need to worry about hash-value collisions between rows in the same
>> catalog; and the number of entries that we have cached had better be
>> many orders of magnitude less than 2^32.  If the cache is large enough
>> that we're having hash value collisions more than once in a great
>> while, we probably should have flushed some entries out of it a whole
>> lot sooner and a whole lot more aggressively, because we're likely
>> eating memory like crazy.
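[Robert's "many orders of magnitude less than 2^32" intuition can be sanity-checked with a quick birthday-problem estimate. This is an illustrative sketch only, not PostgreSQL code; the function name and numbers are mine:]

```python
import math

def collision_probability(n_entries: int, hash_bits: int = 32) -> float:
    """Approximate the birthday-problem probability that at least two of
    n_entries cached rows share a hash value in a 2**hash_bits space,
    using the standard approximation 1 - exp(-n*(n-1) / (2*m))."""
    m = 2 ** hash_bits
    return 1.0 - math.exp(-n_entries * (n_entries - 1) / (2.0 * m))

# Even an unusually large catcache of 10,000 entries has only about a
# 1.2% chance of containing *any* colliding pair in a 32-bit hash space.
print(f"{collision_probability(10_000):.4f}")
```

So a collision within one catalog's cached rows is rare, which supports treating the occasional unnecessary flush as acceptable.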
> What would suck, though, is if you have an application that repeatedly
> creates and drops a temporary table, and the hash value for that happens
> to match some other table in the database.  catcache invalidation would
> keep flushing the entry for that other table too, and you couldn't do
> anything about it except for renaming one of the tables.

> Despite that, +1 for option #2.  The risk of collision seems acceptable,
> and the consequence of a collision wouldn't be too bad in most
> applications anyway.

Yeah.  Also, to my mind this is only a fix that will be used in 9.0 and
9.1 --- now that it's occurred to me that we could use tuple xmin/xmax
to invalidate catcaches instead of recording individual TIDs, I'm
excited about doing that instead for 9.2 and beyond.  I believe that
that could result in a significant reduction in sinval traffic, which
would be a considerable performance win.

			regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
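[To make Heikki's collateral-flush scenario concrete, here is a toy model of option #2's semantics -- invalidation keyed by hash value rather than by exact tuple identity. All names are invented for illustration; this does not mirror PostgreSQL's actual catcache code:]

```python
# Toy catcache keyed on (key -> (hash_value, cached_row)).  Under
# hash-value invalidation, a collision means an innocent entry is
# flushed along with the intended one.
class ToyCatCache:
    def __init__(self):
        self.entries = {}

    def put(self, key, hash_value, row):
        self.entries[key] = (hash_value, row)

    def invalidate_by_hash(self, hash_value):
        # Flush *every* entry whose hash matches: on a collision this
        # also evicts the bystander table Heikki describes.
        self.entries = {k: v for k, v in self.entries.items()
                        if v[0] != hash_value}

cache = ToyCatCache()
cache.put("temp_table", 0xDEADBEEF, "temp table's row")
cache.put("other_table", 0xDEADBEEF, "bystander's row")  # unlucky collision
cache.invalidate_by_hash(0xDEADBEEF)   # dropping the temp table...
print(len(cache.entries))              # ...flushes both entries: prints 0
```

The bystander entry simply gets rebuilt on next access, which is why the cost of a collision is a performance blip rather than a correctness problem.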