On Jun 29, 2011, at 3:18 PM, Robert Haas wrote:
> To be clear, I don't really think it matters how sensitive the cache
> is to a *complete* flush.  The question I want to ask is: how much
> does it take to knock ONE page out of cache?  And what are the chances
> of that happening too frequently?  It seems to me that if a run of 100
> tuples with the same previously-unseen XID is enough to knock over the
> applecart, then that's not a real high bar - you could easily hit that
> limit on a single page.  And if that isn't enough, then I don't
> understand the algorithm.

Would it be reasonable to keep a second-level cache that stores individual XIDs 
instead of blocks? That would provide protection for XIDs that are extremely 
common but don't fit well with the pattern of XID ranges we're caching. I would 
expect this to happen if you had a transaction that touched a bunch of data 
(i.e., a bulk load or update) some time ago (so the other XIDs around it are 
less likely to be interesting) but that isn't old enough to have been frozen 
yet. Obviously you couldn't keep too many XIDs in this secondary cache, but if 
you're just trying to prevent certain pathological cases, hopefully you 
wouldn't need to keep that many.
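To make the idea concrete, here is a minimal sketch of what such a secondary cache might look like: a small fixed-size array of individual XIDs with simple round-robin replacement, consulted only after the primary range-based cache misses. All names, the cache size, and the replacement policy are illustrative assumptions, not from any actual patch.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t TransactionId;    /* stand-in for Postgres's TransactionId */

/* Hypothetical secondary cache: a handful of individual XIDs that are hot
 * but don't fall inside any cached XID range.  Size is illustrative. */
#define XID_CACHE_SIZE 8

typedef struct
{
    TransactionId xids[XID_CACHE_SIZE];
    int           nused;        /* slots currently occupied */
    int           next_victim;  /* round-robin replacement pointer */
} XidCache;

static void
xidcache_init(XidCache *cache)
{
    memset(cache, 0, sizeof(*cache));
}

/* Return 1 if xid is present in the secondary cache, else 0.
 * A linear scan is fine at this size. */
static int
xidcache_lookup(const XidCache *cache, TransactionId xid)
{
    for (int i = 0; i < cache->nused; i++)
        if (cache->xids[i] == xid)
            return 1;
    return 0;
}

/* Remember an XID whose status was just resolved the slow way
 * (i.e., it missed the primary range-based cache). */
static void
xidcache_insert(XidCache *cache, TransactionId xid)
{
    if (xidcache_lookup(cache, xid))
        return;
    if (cache->nused < XID_CACHE_SIZE)
        cache->xids[cache->nused++] = xid;
    else
    {
        /* Full: overwrite slots in round-robin order. */
        cache->xids[cache->next_victim] = xid;
        cache->next_victim = (cache->next_victim + 1) % XID_CACHE_SIZE;
    }
}
```

The point of the sketch is that a lone bulk-load XID, which would keep evicting range entries from the primary cache, instead lands in one of these slots and stays there as long as it keeps being looked up; with only a few slots, eviction from this cache is cheap and can't disturb the range cache at all.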
--
Jim C. Nasby, Database Architect                   j...@nasby.net
512.569.9461 (cell)                         http://jim.nasby.net



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
