On Fri, Jun 14, 2013 at 11:28 PM, Alvaro Herrera
<alvhe...@2ndquadrant.com> wrote:
> Re-summarization is relatively expensive, because the complete page range has
> to be scanned.

That doesn't sound too bad to me. It just means there's a downside to
having larger page ranges. I would expect the page ranges to be
something in the ballpark of 32 pages -- scanning 32 pages to
re-summarize doesn't sound that painful, but it sounds large enough
that the resulting index would be a reasonable size.
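
To put rough numbers on that (my assumptions, not the patch's): with
the default 8 kB pages, a 32-page range covers 256 kB of heap, so a
1 TB table has about 4 million ranges. At something like 16 bytes of
min/max per range for an int8 column, that's on the order of 64 MB of
summaries -- tiny next to the table -- while re-summarizing any one
range means reading only 256 kB.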

But I don't understand why an insert would invalidate a tuple. An
insert can just update the min and max incrementally. It's a delete
that invalidates the range, but as you note it doesn't really
invalidate it, just marks it as needing a refresh -- and even then
only if the value being deleted is equal to either the min or the max.
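
Concretely, that incremental rule is trivially cheap. A minimal sketch
in C of what I have in mind (names and layout are mine, not the
patch's):

#include <stdbool.h>

typedef struct RangeSummary
{
    int     min;        /* smallest indexed value in the page range */
    int     max;        /* largest indexed value in the page range */
    bool    stale;      /* set when the range needs re-summarization */
} RangeSummary;

/* An insert can only widen the bounds, so the summary stays valid. */
static void
summary_note_insert(RangeSummary *s, int value)
{
    if (value < s->min)
        s->min = value;
    if (value > s->max)
        s->max = value;
}

/*
 * A delete only matters if it might have removed the current min or
 * max, and even then the old bounds are merely too wide, not wrong,
 * so flagging the range for a later refresh is enough.
 */
static void
summary_note_delete(RangeSummary *s, int value)
{
    if (value == s->min || value == s->max)
        s->stale = true;
}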

> Same-size page ranges?
> Current related literature seems to consider that each "index entry" in a
> minmax index must cover the same number of pages.  There doesn't seem to be a

I assume the reason for this in the literature is the need to quickly
find the summary for a given page when you're handling an insert or
delete. If you have some kind of metadata structure that lets you find
it (which I gather is what the validity map is?) then you wouldn't
need same-size ranges. But that seems like a difficult cost to justify
compared to just having a 1:1 mapping from block to minmax tuple.
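
That 1:1 mapping is the whole appeal of the fixed-size scheme; to
sketch it (again my names, and assuming the 32-page ranges above):

#include <stdint.h>

#define PAGES_PER_RANGE 32      /* assumed fixed range size */

/*
 * With same-size ranges, the summary covering any heap block is found
 * by pure arithmetic -- no validity map or other lookup structure is
 * needed just to locate it.
 */
static inline uint32_t
summary_index_for_block(uint32_t heap_block)
{
    return heap_block / PAGES_PER_RANGE;
}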

-- 
greg

