On Mon, Mar 14, 2011 at 7:40 PM, Greg Stark <gsst...@mit.edu> wrote:
> On Mon, Mar 14, 2011 at 8:33 PM, Robert Haas <robertmh...@gmail.com> wrote:
>> I'm not sure about that either, although I'm not sure of the reverse
>> either.  But before I invest any time in it, do you have any other
>> good ideas for addressing the "it stinks to scan the entire index
>> every time we vacuum" problem?  Or for generally making vacuum
>> cheaper?
>
> You could imagine an index AM that, instead of scanning the index,
> just accumulated all the dead tuples in a hash table and checked it
> before following any index link. Whenever the hash table got too big,
> it could do a sequential scan of the index, prune any pointers to
> those tuples, and start a new hash table.
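
Just to make sure I'm picturing this right, something like the sketch
below?  (Self-contained toy C; every name in it is invented, nothing
from the tree, and the fixed-size pool is standing in for whatever
memory management we'd really want.)

    /*
     * Toy sketch: a hash table of dead TIDs, consulted before following
     * any index link to the heap.
     */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct
    {
        uint32_t    block;      /* heap block number */
        uint16_t    offset;     /* line pointer within that block */
    } TupleId;

    typedef struct DeadTidNode
    {
        TupleId     tid;
        struct DeadTidNode *next;
    } DeadTidNode;

    #define DEADTID_BUCKETS 256
    #define DEADTID_MAX     1024    /* "too big": prune index, start over */

    static DeadTidNode *deadtid_buckets[DEADTID_BUCKETS];
    static DeadTidNode deadtid_pool[DEADTID_MAX];
    static int      deadtid_nused = 0;
    static DeadTidNode *deadtid_freelist = NULL;

    static unsigned
    deadtid_hash(TupleId t)
    {
        return (t.block * 2654435761u) ^ t.offset;
    }

    /* Lookup path: check this before following an index link. */
    static bool
    deadtid_contains(TupleId t)
    {
        DeadTidNode *n = deadtid_buckets[deadtid_hash(t) % DEADTID_BUCKETS];

        for (; n != NULL; n = n->next)
            if (n->tid.block == t.block && n->tid.offset == t.offset)
                return true;
        return false;
    }

    /*
     * Remember a dead TID reported by vacuum.  Returns false when the
     * table is full -- the caller's cue to scan the index once, prune
     * every remembered pointer, and start a new (empty) table.
     */
    static bool
    deadtid_remember(TupleId t)
    {
        unsigned    b = deadtid_hash(t) % DEADTID_BUCKETS;
        DeadTidNode *n;

        if (deadtid_contains(t))
            return true;            /* already remembered */
        if (deadtid_freelist != NULL)
        {
            n = deadtid_freelist;
            deadtid_freelist = n->next;
        }
        else if (deadtid_nused < DEADTID_MAX)
            n = &deadtid_pool[deadtid_nused++];
        else
            return false;           /* too big */

        n->tid = t;
        n->next = deadtid_buckets[b];
        deadtid_buckets[b] = n;
        return true;
    }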

Hmm.  For something like a btree, you could also remove each TID from
the hash table when you kill the corresponding index tuple.
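
Continuing the toy sketch above, that removal is just an unlink (again,
all invented names):

    /*
     * Btree kill-path: once the index tuple pointing at this TID has
     * been killed, there's no reason to keep remembering the TID, so
     * unlink its node and recycle it.  That way the table fills up --
     * and forces a full index scan -- much less often.
     */
    static void
    deadtid_forget(TupleId t)
    {
        unsigned    b = deadtid_hash(t) % DEADTID_BUCKETS;
        DeadTidNode **prev = &deadtid_buckets[b];
        DeadTidNode *n;

        for (n = *prev; n != NULL; prev = &n->next, n = n->next)
        {
            if (n->tid.block == t.block && n->tid.offset == t.offset)
            {
                *prev = n->next;            /* unlink from the chain */
                n->next = deadtid_freelist; /* recycle the node */
                deadtid_freelist = n;
                return;
            }
        }
    }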

> That would work well if there are frequent vacuums finding a few
> tuples per vacuum. It might even allow us to absorb dead tuples from
> "retail" vacuums so we could get rid of line pointers earlier.  But it
> would involve more WAL-logged operations and incur extra overhead on
> each index lookup.

Yeah, that seems deeply unfortunate.  It's hard to imagine us wanting
to go there.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
