On Tue, Oct 29, 2013 at 8:16 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:

> Leonardo Francalanci <m_li...@yahoo.it> writes:
> >> Before getting too excited about some new academic index type, it's
> worth
> >> noting the sad state in which hash indexes have languished for years.
>
> > Aren't hash indexes in a poor state because they are not faster than
> btree in every condition?
>
> They should, in theory, be faster than btrees -- O(1) not O(log N) page
> fetches per lookup.


However, all but one or two of those page fetches are almost surely cached,
so if the bottleneck is IO then the theoretical advantage is unlikely to
show up in practice.
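
To put rough numbers on it (back-of-the-envelope only, assuming 8kB pages
and something like 370 index tuples per page; these are assumptions, not
measurements):

    100,000,000 entries / ~370 per leaf      ~= 270,000 leaf pages
    270,000 leaf pages  / ~370 per internal  ~= 730 internal pages, + 1 root

The root and internal level together are only a few MB and will essentially
always be sitting in shared_buffers, so a cold lookup costs about one
uncached read in either index type; the O(log N) factor is paid almost
entirely from cache.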



> In practice they don't seem to be faster, and
> nobody's bothered to find out exactly why.


We know why, more or less.  Hash indexes use heavyweight lmgr locks to
protect against bucket splits conflicting with ordinary operations, and that
destroys performance even in isolation, and even more so under concurrency.
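
For anyone who doesn't have that code paged in, the lookup path is roughly
the following.  This is a simplified sketch from memory, not the actual
source; the names approximate those in src/backend/access/hash, and
overflow-page traversal, error handling, and the metapage retry dance
discussed below are all elided:

    Buffer       metabuf, buf;
    HashMetaPage metap;
    Bucket       bucket;
    BlockNumber  blkno;

    /* Read the metapage to learn the current bucket count and masks. */
    metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
    metap = HashPageGetMeta(BufferGetPage(metabuf));

    /* Map the hash key to a bucket, and the bucket to its primary block. */
    bucket = _hash_hashkey2bucket(hashkey, metap->hashm_maxbucket,
                                  metap->hashm_highmask,
                                  metap->hashm_lowmask);
    blkno = BUCKET_TO_BLKNO(metap, bucket);
    _hash_relbuf(rel, metabuf);

    /*
     * Heavyweight (lmgr) share lock on the bucket, held across the whole
     * bucket scan so that a concurrent split can't move tuples out from
     * under us.  This is the expensive part: it goes through the shared
     * lock manager even when there is no contention at all.
     */
    _hash_getlock(rel, blkno, HASH_SHARE);

    /* Ordinary lightweight content lock on the bucket's primary page. */
    buf = _hash_getbuf(rel, blkno, HASH_READ, LH_BUCKET_PAGE);
    /* ... scan the page and its overflow chain ... */
    _hash_relbuf(rel, buf);
    _hash_droplock(rel, blkno, HASH_SHARE);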

Robert removed the lmgr lock on the meta page by using a retry loop with
lightweight locks.  I've outlined how to remove the heavyweight lock on the
bucket page as well, but it would require an on-disk change to the index so
that each page knows how far the bucket it belongs to has been split, and it
might also abuse the intention of lightweight locks a bit.  But I'm
reluctant to put much time into that without some prospect of solving the
problem of how to WAL-log bucket splits when a bucket can have an unbounded
number of overflow pages.
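
For anyone who hasn't looked at that commit, the shape of Robert's retry
loop is roughly this (again a from-memory sketch, not the actual code):

    bool        retry = false;
    BlockNumber oldblkno = InvalidBlockNumber;

    /* Pin and share-lock the metapage with an ordinary lightweight lock. */
    metabuf = _hash_getbuf(rel, HASH_METAPAGE, HASH_READ, LH_META_PAGE);
    metap = HashPageGetMeta(BufferGetPage(metabuf));

    for (;;)
    {
        bucket = _hash_hashkey2bucket(hashkey, metap->hashm_maxbucket,
                                      metap->hashm_highmask,
                                      metap->hashm_lowmask);
        blkno = BUCKET_TO_BLKNO(metap, bucket);

        /* Release the metapage content lock, but keep the pin. */
        _hash_chgbufaccess(rel, metabuf, HASH_READ, HASH_NOLOCK);

        /*
         * If the previous iteration locked what is still the right bucket,
         * we're done; otherwise move our heavyweight lock to the bucket
         * that now looks correct.
         */
        if (retry)
        {
            if (oldblkno == blkno)
                break;
            _hash_droplock(rel, oldblkno, HASH_SHARE);
        }
        _hash_getlock(rel, blkno, HASH_SHARE);

        /* Re-lock the metapage and recheck: a split may have intervened. */
        _hash_chgbufaccess(rel, metabuf, HASH_NOLOCK, HASH_READ);
        oldblkno = blkno;
        retry = true;
    }

    /* Done with the metapage. */
    _hash_dropbuf(rel, metabuf);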

(Once each page knows its own split level, we could also remove the need
for even a lightweight lock on the metapage for most operations, by
stuffing some of the key info from the metapage into the relcache.)
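
Very roughly, and with entirely made-up names (cache, bucket_blkno,
hasho_split_level, and expected_split_level are all hypothetical, none of
this exists), the check I have in mind would look something like:

    /*
     * Compute the bucket from maxbucket/highmask/lowmask cached in the
     * relcache and go straight to the bucket's primary page, with no
     * metapage access at all on the fast path.
     */
    bucket = _hash_hashkey2bucket(hashkey, cache->maxbucket,
                                  cache->highmask, cache->lowmask);
    buf = _hash_getbuf(rel, cache->bucket_blkno[bucket],
                       HASH_READ, LH_BUCKET_PAGE);
    opaque = (HashPageOpaque) PageGetSpecialPointer(BufferGetPage(buf));

    /*
     * hasho_split_level is the hypothetical on-disk field: how far this
     * bucket had been split when the page was last written.  If it is
     * newer than what our cached metapage copy predicts, a split has
     * happened since we cached it, so refresh from the real metapage and
     * recompute which bucket to look in.
     */
    if (opaque->hasho_split_level > expected_split_level(cache, bucket))
    {
        _hash_relbuf(rel, buf);
        /* ... refresh the cached metapage info and retry ... */
    }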

Cheers,

Jeff
