"Simon Riggs" <[EMAIL PROTECTED]> writes:
> As more UPDATEs take place, these tuple chains would grow, making
> locating the latest tuple take progressively longer.

This is the part that bothers me --- particularly the random-access
nature of the search.  I wonder whether you couldn't do something
involving an initial table fill-factor of less than 50%, and having
the first updated version living on the same heap page as its parent.
Only when the active chain length was more than one (which you
hypothesize is rare) would it actually be necessary to do a random
access into the overflow table.
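
Very roughly, the kind of placement rule I have in mind, as a sketch
only (the types and names below are invented for illustration, not
anything that exists in the backend):

#include <stddef.h>

typedef struct HeapPageModel
{
    size_t  free_space;     /* bytes currently free on the page */
} HeapPageModel;

typedef enum TuplePlacement
{
    PLACE_SAME_PAGE,        /* new version sits beside its parent */
    PLACE_OVERFLOW          /* fall back to a random access */
} TuplePlacement;

/*
 * With an initial fillfactor under 50%, the parent's page should
 * normally have room for one successor version, so the random access
 * is only paid when the active chain is already longer than one, or
 * the page has filled up.
 */
static TuplePlacement
choose_placement(const HeapPageModel *parent_page,
                 size_t new_tuple_len,
                 int active_chain_len)
{
    if (active_chain_len <= 1 && parent_page->free_space >= new_tuple_len)
        return PLACE_SAME_PAGE;

    return PLACE_OVERFLOW;
}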

More generally, do we need an overflow table at all, rather than having
these overflow tuples living in the same file as the root tuples?  As
long as there's a bit available to mark a tuple as being this special
not-separately-indexed type, you don't need a special location to know
what it is.  This might break down in the presence of seqscans though.
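
To illustrate what I mean by the bit (again just a sketch;
HEAP_NOT_INDEXED is an invented name, not an existing infomask bit):

#include <stdbool.h>
#include <stdint.h>

#define HEAP_NOT_INDEXED    0x0008  /* hypothetical "no index entry" bit */

typedef struct TupleHeaderModel
{
    uint16_t    infomask;
} TupleHeaderModel;

/*
 * A scan or VACUUM walking the same file could tell root tuples from
 * chained, not-separately-indexed versions just by testing the bit.
 */
static bool
tuple_is_separately_indexed(const TupleHeaderModel *tup)
{
    return (tup->infomask & HEAP_NOT_INDEXED) == 0;
}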

Actually, you omitted to mention the locking aspects of moving tuples
around --- exactly how are you going to make that work without breaking
concurrent scans?

> This allows the length of a typical tuple chain to be extremely short in
> practice. For a single connection issuing a stream of UPDATEs the chain
> length will be no more than 1 at any time.

Only if there are no other transactions being held open, which makes
this claim a lot weaker.

> HOT can only work in cases where an UPDATE does not modify one of the
> columns defined in an index on the table, and when we do not alter the
> row length of the tuple.

Seems like "altering the row length" isn't the issue, it's just "is
there room on the page for the new version".  Again, a generous
fillfactor would give you more flexibility.
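
To be concrete about the free-space angle (illustrative only; the
helpers below aren't backend code, and BLCKSZ defaults to 8192):

#include <stdbool.h>
#include <stddef.h>

#define BLCKSZ  8192

/*
 * Space a given fillfactor holds back at insert time, which is exactly
 * what is later available on the page for successor versions.
 */
static size_t
fillfactor_reserve(int fillfactor)
{
    return (size_t) BLCKSZ * (100 - fillfactor) / 100;
}

/* The new version need not be the same length; it just has to fit. */
static bool
new_version_fits(size_t page_free_space, size_t new_tuple_len)
{
    return page_free_space >= new_tuple_len;
}

With fillfactor 50 that holds back about 4K per 8K page, which ought to
be enough for a successor version or two of a typical row.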

> [We'll be able to do that more efficiently when
> we have plan invalidation]

Uh, what's that got to do with it?

                        regards, tom lane
