Andres Freund <and...@2ndquadrant.com> writes:
> On 2014-03-17 21:09:10 +0000, Greg Stark wrote:
>> That said, it would be nice to actually fix the problem, not just
>> detect it. Eventually vacuum would fix the problem. I think. I'm not
>> really sure what will happen actually.

> Indexes will quite possibly stay corrupted, I think. If there was an
> index lookup for an affected row, the kill_prior_tuple logic will
> quite possibly have zapped the index entry.

Whether it did or not, there's no way for the index entry to reach the
now-live tuple version (if it was a HOT update), so the question is moot.
What seems more interesting is whether REINDEX could fix the problem,
but at least with the current logic in catalog/index.c the answer seems
to be "no".

It's possible that a REINDEX attempt would serve to detect whether you have
a problem (i.e., see whether you get one of the errors I just added).  I'm
not sure that's bulletproof, though.
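
(As a sketch only, on a hypothetical table suspect_tbl, the check would
just be rebuilding its indexes and watching for those errors:

    -- Hypothetical table name; the point is only to rebuild all indexes
    -- on the suspect table and see whether the new validation errors fire.
    REINDEX TABLE suspect_tbl;

whether a clean run really proves anything is the open question above.)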

> I think the best way to really clean up a table is to use something like:
> ALTER TABLE rew ALTER COLUMN data TYPE text USING (data);
> where text is the previous type of the column. That should trigger a
> full table rewrite, without any finesse about tracking ctid chains.

Um... don't we have logic in there that's smart enough to short-circuit
that?
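
(For illustration, and assuming the column is already of type text and
that the bare USING (data) form is indeed optimized away as a no-op, a
USING expression the server can't prove equivalent to the old column
should still force the rewrite; a sketch, not a tested recipe:

    -- Same hypothetical table/column as in the quoted example.  Appending
    -- an empty string keeps the value unchanged but, as far as I know, is
    -- not recognized as a no-op, so every row gets rewritten.
    ALTER TABLE rew ALTER COLUMN data TYPE text USING (data || '');

)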

                        regards, tom lane

