I found a few relatively minor issues.

1) I think we should perform a relkind check in
   collect_corrupt_items(). At the moment we'll "gladly" run against an
   index. If we actually entered the main portion of the loop in
   collect_corrupt_items(), that could end up corrupting the table (via
   HeapTupleSatisfiesVacuum()). But in practice it's probably safe,
   because the vm fork doesn't exist for anything but heap/toast
   relations.

2) GetOldestXmin() currently specifies a relation, which can cause
   trouble in recovery:

         * If we're not computing a relation specific limit, or if a shared
         * relation has been passed in, backends in all databases have to be
         * considered.
        allDbs = rel == NULL || rel->rd_rel->relisshared;

        /* Cannot look for individual databases during recovery */
        Assert(allDbs || !RecoveryInProgress());

  i.e., on a standby we'll Assert out. I think that needs to be fixed.

3) Harmless here, but I think it's bad policy to release locks
   on normal relations before the end of the transaction:
+       relation_close(rel, AccessShareLock);
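For reference, the usual idiom when the lock should live until
transaction end is the following one-line sketch (NoLock closes the
relation but keeps the already-acquired lock, which is then released at
transaction end):

```c
relation_close(rel, NoLock);    /* lock held until end of xact */
```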

4) In the check_visible path:

+                       if (check_visible)
+                       {
+                               HTSV_Result state;
+                               state = HeapTupleSatisfiesVacuum(&tuple, OldestXmin, buffer);
+                               if (state != HEAPTUPLE_LIVE ||
+                                       record_corrupt_item(items, 
+                               else

This could theoretically give false positives if GetOldestXmin() went
backwards, but I think that's ok.

5) There's a bunch of whitespace damage in the diff, like
                Oid                     relid = PG_GETARG_OID(0);
-               MemoryContext   oldcontext;
+               MemoryContext oldcontext;

Otherwise this looks good. I played with it for a while; apart from
finding intentionally caused corruption, it didn't flag anything
(besides crashing on a standby, as noted in 2).


Andres Freund

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)