On Wed, Jun 15, 2016 at 12:44 AM, Robert Haas <robertmh...@gmail.com> wrote:
> On Tue, Jun 14, 2016 at 8:11 AM, Robert Haas <robertmh...@gmail.com> wrote:
>>>> I noticed that the tuples that it reported were always offset 1 in a
>>>> page, and that the page always had a maxoff over a couple of hundred,
>>>> and that we called record_corrupt_item because VM_ALL_VISIBLE returned
>>>> true but HeapTupleSatisfiesVacuum on the first tuple returned
>>>> HEAPTUPLE_DELETE_IN_PROGRESS.  It did that because HEAP_XMAX_COMMITTED
>>>> was not set and TransactionIdIsInProgress returned true for xmax.
>>> So this seems like it might be a visibility map bug rather than a bug
>>> in the test code, but I'm not completely sure of that.  How was it
>>> legitimate to mark the page as all-visible if a tuple on the page
>>> still had a live xmax?  If xmax is live and not just a locker then the
>>> tuple is not visible to the transaction that wrote xmax, at least.
>> Ah, wait a minute.  I see how this could happen.  Hang on, let me
>> update the pg_visibility patch.
> The problem should be fixed in the attached revision of
> pg_check_visible.  I think what happened is:
> 1. pg_check_visible computed an OldestXmin.
> 2. Some transaction committed.
> 3. VACUUM computed a newer OldestXmin and marked a page all-visible with it.
> 4. pg_check_visible then used its older OldestXmin to check the
> visibility of tuples on that page, and saw delete-in-progress as a
> result.
> I added a guard against a similar scenario involving xmin in the last
> version of this patch, but forgot that we need to protect xmax in the
> same way.  With this version of the patch, I can no longer get any
> TIDs to pop up out of pg_check_visible in my testing.  (I haven't run
> your test script for lack of the proper Python environment...)

I can still reproduce the problem with this new patch.  What I see is
that OldestXmin, the new RecomputedOldestXmin, and the tuple's xmax all
have the same value.
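
To make sure we're talking about the same guard, here is a minimal
sketch of the recheck as I understand it -- not the patch text.  The
function and variable names (tuple_all_visible, check_tuple_with_recheck)
and the WARNING are mine, and I'm assuming the 9.6
GetOldestXmin(Relation, bool) signature.  The idea is to report a tuple
on an all-visible page only after retrying with a freshly computed
horizon:

#include "postgres.h"

#include "access/htup_details.h"
#include "access/transam.h"
#include "storage/bufmgr.h"
#include "storage/procarray.h"
#include "utils/rel.h"
#include "utils/tqual.h"

/* Is this tuple visible to everyone, judged against the given horizon? */
static bool
tuple_all_visible(HeapTuple tup, TransactionId oldest_xmin, Buffer buf)
{
	/* It must be live ... */
	if (HeapTupleSatisfiesVacuum(tup, oldest_xmin, buf) != HEAPTUPLE_LIVE)
		return false;

	/* ... and its xmin must already be older than the horizon. */
	return TransactionIdPrecedes(HeapTupleHeaderGetXmin(tup->t_data),
								 oldest_xmin);
}

/*
 * Only complain about a tuple on an all-visible page after retrying with
 * a freshly computed horizon, so that a concurrent VACUUM that used a
 * newer OldestXmin than ours doesn't produce a false positive.
 */
static void
check_tuple_with_recheck(Relation rel, HeapTuple tup, Buffer buf,
						 TransactionId oldest_xmin)
{
	TransactionId recomputed;

	if (tuple_all_visible(tup, oldest_xmin, buf))
		return;

	/* Not visible under our cached horizon: recompute and try again. */
	recomputed = GetOldestXmin(rel, true);

	if (TransactionIdPrecedes(oldest_xmin, recomputed) &&
		tuple_all_visible(tup, recomputed, buf))
		return;					/* only a stale horizon, not corruption */

	elog(WARNING, "tuple not all-visible on an all-visible page");
}

In the case I can still reproduce, that last step doesn't help:
GetOldestXmin hands back exactly the value we started with, and that
value equals the tuple's xmax, so the tuple still looks
delete-in-progress and gets reported.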

Thomas Munro
