I’m unclear on what is being repro’d in 9.6. Are you getting the duplicate-rows problem or just the reindex problem? Are you testing with asserts enabled (I’m not)?
If you are getting the dup rows, consider the code in the block in heapam.c that starts with the comment “replace multi by update xid”. When I repro this, I find that MultiXactIdGetUpdateXid() returns 0. There is an updater in the multixact array; however, the status is MultiXactStatusForNoKeyUpdate, not MultiXactStatusNoKeyUpdate. I assume this is a preliminary status used before the following row in the HOT chain has its multixact set to NoKeyUpdate. Since 0 is returned, it does precede cutoff_xid, and TransactionIdDidCommit(0) returns false. This ends up aborting the multixact on the row even though the real xid is committed. That sets XMAX to 0, and the row becomes visible as one of the dups. Interestingly, the real xid of the updater is 122944 and the cutoff_xid is 122945. I’m still debugging, but I’m starting late, so I’m passing this incomplete info along now.

On 10/7/17, 4:25 PM, "Alvaro Herrera" <alvhe...@alvh.no-ip.org> wrote:

    Peter Geoghegan wrote:
    > On Sat, Oct 7, 2017 at 1:31 AM, Alvaro Herrera <alvhe...@alvh.no-ip.org> wrote:
    > >> As you must have seen, Alvaro said he has a variant of Dan's original
    > >> script that demonstrates that a problem remains, at least on 9.6+,
    > >> even with today's fix. I think it's the stress-test that plays with
    > >> fillfactor, many clients, etc [1].
    > >
    > > I just execute setup.sql once and then run this shell command,
    > >
    > > while :; do
    > > psql -e -P pager=off -f ./repro.sql
    > > for i in `seq 1 5`; do
    > > psql -P pager=off -e --no-psqlrc -f ./lock.sql &
    > > done
    > > wait && psql -P pager=off -e --no-psqlrc -f ./reindex.sql
    > > psql -P pager=off -e --no-psqlrc -f ./report.sql
    > > echo "done"
    > > done
    >
    > I cannot reproduce the problem on my personal machine using this
    > script/stress-test. I tried to do so on the master branch git tip.
    > This reinforces the theory that there is some timing sensitivity,
    > because the remaining race condition is very narrow.

    Hmm, I think I added a random sleep (max. 100ms) right after the
    HeapTupleSatisfiesVacuum call in vacuumlazy.c (lazy_scan_heap), and
    that makes the race easier to hit.

    -- 
    Álvaro Herrera                https://www.2ndQuadrant.com/
    PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers