On Thu, Jan 12, 2017 at 7:23 PM, Amit Kapila <amit.kapil...@gmail.com> wrote:

> On Fri, Jan 13, 2017 at 1:04 AM, Jesper Pedersen
> <jesper.peder...@redhat.com> wrote:
> > On 12/27/2016 01:58 AM, Amit Kapila wrote:
> >>
> >> After recent commits 7819ba1e and 25216c98, this patch requires a
> >> rebase.  Attached is the rebased patch.
> >>
> >
> > This needs a rebase after commit e898437.
> >
> Attached find the rebased patch.
> Thanks!

I've put this through a lot of crash-recovery testing using different
variations of my previously posted testing harness, and have not had any
problems.

I occasionally get deadlocks which are properly detected.  I think this is
just a user-space problem, not a bug, but will describe it anyway just in
case.

What happens is that connections occasionally create 10,000 nuisance tuples
all using the same randomly chosen negative integer index value (the one
the hash index is on), and then some time later delete those tuples using
the memorized negative index value, to force the page split and squeeze
code to get exercised.  If two connections happen to choose the same
negative index value for their nuisance tuples and then try to delete
"their" tuples at the same time, they end up deleting the same 20,000
"nuisance" tuples concurrently with a bucket split/squeeze/something.  They
might then see the tuples in a different order from each other and so
deadlock, each waiting on the other's transaction lock due to "locked"
tuples.  Is that a plausible explanation?
