Hm, in my example there are no INSERTs in the two conflicting transactions.
As far as I can see, the suggestion regarding the ON INSERT trigger would
therefore have no effect. The comment from trigger.c is also about INSERT,
so I can't see how it applies to us.

I don't understand exactly why this deadlock occurs, but the one thing I
really cannot understand is why process 2 is not allowed to update a row
it has already updated earlier in the same transaction.

In general, if a transaction holds a "write row lock" (or whatever it is
called in Postgres), i.e. the exclusive right to modify that row in the
table, shouldn't the same transaction always be allowed to update the same
row again at a later stage? I understand that the foreign key is the reason
for the conflict, but process 2 doesn't attempt to modify the foreign key
data; it only updates table B.
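
As a trivial illustration of the general rule I have in mind, without any
foreign key involved (table and column names made up):

    CREATE TABLE t (id int PRIMARY KEY, val int);
    INSERT INTO t VALUES (1, 0);

    BEGIN;
    UPDATE t SET val = val + 1 WHERE id = 1;  -- takes the row lock
    UPDATE t SET val = val + 1 WHERE id = 1;  -- same transaction: no block, no deadlock
    COMMIT;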

It just doesn't make sense to abort process 2 with a deadlock in my example.
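
For concreteness, here is a minimal sketch of the interleaving as I
understand it from Kevin's and Tom's descriptions below. The schema is made
up (the real one is in the attachment Kevin refers to), and I'm assuming the
FK is deferred, as the quoted trigger.c comment suggests:

    -- Hypothetical minimal schema; names are illustrative only.
    CREATE TABLE a (id int PRIMARY KEY, x int);
    CREATE TABLE b (id   int PRIMARY KEY,
                    a_id int REFERENCES a (id) DEFERRABLE INITIALLY DEFERRED,
                    val  int);
    INSERT INTO a VALUES (1, 0);
    INSERT INTO b VALUES (1, 1, 0);

    -- Process 1:
    BEGIN;
    UPDATE a SET x = 1 WHERE id = 1;    -- exclusive lock on the row in A

    -- Process 2:
    BEGIN;
    UPDATE b SET val = 1 WHERE id = 1;  -- FK column unchanged, so the sharelock
                                        -- on the referenced A row is skipped

    -- Process 1:
    UPDATE b SET val = 2 WHERE id = 1;  -- blocks on process 2's lock on the B row

    -- Process 2:
    UPDATE b SET val = 2 WHERE id = 1;  -- this time the FK check is not skipped;
                                        -- it wants a sharelock on the A row, which
                                        -- process 1 holds exclusively while already
                                        -- waiting for process 2 => deadlock, and
                                        -- process 2 is aborted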

(If it helps, we would be willing to offer a bounty to anyone who takes on
the task of solving this problem.)

Best regards,

Joel Jacobson
Glue Finance


2010/8/20 Tom Lane <t...@sss.pgh.pa.us>

> "Kevin Grittner" <kevin.gritt...@wicourts.gov> writes:
> > The surprising thing is that a particular row is (using the
> > identifiers from the attachment):
>
> > Process 2 updates a particular row without blocking.
> > Process 1 updates the same row, which blocks.
> > Process 2 updates the same row again (with *exactly* the same UPDATE
> > statement), which fails with a deadlock.
>
> > I'm not sure I consider that a bug, but it moves the needle on the
> > astonishment meter.
>
> OK, I looked a bit closer.  The first update in process 2 is changing
> a row in B that has an FK reference to an already-modified row in A.
> The only reason that doesn't block is that we optimize away taking a
> sharelock on the referenced row if the update doesn't change the FK
> column(s), as this doesn't.  However, the *second* update doesn't
> get the benefit of that optimization, as per this comment in trigger.c:
>
>                     * There is one exception when updating FK tables: if the
>                     * updated row was inserted by our own transaction and the
>                     * FK is deferred, we still need to fire the trigger. This
>                     * is because our UPDATE will invalidate the INSERT so the
>                     * end-of-transaction INSERT RI trigger will not do
>                     * anything, so we have to do the check for the UPDATE
>                     * anyway.
>
> So it goes and waits for sharelock on the A row, and then you have a
> deadlock because process 1 has exclusive lock on that row and is already
> blocked waiting for process 2.
>
> The Glue guys aren't the first to complain of this behavior, so it'd
> be nice to improve it.
>
> If we knew that the already-updated row was one for which we'd been able
> to optimize away the FK check, then we could do so again on the second
> update (assuming it still didn't change the FK columns), but I don't see
> any practical way to know that.  We only have our hands on the current
> update's old and new tuples, not on previous versions; and there's no
> convenient way to find the previous version because the update ctid
> links run the other way.
>
> [ thinks for awhile... ]  Conceivably we could get around this by
> programming the ON INSERT trigger to chase forward to the latest live
> row version, rather than just doing nothing when the initially inserted
> row has been outdated.  It'd be a pretty ticklish thing to get right,
> though.
>
>                        regards, tom lane
>



-- 
Best regards,

Joel Jacobson
Glue Finance

E: j...@gluefinance.com
T: +46 70 360 38 01

Postal address:
Glue Finance AB
Box  549
114 11  Stockholm
Sweden

Visiting address:
Glue Finance AB
Birger Jarlsgatan 14
114 34 Stockholm
Sweden
