On 19.07.2011 19:22, Simon Riggs wrote:
> Remove O(N^2) performance issue with multiple SAVEPOINTs.
> Subtransaction locks now released en masse at main commit, rather than
> repeatedly re-scanning for locks as we ascend the nested transaction tree.
> Split transaction state TBLOCK_SUBEND into two states, TBLOCK_SUBCOMMIT
> and TBLOCK_SUBRELEASE to allow the commit path to be optimised using
> the existing code in ResourceOwnerRelease() which appears to have been
> intended for this usage, judging from comments therein.

CommitSubTransaction(true) does this:

ResourceOwnerRelease(s->curTransactionOwner,
                     RESOURCE_RELEASE_LOCKS,
                     true,
                     isTopLevel /* == true */);
...
ResourceOwnerDelete(s->curTransactionOwner);

Because isTopLevel is passed as true, ResourceOwnerRelease() doesn't release or transfer the locks belonging to the resource owner. After the call, they still point to s->curTransactionOwner. Then the resource owner is deleted, so after those two calls the locks still hold pointers to the now-pfree'd ResourceOwner object.

Looking at lock.c, we apparently never dereference the LOCALLOCKOWNER.owner field. Nevertheless, a dangling pointer like that seems like a recipe for trouble. After releasing all subtransactions we still fire deferred triggers, for example, which can do arbitrarily complex things. Among other things, they can allocate new resource owners, and if you're really unlucky one might get allocated at the same address as the already-pfree'd resource owner. I'm not sure what would happen then, but it can't be good.

Instead of leaving the locks dangling to an already-destroyed resource owner, how about assigning all locks directly to the top-level resource owner in one sweep? That'd still be much better than the old way of recursively reassigning them up the subtransaction tree, one level at a time.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
