Tom Lane wrote:
Simon was complaining a bit ago that we still have problems with
excessive contention for the ProcArrayLock, and that much of this stems
from the need for transaction exit to take that lock exclusively.
The lazy-XID patch, as committed, doesn't help that situation at all,
* Lock ProcArrayLock because that's what GetSnapshotData uses.
* You might assume that we can skip this step if we had no
* transaction id assigned, because the failure case outlined
* in GetSnapshotData cannot happen in that case. This is true,
* but we *still* need the lock guarantee that two concurrent
* computations of the *oldest* xmin will get the same result.
I think the comment is correct in principle - if we remove the oldest
xmin without locking, then two concurrent OldestXmin calculations
will get two different results. The question is whether that has any
negative effects, though.
That leaves xmin, which AFAICS is
only interesting for the computations of GetOldestXmin() and
RecentGlobalXmin. And I assert it doesn't matter if those numbers
advance asynchronously, so long as they never go backward.
Yes, the xmin is surely the only field that might need the locking.
It was this comment in GetSnapshotData that made me keep the locking
in the first place:
* It is sufficient to get shared lock on ProcArrayLock, even if we are
* computing a serializable snapshot and therefore will be setting
* MyProc->xmin. This is because any two backends that have overlapping
* shared holds on ProcArrayLock will certainly compute the same xmin
* (since no xact, in particular not the oldest, can exit the set of
* running transactions while we hold ProcArrayLock --- see further
* discussion just below). So it doesn't matter whether another backend
* concurrently doing GetSnapshotData or GetOldestXmin sees our xmin as
* set or not; he'd compute the same xmin for himself either way.
* (We are assuming here that xmin can be set and read atomically,
* just like xid.)
But now that I read this again, I think that comment is just misleading -
especially the part "So it doesn't matter whether another backend concurrently
doing GetSnapshotData or GetOldestXmin sees our xmin as set or not; he'd compute
the same xmin for himself either way."
This sounds as if the Proc->xmin that *one* backend announces had
influence over the Proc->xmin that *another* backend might compute.
Which isn't true - it only influences the GlobalXmin that another backend might
compute.
So I believe you're right, and we can skip taking the lock in the no-xid case.
I actually think that with just a little more work, we can go even further,
and get rid of the ReadNewTransactionId() call completely during snapshotting.
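In pseudocode, the transaction-exit path could then look roughly like this
(field, function and lock names follow PostgreSQL's procarray.c, but this is
an illustrative sketch, not the committed code):

```
if (TransactionIdIsValid(MyProc->xid))
{
    /* We advertised an xid: removing it changes what other backends
     * see as running, so ProcArrayLock must be held exclusively. */
    LWLockAcquire(ProcArrayLock, LW_EXCLUSIVE);
    MyProc->xid  = InvalidTransactionId;
    MyProc->xmin = InvalidTransactionId;
    LWLockRelease(ProcArrayLock);
}
else
{
    /* No xid was ever assigned: clearing xmin can only let concurrently
     * computed global xmins advance, never move them backward, so we
     * can skip the lock entirely. */
    MyProc->xmin = InvalidTransactionId;
}
```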
There are two things we must ensure when it comes to snapshots and commits:
1) A transaction must either be not in progress, be in our snapshot, or
have an xid >= xmax.
2) If transaction A sees B as committed, and B sees C as committed, then
A must see C as committed.
ad 1): We guarantee that by storing the xid in the proc array before releasing
the XidGenLock. Therefore, when we later obtain our xmax value,
we can be sure that we see all xacts in the proc array that have an
xid < xmax and are in progress.
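The ordering that gives us (1) can be sketched like this (names follow
PostgreSQL's varsup.c and procarray.c, but this is pseudocode, not the
actual implementation):

```
/* xid assignment (sketch) */
LWLockAcquire(XidGenLock, LW_EXCLUSIVE);
xid = next_xid++;            /* advance the xid counter               */
MyProc->xid = xid;           /* publish in the proc array *before*... */
LWLockRelease(XidGenLock);   /* ...releasing XidGenLock               */

/* snapshotting (sketch) */
LWLockAcquire(XidGenLock, LW_SHARED);
xmax = next_xid;             /* ReadNewTransactionId()                */
LWLockRelease(XidGenLock);
/* => any in-progress xact with xid < xmax is already visible in the
 *    proc array by the time we scan it */
```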
ad 2): We guarantee that by serializing snapshotting against committing. Since
we use ReadNewTransactionId() as the snapshot's xmax this implies that
we take the ProcArrayLock *before* reading our xmax value.
Now, ReadNewTransactionId() is actually larger than necessary as an xmax.
The minimal xmax that we can set is "largest committed xid"+1. We can
easily track that value during commit when we hold the ProcArrayLock.
(If we have no xid, and therefore don't have to hold the lock, we also
don't need to update that value.)
If we used this "LatestCommittedXid" as xmax, we'd still guarantee (2),
but without having to hold the XidGenLock during GetSnapshotData().
I wouldn't have dared to suggest this for 8.3, but since you came up with
locking improvements in the first place... ;-)
greetings, Florian Pflug