I just spent 20+ hours debugging an elusive problem which only happened
under heavy concurrency. Slight changes to the code made it
disappear. In the end it turned out that gcc liked to move *one*
instruction across the SpinLockRelease() - and only sometimes. Unrelated
changes made the problem vanish.
The relevant code was:
volatile LWLock *lock = l;
dlist_push_head((dlist_head *) &lock->waiters, &MyProc->lwWaitLink);
Now you could argue that it's my fault because of two things:
a) The code casts away the volatile from lock.
b) MyProc isn't volatile.
But a) isn't really avoidable because it'll otherwise generate compiler
warnings, and b) is done that way all over the tree. E.g. lwlock.c has
several copies of the following (note the non-volatility of proc):
volatile LWLock *lock = l;
PGPROC *proc = MyProc;
proc->lwWaiting = true;
proc->lwWaitMode = LW_WAIT_UNTIL_FREE;
proc->lwWaitLink = NULL;
/* waiters are added to the front of the queue */
proc->lwWaitLink = lock->head;
if (lock->head == NULL)
lock->tail = proc;
lock->head = proc;
/* Can release the mutex now */
There's nothing forcing the compiler not to move any of the proc->*
assignments past the SpinLockRelease(). And indeed, in my case it was
the store to lwWaitLink that was moved across the lock release.
I think it's just pure luck that there's no active bug (that we know of)
caused by this. I wouldn't be surprised if some of the dubious behaviour
we've seen was caused by similar issues.
Now, we can fix this and similar cases by more gratuitous use of
volatile. But for one, we're never going to find all such cases. For
another, it won't help *at all* on architectures with looser CPU-level
memory ordering.
I think we finally need to bite the bullet and make all S_UNLOCKs act
as compiler and memory barriers.
I'd previously gone through the list of S_UNLOCKs and found several
that were lacking. Most prominently, the default S_UNLOCK is just
#define S_UNLOCK(lock) (*((volatile slock_t *) (lock)) = 0)
which allows the compiler to move non-volatile accesses across it and
does nothing for CPU-level cache coherency.
I think we should rework things so that we fall back to
pg_write_barrier(), (*((volatile slock_t *) (lock)) = 0) instead of what
we have right now.
That'd require making barrier.h independent from s_lock.h, but I think
that'd be a pretty clear improvement. One open question is what happens
with SpinLockRelease() when barriers are implemented using spinlocks
and we thus need a barrier for the SpinLockRelease() itself.
Better ideas, other suggestions?
I'm now going to drink.
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services