With --disable-spinlocks, we need to know the number of spinlocks in the
system at startup, so that we can reserve enough semaphores to mimic the
spinlocks. It's calculated in SpinlockSemas():
/*
 * Report number of semaphores needed to support spinlocks.
 */
int
SpinlockSemas(void)
{
    /*
     * It would be cleaner to distribute this logic into the affected
     * modules, similar to the way shmem space estimation is handled.
     *
     * For now, though, we just need a few spinlocks (10 should be plenty)
     * plus one for each LWLock and one for each buffer header. Plus one
     * for each XLog insertion slot in xlog.c.
     */
    return NumLWLocks() + NBuffers + 10 + NUM_XLOGINSERT_SLOTS;
}
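
For reference, the emulation itself is simple: with --disable-spinlocks,
each slock_t is backed by a PGSemaphore, which is why every spinlock costs
one semaphore. Roughly like this sketch (the real fallback lives in
spin.c; this is just the shape of it, using the PGSemaphore API):

#include "storage/pg_sema.h"
#include "storage/s_lock.h"

/*
 * Sketch: semaphore-backed spinlock primitives for --disable-spinlocks.
 */
void
s_init_lock_sema(volatile slock_t *lock)
{
    /* one semaphore per spinlock; this is what SpinlockSemas() sizes */
    PGSemaphoreCreate((PGSemaphore) lock);
}

int
tas_sema(volatile slock_t *lock)
{
    /* TAS convention: return 0 if the lock was acquired */
    return !PGSemaphoreTryLock((PGSemaphore) lock);
}

void
s_unlock_sema(volatile slock_t *lock)
{
    PGSemaphoreUnlock((PGSemaphore) lock);
}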
Ten spinlocks might've been plenty in 2001 when that comment was
written, but we have reached that number. I grepped the sources for
SpinLockInit, and found that we use:
1 spinlock for each LWLock
1 spinlock for each buffer
1 spinlock for each wal sender process slot (max_wal_senders in total)
1 spinlock for each partitioned hash table (2 in predicate.c, 2 in
  lock.c, 1 in buf_table.c)
1 spinlock in XLogCtl->info_lck
1 spinlock for WAL receiver (WalRcv->mutex)
1 spinlock for hot standby xid tracking (procArray->known_assigned_xids_lck)
1 spinlock for shared memory allocator (ShmemLock)
1 spinlock for shared inval messaging (shmInvalBuffer->msgnumLock)
1 spinlock for the proc array freelist (ProcStructLock)
1 spinlock for fast-path lock mechanism (FastPathStrongRelationLocks->mutex)
1 spinlock for the checkpointer (CheckpointerShmem->ckpt_lck)
That's a fixed number of 13 spinlocks, plus 1 for each LWLock, buffer,
and wal sender.
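
Each of those grep hits follows the same pattern: an slock_t embedded in
a shared-memory struct, initialized once with SpinLockInit(). A sketch of
what such a module looks like (MyCounterShmem and its functions are
made-up names, just to show the shape):

#include "storage/spin.h"

typedef struct MyCounterShmem
{
    slock_t     mutex;          /* protects counter */
    uint64      counter;
} MyCounterShmem;

static MyCounterShmem *MyCounter;

void
MyCounterShmemInit(void)
{
    /* this SpinLockInit() costs one semaphore with --disable-spinlocks */
    SpinLockInit(&MyCounter->mutex);
    MyCounter->counter = 0;
}

void
MyCounterIncrement(void)
{
    SpinLockAcquire(&MyCounter->mutex);
    MyCounter->counter++;
    SpinLockRelease(&MyCounter->mutex);
}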
I'll go and adjust SpinlockSemas() to take the walsenders into account,
and bump the fixed number from 10 to 30. That should be enough headroom
for the next ten years.
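
Something like this, a sketch of the adjustment described above (not a
final patch; max_wal_senders is the existing GUC variable from
walsender.h):

int
SpinlockSemas(void)
{
    int         nsemas;

    /*
     * It would be cleaner to distribute this logic into the affected
     * modules, similar to the way shmem space estimation is handled.
     */
    nsemas = NumLWLocks();          /* one for each LWLock */
    nsemas += NBuffers;             /* one for each buffer header */
    nsemas += max_wal_senders;      /* one for each walsender slot */
    nsemas += NUM_XLOGINSERT_SLOTS; /* one for each XLog insertion slot */
    nsemas += 30;                   /* fixed-number users, with headroom */

    return nsemas;
}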
- Heikki