Re: [HACKERS] spinlock contention

2011-07-21 Thread Florian Pflug
On Jul18, 2011, at 04:36 , Robert Haas wrote: > On Fri, Jul 8, 2011 at 6:02 AM, Florian Pflug wrote: >>> I don't want to fiddle with your git repo, but if you attach a patch >>> that applies to the master branch I'll give it a spin if I have time. >> >> Patch attached. >> >> Beware that it needs

Re: [HACKERS] spinlock contention

2011-07-17 Thread Robert Haas
On Fri, Jul 8, 2011 at 6:02 AM, Florian Pflug wrote: >> I don't want to fiddle with your git repo, but if you attach a patch >> that applies to the master branch I'll give it a spin if I have time. > > Patch attached. > > Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition >

Re: [HACKERS] spinlock contention

2011-07-13 Thread Florian Pflug
On Jul13, 2011, at 22:04 , Robert Haas wrote: > On Jul 12, 2011, at 8:10 PM, Florian Pflug wrote: >> I wonder if clearing the waiters-present bit only upon clearing the >> queue completely is necessary for correctness. Wouldn't it be OK >> to clear the bit after waking up at least one locker? That
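To make the point under discussion concrete, here is a minimal sketch of a release path for a lock word carrying a waiters-present bit. All names and the word layout are invented for illustration; this is not the patch's actual code.

#include <stdint.h>

/* Sketch only: the top bit of the lock word flags "the wait queue may
 * be non-empty"; the low bits count shared holders. */
#define WAITERS_FLAG ((uint32_t) 1 << 31)

static uint32_t lock_word;

static void
wake_one_waiter(void)
{
    /* dequeue one waiter and signal its semaphore (elided) */
}

static void
release_shared(void)
{
    /* Drop our share count atomically (GCC 4.1+ builtin). */
    uint32_t prev = __sync_fetch_and_sub(&lock_word, 1);

    if ((prev & WAITERS_FLAG) && (prev & ~WAITERS_FLAG) == 1)
    {
        /* We were the last holder and waiters are flagged.  The open
         * question in the thread: may WAITERS_FLAG be cleared after
         * waking just one locker, or only once the queue is verifiably
         * empty?  Clear it too early and a waiter that queued
         * concurrently sleeps with the flag unset -- a lost wakeup. */
        wake_one_waiter();
    }
}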

Re: [HACKERS] spinlock contention

2011-07-13 Thread Robert Haas
On Jul 12, 2011, at 8:10 PM, Florian Pflug wrote: > On Jul13, 2011, at 00:10 , Robert Haas wrote: >> On Jul 12, 2011, at 8:03 AM, Florian Pflug wrote: >>> The algorithm is quite straightforward, if one assumes a lock-free >>> implementation of a queue (More on that below) >> >> This is similar

Re: [HACKERS] spinlock contention

2011-07-12 Thread Florian Pflug
On Jul13, 2011, at 00:10 , Robert Haas wrote: > On Jul 12, 2011, at 8:03 AM, Florian Pflug wrote: >> The algorithm is quite straightforward, if one assumes a lock-free >> implementation of a queue (More on that below) > > This is similar to the CAS-based LWLocks I played around with, though > I
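For readers following along, one simple way to realize the lock-free queue the algorithm assumes is a CAS-linked list with a detach-all pop, sketched below with GCC builtins. This is purely illustrative; Florian's actual structure may well differ (e.g. by preserving FIFO order).

#include <stddef.h>

typedef struct WaitNode
{
    struct WaitNode *next;
    int              backend_id;
} WaitNode;

static WaitNode *queue_head;    /* shared across backends */

static void
queue_push(WaitNode *node)
{
    WaitNode *head;

    do
    {
        head = queue_head;
        node->next = head;
        /* CAS fails if another backend pushed meanwhile; retry. */
    } while (!__sync_bool_compare_and_swap(&queue_head, head, node));
}

static WaitNode *
queue_detach_all(void)
{
    /* Atomically swap in NULL and hand the whole list to the caller;
     * popping single nodes instead would need ABA protection. */
    return __sync_lock_test_and_set(&queue_head, NULL);
}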

Re: [HACKERS] spinlock contention

2011-07-12 Thread Robert Haas
On Jul 12, 2011, at 8:03 AM, Florian Pflug wrote: > The algorithm is quite straightforward, if one assumes a lock-free > implementation of a queue (More on that below) This is similar to the CAS-based LWLocks I played around with, though I didn't use a lock-free queue. I think I probably need
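A minimal sketch of the CAS-based acquire path being referred to: the entire lock state lives in one word updated by a single compare-and-swap, rather than being guarded by a spinlock. The field layout is invented for illustration.

#include <stdbool.h>
#include <stdint.h>

#define EXCLUSIVE_BIT ((uint32_t) 1 << 31)  /* low bits: share count */

static bool
lw_lock_shared_cas(uint32_t *lock)
{
    uint32_t old = *lock;

    while ((old & EXCLUSIVE_BIT) == 0)
    {
        /* No exclusive holder: try to bump the share count. */
        uint32_t seen = __sync_val_compare_and_swap(lock, old, old + 1);

        if (seen == old)
            return true;        /* acquired with a single CAS */
        old = seen;             /* lost a race; retry with new value */
    }
    return false;               /* exclusive holder: caller must queue */
}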

Re: [HACKERS] spinlock contention

2011-07-12 Thread Florian Pflug
On Jul7, 2011, at 03:35 , Robert Haas wrote: > Some poking around suggests that the problem isn't that > spinlocks are routinely contended - it seems that we nearly always get > the spinlock right off the bat. I'm wondering if the problem may be > not so much that we have continuous spinlock conte

Re: [HACKERS] spinlock contention

2011-07-08 Thread Florian Pflug
On Jul8, 2011, at 22:27 , Stefan Kaltenbrunner wrote: > On 07/08/2011 04:21 PM, Tom Lane wrote: >> Florian Pflug writes: >>> Patch attached. >> >>> Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition >>> spin lock instead of "locked xadd" to increment the shared counters. >

Re: [HACKERS] spinlock contention

2011-07-08 Thread Stefan Kaltenbrunner
On 07/08/2011 04:21 PM, Tom Lane wrote: > Florian Pflug writes: >> Patch attached. > >> Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition >> spin lock instead of "locked xadd" to increment the shared counters. > > That's already sufficient reason to reject the patch. No

Re: [HACKERS] spinlock contention

2011-07-08 Thread Florian Pflug
On Jul8, 2011, at 16:21 , Tom Lane wrote: > Florian Pflug writes: >> Patch attached. > >> Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition >> spin lock instead of "locked xadd" to increment the shared counters. > > That's already sufficient reason to reject the patch.

Re: [HACKERS] spinlock contention

2011-07-08 Thread Tom Lane
Florian Pflug writes: > Patch attached. > Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition > spin lock instead of "locked xadd" to increment the shared counters. That's already sufficient reason to reject the patch. Not everyone uses gcc, let alone very recent versions
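The portability point in a nutshell, as a hedged sketch (names invented; the real patch would gate this from configure rather than an inline version check): __sync_fetch_and_add() compiles to a locked xadd on x86 but only exists in GCC 4.1 and later, so other compilers get the slower spinlock-guarded path.

#include <stdint.h>
#include "storage/spin.h"       /* PostgreSQL's slock_t; SpinLockInit()
                                 * at startup elided */

static slock_t counter_lock;    /* per-partition in the actual patch */

static inline void
counter_add(uint32_t *ptr, uint32_t n)
{
#if defined(__GNUC__) && (__GNUC__ > 4 || \
    (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))
    /* One "lock xadd" instruction on x86. */
    (void) __sync_fetch_and_add(ptr, n);
#else
    /* Pre-4.1 GCC or another compiler: spinlock-guarded add, the
     * fallback Tom is objecting to having to rely on. */
    SpinLockAcquire(&counter_lock);
    *ptr += n;
    SpinLockRelease(&counter_lock);
#endif
}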

Re: [HACKERS] spinlock contention

2011-07-08 Thread Florian Pflug
On Jul7, 2011, at 18:09 , Robert Haas wrote: > On Thu, Jul 7, 2011 at 5:54 AM, Florian Pflug wrote: >> In effect, the resulting thing is an LWLock with a partitioned shared >> counter. The partition one backend operates on for shared locks is >> determined by its backend id. >> >> I've added the

Re: [HACKERS] spinlock contention

2011-07-07 Thread Robert Haas
On Thu, Jul 7, 2011 at 5:54 AM, Florian Pflug wrote: > In effect, the resulting thing is an LWLock with a partitioned shared > counter. The partition one backend operates on for shared locks is > determined by its backend id. > > I've added the implementation to the lock benchmarking tool at >  ht
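A condensed sketch of the partitioned-counter idea described above (partition count and padding are invented for illustration): each backend bumps the slot selected by its backend id, so concurrent shared lockers rarely touch the same cache line; the price is that an exclusive locker must inspect every partition.

#include <stdint.h>

#define NUM_PARTITIONS 16           /* illustrative choice */

typedef struct
{
    uint32_t count;
    char     pad[64 - sizeof(uint32_t)];    /* one slot per line */
} CounterPartition;

static CounterPartition shared_count[NUM_PARTITIONS];

static void
acquire_shared_fastpath(int my_backend_id)
{
    /* Shared fast path: one atomic add on "our" partition. */
    __sync_fetch_and_add(&shared_count[my_backend_id % NUM_PARTITIONS].count, 1);
}

static uint32_t
count_shared_holders(void)
{
    /* Exclusive path: sum all partitions -- the cost this design
     * trades for the cheap shared path.  (Racy reads; fine for a
     * sketch, the real thing needs more care.) */
    uint32_t total = 0;

    for (int i = 0; i < NUM_PARTITIONS; i++)
        total += shared_count[i].count;
    return total;
}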

Re: [HACKERS] spinlock contention

2011-07-07 Thread Florian Pflug
On Jul7, 2011, at 03:35 , Robert Haas wrote: > On Thu, Jun 23, 2011 at 11:42 AM, Robert Haas wrote: >> On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote: >>> On Jun12, 2011, at 23:39 , Robert Haas wrote: >>>> So, the majority (60%) of the excess spinning appears to be due to SInvalReadLoc

Re: [HACKERS] spinlock contention

2011-07-06 Thread Robert Haas
On Thu, Jun 23, 2011 at 11:42 AM, Robert Haas wrote: > On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote: >> On Jun12, 2011, at 23:39 , Robert Haas wrote: >>> So, the majority (60%) of the excess spinning appears to be due to >>> SInvalReadLock.  A good chunk are due to ProcArrayLock (25%). >>

Re: [HACKERS] spinlock contention

2011-06-28 Thread Robert Haas
On Tue, Jun 28, 2011 at 8:11 PM, Florian Pflug wrote: >> I wrote a little script to show to reorganize this data in a >> possibly-easier-to-understand format - ordering each column from >> lowest to highest, and showing each algorithm as a multiple of the >> cheapest value for that column: > > If

Re: [HACKERS] spinlock contention

2011-06-28 Thread Florian Pflug
On Jun28, 2011, at 22:18 , Robert Haas wrote: > On Tue, Jun 28, 2011 at 2:33 PM, Florian Pflug wrote: >> [ testing of various spinlock implementations ] > > I set T=30 and N="1 2 4 8 16 32" and tried this out on a 32-core > loaner from Nate Boley: Cool, thanks! > 100 counter increments per cycl

Re: [HACKERS] spinlock contention

2011-06-28 Thread Robert Haas
On Tue, Jun 28, 2011 at 5:55 PM, Florian Pflug wrote: > On Jun28, 2011, at 23:48 , Robert Haas wrote: >> On Tue, Jun 28, 2011 at 5:33 PM, Merlin Moncure wrote: >>> On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas wrote: user-32: none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2)

Re: [HACKERS] spinlock contention

2011-06-28 Thread Florian Pflug
On Jun28, 2011, at 23:48 , Robert Haas wrote: > On Tue, Jun 28, 2011 at 5:33 PM, Merlin Moncure wrote: >> On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas wrote: >>> user-32: >>> none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2),pg_lwlock(588.2),spin(1264.7) >> >> I may not be following

Re: [HACKERS] spinlock contention

2011-06-28 Thread Robert Haas
On Tue, Jun 28, 2011 at 5:33 PM, Merlin Moncure wrote: > On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas wrote: >> user-32: >> none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2),pg_lwlock(588.2),spin(1264.7) > > I may not be following all this correctly, but doesn't this suggest a > huge

Re: [HACKERS] spinlock contention

2011-06-28 Thread Merlin Moncure
On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas wrote: > user-32: > none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2),pg_lwlock(588.2),spin(1264.7) I may not be following all this correctly, but doesn't this suggest a huge potential upside for the cas based patch you posted upthread when

Re: [HACKERS] spinlock contention

2011-06-28 Thread Robert Haas
On Tue, Jun 28, 2011 at 2:33 PM, Florian Pflug wrote: > [ testing of various spinlock implementations ] I set T=30 and N="1 2 4 8 16 32" and tried this out on a 32-core loaner from Nate Boley: 100 counter increments per cycle worker 1 2 4 8
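Reconstructed from the description only (not Florian's actual tool), the benchmark's shape is roughly: N worker threads run for T seconds, each cycle taking the lock variant under test and doing 100 counter increments under it, with cycles per second as the score.

#include <pthread.h>
#include <stdint.h>

#define INCREMENTS_PER_CYCLE 100

/* Supplied by whichever variant is under test: none, spin,
 * pg_lwlock, pg_lwlock_cas, atomicinc, ... */
extern void lock_acquire(void);
extern void lock_release(void);

static volatile int keep_running = 1;
static uint64_t counter;        /* protected by the lock under test */

static void *
worker(void *arg)
{
    uint64_t cycles = 0;

    (void) arg;
    while (keep_running)
    {
        lock_acquire();
        for (int i = 0; i < INCREMENTS_PER_CYCLE; i++)
            counter++;
        lock_release();
        cycles++;
    }
    /* Driver (elided) spawns N of these with pthread_create(), sleeps
     * T seconds, clears keep_running, and sums the returned cycles. */
    return (void *) (uintptr_t) cycles;
}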

Re: [HACKERS] spinlock contention

2011-06-28 Thread Florian Pflug
On Jun23, 2011, at 23:40 , Robert Haas wrote: >>> I tried rewriting the LWLocks using CAS. It actually seems to make >>> things slightly worse on the tests I've done so far, perhaps because I >>> didn't make it respect spins_per_delay. Perhaps fetch-and-add would >>> be better, but I'm not holdin
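For the fetch-and-add idea floated here, a hedged sketch (layout invented): optimistically bump the share count with a single xadd and roll back if an exclusive holder turns out to be present. Unlike a CAS loop, the xadd itself never needs to retry.

#include <stdbool.h>
#include <stdint.h>

#define EXCLUSIVE_BIT ((uint32_t) 1 << 31)

static bool
lw_lock_shared_xadd(uint32_t *lock)
{
    uint32_t prev = __sync_fetch_and_add(lock, 1);

    if (prev & EXCLUSIVE_BIT)
    {
        /* Exclusive holder present: undo our increment and let the
         * caller fall through to the queue-and-sleep path. */
        (void) __sync_fetch_and_sub(lock, 1);
        return false;
    }
    return true;
}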

Re: [HACKERS] spinlock contention

2011-06-27 Thread Robert Haas
On Sat, Jun 25, 2011 at 8:26 PM, Greg Stark wrote: > On Thu, Jun 23, 2011 at 4:42 PM, Robert Haas wrote: >> ProcArrayLock looks like a tougher nut to crack - there's simply no >> way, with the system we have right now, that you can take a snapshot >> without locking the list of running processes.

Re: [HACKERS] spinlock contention

2011-06-25 Thread Greg Stark
On Thu, Jun 23, 2011 at 4:42 PM, Robert Haas wrote: > ProcArrayLock looks like a tougher nut to crack - there's simply no > way, with the system we have right now, that you can take a snapshot > without locking the list of running processes.  I'm not sure what to > do about that, but we're probabl

Re: [HACKERS] spinlock contention

2011-06-23 Thread Robert Haas
On Thu, Jun 23, 2011 at 5:35 PM, Florian Pflug wrote: >> Well, I'm sure there is some effect, but my experiments seem to >> indicate that it's not a very important one.  Again, please feel free >> to provide contrary evidence.  I think the basic issue is that - in >> the best possible case - paddi

Re: [HACKERS] spinlock contention

2011-06-23 Thread Florian Pflug
On Jun23, 2011, at 22:15 , Robert Haas wrote: > On Thu, Jun 23, 2011 at 2:34 PM, Florian Pflug wrote: >> It seems hard to believe that there isn't some effect of two locks >> sharing a cache line. There are architectures (even some of the >> Intel architectures, I believe) where cache lines are 32

Re: [HACKERS] spinlock contention

2011-06-23 Thread Robert Haas
On Thu, Jun 23, 2011 at 2:34 PM, Florian Pflug wrote: > It seems hard to believe that there isn't some effect of two locks > sharing a cache line. There are architectures (even some of the > Intel architectures, I believe) where cache lines are 32 bytes, though - > so measuring this certainly isn't
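One way to measure the effect being debated, sketched with invented names: pad each lock to a full cache line and compare against the packed layout. 64 bytes is the usual x86 line size; some older parts used 32 bytes, which is why the result is architecture-sensitive.

#include <stdint.h>

#define CACHE_LINE_SIZE 64

typedef struct
{
    uint32_t state;             /* stand-in for the real lock */
} MyLock;

typedef union
{
    MyLock lock;
    char   pad[CACHE_LINE_SIZE];    /* round each lock up to a line */
} MyLockPadded;

/* Padded: no two locks share a line.  Unpadded (sizeof(MyLock) == 4),
 * up to 16 locks share one 64-byte line and ping-pong it between
 * CPUs whenever different locks are touched concurrently. */
static MyLockPadded locks[128] __attribute__((aligned(CACHE_LINE_SIZE)));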

Re: [HACKERS] spinlock contention

2011-06-23 Thread Merlin Moncure
On Thu, Jun 23, 2011 at 1:34 PM, Florian Pflug wrote: > On Jun23, 2011, at 17:42 , Robert Haas wrote: >> On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote: >>> On Jun12, 2011, at 23:39 , Robert Haas wrote: So, the majority (60%) of the excess spinning appears to be due to SInvalReadL

Re: [HACKERS] spinlock contention

2011-06-23 Thread Robert Haas
On Thu, Jun 23, 2011 at 12:19 PM, Heikki Linnakangas wrote: > On 23.06.2011 18:42, Robert Haas wrote: >> ProcArrayLock looks like a tougher nut to crack - there's simply no >> way, with the system we have right now, that you can take a snapshot >> without locking the list of running processes.  I'

Re: [HACKERS] spinlock contention

2011-06-23 Thread Florian Pflug
On Jun23, 2011, at 17:42 , Robert Haas wrote: > On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote: >> On Jun12, 2011, at 23:39 , Robert Haas wrote: >>> So, the majority (60%) of the excess spinning appears to be due to >>> SInvalReadLock. A good chunk are due to ProcArrayLock (25%). >> >> Hm,

Re: [HACKERS] spinlock contention

2011-06-23 Thread Heikki Linnakangas
On 23.06.2011 18:42, Robert Haas wrote: > ProcArrayLock looks like a tougher nut to crack - there's simply no way, with the system we have right now, that you can take a snapshot without locking the list of running processes. I'm not sure what to do about that, but we're probably going to have to

[HACKERS] spinlock contention

2011-06-23 Thread Robert Haas
On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote: > On Jun12, 2011, at 23:39 , Robert Haas wrote: >> So, the majority (60%) of the excess spinning appears to be due to >> SInvalReadLock.  A good chunk are due to ProcArrayLock (25%). > > Hm, sizeof(LWLock) is 24 on X86-64, making sizeof(LWLockP
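For context, the arrangement under discussion, condensed from lwlock.c of that era (comments abbreviated; check the tree for the authoritative version): with sizeof(LWLock) == 24 on x86-64 the pad rounds each lock up to 32 bytes, so two LWLocks still share a 64-byte cache line.

typedef struct LWLock
{
    slock_t     mutex;          /* protects LWLock and queue of PGPROCs */
    bool        releaseOK;      /* T if ok to release waiters */
    char        exclusive;      /* # of exclusive holders (0 or 1) */
    int         shared;         /* # of shared holders (0..MaxBackends) */
    PGPROC     *head;           /* head of list of waiting PGPROCs */
    PGPROC     *tail;           /* tail of list of waiting PGPROCs */
    /* tail is undefined when head is NULL */
} LWLock;

/* 1+1+1 bytes, 1 byte padding, a 4-byte int, then two 8-byte
 * pointers: 24 bytes total on x86-64. */

#define LWLOCK_PADDED_SIZE  (sizeof(LWLock) <= 16 ? 16 : 32)

typedef union LWLockPadded
{
    LWLock      lock;
    char        pad[LWLOCK_PADDED_SIZE];
} LWLockPadded;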