On 2014-06-27 22:47:02 -0400, Robert Haas wrote:
> On Fri, Jun 27, 2014 at 2:00 PM, Andres Freund <and...@2ndquadrant.com> wrote:
> > On 2014-06-27 13:15:25 -0400, Robert Haas wrote:
> >> On Thu, Jun 26, 2014 at 3:04 PM, Andres Freund <and...@2ndquadrant.com> 
> >> wrote:
> >> > I don't really see use cases where it's not related data that's being
> >> > touched, but: the point is that the fast path (not using a spinlock)
> >> > might touch the atomic variable even while the slow path holds the
> >> > spinlock. So the slow path can't just use non-atomic operations on the
> >> > atomic variable.
> >> > Imagine something like releasing a buffer pin while somebody else is
> >> > doing something that requires holding the buffer header spinlock.
> >>
> >> If the atomic variable can be manipulated without the spinlock under
> >> *any* circumstances, then how is it a good idea to ever manipulate it
> >> with the spinlock?  That seems hard to reason about, and unnecessary.
> >> Critical sections for spinlocks should be short and contain only the
> >> instructions that need protection, and clearly the atomic op is not
> >> one of those.
> >
> > Imagine the situation for the buffer header spinlock, which is one of the
> > bigger performance issues at the moment. We could aim to replace all usages
> > of that with clever and complicated lockless logic, but it's hard.
> >
> > The IMO reasonable (and prototyped) way to do it is to make the common
> > paths lockless, but fall back to the spinlock for the more complicated
> > situations. For the buffer header that means that pin/unpin and buffer
> > lookup are lockless, but IO and changing the identity of a buffer still
> > require the spinlock. My attempts to avoid the latter basically required
> > a buffer-header-specific reimplementation of spinlocks.
> >
> > To make pin/unpin et al. safe without acquiring the spinlock, the pin count
> > and the flags had to be moved into one variable, so that incrementing the
> > pin count automatically gets you the flag bits back. If the buffer is
> > currently valid all is good; otherwise the spinlock needs to be acquired.
> > But for that to work you need to set flags while the spinlock is held.
> >
> > Possibly you can come up with ways to avoid that, but I am pretty damn
> > sure that the code will be less robust by forbidding atomics inside a
> > critical section, not the contrary. It's a good idea to avoid it, but
> > not at all costs.
> You might be right, but I'm not convinced.

Why? Is there anything I can do to convince you of this? Note that other
users of atomics (e.g. the Linux kernel) widely use atomics inside
spinlock-protected regions.

> Does the lwlock scalability patch work this way, too?

No. While debugging I moved one add/sub inside a critical section in the
current version, but it's not required at all. I generally think it makes
sense to document the suggestion of keeping atomics outside critical
sections, but not to make it a requirement.

> What I'm basically afraid of is that this will work fine in many cases
> but have really ugly failure modes.  That's pretty much what happens
> with spinlocks already - the overhead is insignificant at low levels
> of contention but as the spinlock starts to become contended the CPUs
> all fight over the cache line and performance goes to pot.  ISTM that
> making the spinlock critical section significantly longer by putting
> atomic ops in there could appear to win in some cases while being very
> bad in others.

Well, I'm not saying it's something I suggest doing all the time. But if
using an atomic op in the slow path allows you to remove the spinlock from
99% of the cases, I don't see it having a significant impact.
In most scenarios (where atomics aren't emulated, i.e. the platforms we
expect to be used in heavily concurrent cases) the spinlock and the atomic
will be on the same cache line, making stalls much less likely.
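
To make the scheme quoted above concrete, here is a minimal sketch in C11
atomics of a fast-path pin on a combined pin-count/flags word. The names,
bit layout, and helper are hypothetical (not the actual prototype's code);
the point is only that the fetch-add on the shared word returns the flag
bits "for free", telling the caller whether to fall back to the spinlocked
slow path:

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical layout: low 18 bits hold the pin count, the rest are flags. */
#define BUF_REFCOUNT_ONE   1u
#define BUF_REFCOUNT_MASK  ((1u << 18) - 1)
#define BM_VALID           (1u << 18)   /* illustrative flag bit */

typedef struct
{
    _Atomic uint32_t state;     /* pin count and flags packed together */
    /* a real buffer header would also carry a spinlock, buffer tag, ... */
} BufferDesc;

/*
 * Fast-path pin: bump the pin count and, because the flags live in the
 * same word, atomically learn whether the buffer was valid at that
 * instant. Returns true on success; on false the increment is undone and
 * the caller must retry under the buffer header spinlock.
 */
static bool
pin_buffer_fastpath(BufferDesc *buf)
{
    uint32_t old = atomic_fetch_add(&buf->state, BUF_REFCOUNT_ONE);

    if (old & BM_VALID)
        return true;            /* valid: our pin now protects the buffer */

    atomic_fetch_sub(&buf->state, BUF_REFCOUNT_ONE);
    return false;               /* not valid: take the slow path */
}
```

This is also why the slow path must still use atomic operations when it
updates the flags under the spinlock: a concurrent fast-path pin may touch
the same word at any time.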


-- 
 Andres Freund                     http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)