On Tue, Jan 08, 2013 at 05:32:41PM -0500, Rik van Riel wrote:
> Subject: x86,smp: proportional backoff for ticket spinlocks
>
> Simple fixed value proportional backoff for ticket spinlocks.
> By pounding on the cacheline with the spin lock less often,
> bus traffic is reduced. In cases of a data structure with
> embedded spinlock, the lock holder has a better chance of
> making progress.
>
> If we are next in line behind the current holder of the
> lock, we do a fast spin, so as not to waste any time when
> the lock is released.
>
> The number 50 is likely to be wrong for many setups, and
> this patch is mostly to illustrate the concept of proportional
> backoff. The next patch automatically tunes the delay value.
>
> Signed-off-by: Rik van Riel <r...@redhat.com>
> Signed-off-by: Michel Lespinasse <wal...@google.com>
> ---
Acked-by: Rafael Aquini <aqu...@redhat.com>

>  arch/x86/kernel/smp.c | 23 ++++++++++++++++++++---
>  1 files changed, 20 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
> index 20da354..aa743e9 100644
> --- a/arch/x86/kernel/smp.c
> +++ b/arch/x86/kernel/smp.c
> @@ -117,11 +117,28 @@ static bool smp_no_nmi_ipi = false;
>   */
>  void ticket_spin_lock_wait(arch_spinlock_t *lock, struct __raw_tickets inc)
>  {
> +	__ticket_t head = inc.head, ticket = inc.tail;
> +	__ticket_t waiters_ahead;
> +	unsigned loops;
> +
>  	for (;;) {
> -		cpu_relax();
> -		inc.head = ACCESS_ONCE(lock->tickets.head);
> +		waiters_ahead = ticket - head - 1;
> +		/*
> +		 * We are next after the current lock holder. Check often
> +		 * to avoid wasting time when the lock is released.
> +		 */
> +		if (!waiters_ahead) {
> +			do {
> +				cpu_relax();
> +			} while (ACCESS_ONCE(lock->tickets.head) != ticket);
> +			break;
> +		}
> +		loops = 50 * waiters_ahead;
> +		while (loops--)
> +			cpu_relax();
>
> -		if (inc.head == inc.tail)
> +		head = ACCESS_ONCE(lock->tickets.head);
> +		if (head == ticket)
>  			break;
>  	}
>  }