Re: [PATCH 0/6] x86: reduce paravirtualized spinlock overhead

2015-04-30 Thread Jeremy Fitzhardinge
On 04/30/2015 03:53 AM, Juergen Gross wrote:
 Paravirtualized spinlocks produce some overhead even if the kernel is
 running on bare metal. The main reason is the more complex locking
 and unlocking functions. Especially unlocking is no longer just one
 instruction, but so complex that it is no longer inlined.

 This patch series addresses this issue by adding two more pvops
 functions to reduce the size of the inlined spinlock functions. When
 running on bare metal unlocking is again basically one instruction.

Out of curiosity, is there a measurable difference?

J



Re: [PATCH] x86 spinlock: Fix memory corruption on completing completions

2015-02-11 Thread Jeremy Fitzhardinge

On 02/11/2015 09:24 AM, Oleg Nesterov wrote:
 I agree, and I have to admit I am not sure I fully understand why
 unlock uses the locked add. Except we need a barrier to avoid the race
 with the enter_slowpath() users, of course. Perhaps this is the only
 reason?

Right now it needs to be a locked operation to prevent read-reordering.
x86 memory ordering rules state that all writes are seen in a globally
consistent order, and are globally ordered wrt reads *on the same
addresses*, but reads to different addresses can be reordered wrt writes.

So, if the unlocking add were not a locked operation:

__add(&lock->tickets.head, TICKET_LOCK_INC);	/* not locked */

if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
	__ticket_unlock_slowpath(lock, prev);

Then the read of lock->tickets.tail can be reordered before the unlock,
which introduces a race:

/* read reordered here */
if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG)) /* false */
/* ... */;

/* other CPU sets SLOWPATH and blocks */

__add(&lock->tickets.head, TICKET_LOCK_INC);	/* not locked */

/* other CPU hung */

So it doesn't *have* to be a locked operation. This should also work:

__add(&lock->tickets.head, TICKET_LOCK_INC);	/* not locked */

lfence();	/* prevent read reordering */
if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
	__ticket_unlock_slowpath(lock, prev);

but in practice a locked add is cheaper than an lfence (or at least was).

This *might* be OK, but I think it's on dubious ground:

__add(&lock->tickets.head, TICKET_LOCK_INC);	/* not locked */

/* read overlaps write, and so is ordered */
if (unlikely(lock->head_tail & (TICKET_SLOWPATH_FLAG << TICKET_SHIFT)))
	__ticket_unlock_slowpath(lock, prev);

because I think Intel and AMD differed in interpretation about how
overlapping but different-sized reads & writes are ordered (or it simply
isn't architecturally defined).

If the slowpath flag is moved to head, then it would always have to be
locked anyway, because it needs to be atomic against other CPU's RMW
operations setting the flag.

J


Re: [PATCH] x86 spinlock: Fix memory corruption on completing completions

2015-02-10 Thread Jeremy Fitzhardinge

On 02/10/2015 05:26 AM, Oleg Nesterov wrote:
 On 02/10, Raghavendra K T wrote:
 On 02/10/2015 06:23 AM, Linus Torvalds wrote:

  add_smp(&lock->tickets.head, TICKET_LOCK_INC);
  if (READ_ONCE(lock->tickets.tail) & TICKET_SLOWPATH_FLAG) ..

 into something like

  val = xadd(&lock->ticket.head_tail, TICKET_LOCK_INC << TICKET_SHIFT);
  if (unlikely(val & TICKET_SLOWPATH_FLAG)) ...

 would be the right thing to do. Somebody should just check that I got
 that shift right, and that the tail is in the high bytes (head really
 needs to be high to work, if it's in the low byte(s) the xadd would
 overflow from head into tail which would be wrong).
 Unfortunately xadd could result in head overflow as tail is high.

 The other option was repeated cmpxchg which is bad I believe.
 Any suggestions?
 Stupid question... what if we simply move SLOWPATH from .tail to .head?
 In this case arch_spin_unlock() could do xadd(tickets.head) and check
 the result

Well, right now, tail is manipulated by locked instructions by CPUs
who are contending for the ticketlock, but head can be manipulated
unlocked by the CPU which currently owns the ticketlock. If SLOWPATH
moved into head, then non-owner CPUs would be touching head, requiring
everyone to use locked instructions on it.

That's the theory, but I don't see much (any?) code which depends on that.

Ideally we could find a way so that pv ticketlocks could use a plain
unlocked add for the unlock like the non-pv case, but I just don't see a
way to do it.
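For readers following along, here is a minimal sketch of the layout being
discussed, assuming 8-bit tickets; the names mirror the thread, but this is
not the exact kernel definition:

/* Sketch only: ticket lock with the slowpath flag kept in tail */
typedef unsigned char  __ticket_t;
typedef unsigned short __ticketpair_t;

#define TICKET_LOCK_INC		2	/* tickets advance by 2...           */
#define TICKET_SLOWPATH_FLAG	1	/* ...leaving bit 0 of tail for this */

typedef union {
	__ticketpair_t head_tail;	/* whole word, for cmpxchg()/xadd()  */
	struct __raw_tickets {
		__ticket_t head;	/* bumped by the owner on unlock     */
		__ticket_t tail;	/* xadd'ed (locked) by each waiter   */
	} tickets;
} arch_spinlock_t;

With the flag in tail, only contenders (which already use locked xadd/cmpxchg)
ever write that byte; moving the flag into head would mean non-owner CPUs
writing head as well, which is what would force the unlock add to be a locked
operation for everyone.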

 In this case __ticket_check_and_clear_slowpath() really needs to cmpxchg
 the whole .head_tail. Plus obviously more boring changes. This needs a
 separate patch even _if_ this can work.

Definitely.

 BTW. If we move clear slowpath into lock path, then probably trylock
 should be changed too? Something like below, we just need to clear SLOWPATH
 before cmpxchg.

How important / widely used is trylock these days?

J


 Oleg.

 --- x/arch/x86/include/asm/spinlock.h
 +++ x/arch/x86/include/asm/spinlock.h
 @@ -109,7 +109,8 @@ static __always_inline int arch_spin_try
   if (old.tickets.head != (old.tickets.tail & ~TICKET_SLOWPATH_FLAG))
   	return 0;
  
 - new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
 + new.tickets.head = old.tickets.head;
 + new.tickets.tail = (old.tickets.tail & ~TICKET_SLOWPATH_FLAG) + TICKET_LOCK_INC;
  
   /* cmpxchg is a full barrier, so nothing can move before it */
   return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;




Re: [PATCH] x86 spinlock: Fix memory corruption on completing completions

2015-02-08 Thread Jeremy Fitzhardinge
On 02/06/2015 06:49 AM, Raghavendra K T wrote:
 Paravirt spinlock clears slowpath flag after doing unlock.
 As explained by Linus currently it does:
 prev = *lock;
 add_smp(&lock->tickets.head, TICKET_LOCK_INC);

 /* add_smp() is a full mb() */

 if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
 __ticket_unlock_slowpath(lock, prev);


 which is *exactly* the kind of things you cannot do with spinlocks,
 because after you've done the add_smp() and released the spinlock
 for the fast-path, you can't access the spinlock any more.  Exactly
 because a fast-path lock might come in, and release the whole data
 structure.

Yeah, that's an embarrassingly obvious bug in retrospect.

 Linus suggested that we should not do any writes to lock after unlock(),
 and we can move slowpath clearing to fastpath lock.

Yep, that seems like a sound approach.

 However it brings additional case to be handled, viz., slowpath still
 could be set when somebody does arch_trylock. Handle that too by ignoring
 slowpath flag during lock availability check.

 Reported-by: Sasha Levin sasha.le...@oracle.com
 Suggested-by: Linus Torvalds torva...@linux-foundation.org
 Signed-off-by: Raghavendra K T raghavendra...@linux.vnet.ibm.com
 ---
  arch/x86/include/asm/spinlock.h | 70 
 -
  1 file changed, 34 insertions(+), 36 deletions(-)

 diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
 index 625660f..0829f86 100644
 --- a/arch/x86/include/asm/spinlock.h
 +++ b/arch/x86/include/asm/spinlock.h
 @@ -49,6 +49,23 @@ static inline void __ticket_enter_slowpath(arch_spinlock_t 
 *lock)
   set_bit(0, (volatile unsigned long *)&lock->tickets.tail);
  }
  
 +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock)
 +{
 + arch_spinlock_t old, new;
 + __ticket_t diff;
 +
 + old.tickets = READ_ONCE(lock->tickets);

Couldn't the caller pass in the lock state that it read rather than
re-reading it?
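Purely to illustrate that suggestion (not the posted code), a variant that
takes the ticket pair the caller last read while taking the lock might look
like this; a stale snapshot just makes the cmpxchg fail, which is harmless:

static inline void
__ticket_check_and_clear_slowpath(arch_spinlock_t *lock,
				  struct __raw_tickets seen)
{
	arch_spinlock_t old, new;

	/* nothing to do unless the caller saw the flag set */
	if (!(seen.tail & TICKET_SLOWPATH_FLAG))
		return;

	/* only clear the flag when there were no other contenders */
	if (((seen.tail & ~TICKET_SLOWPATH_FLAG) - seen.head) != TICKET_LOCK_INC)
		return;

	old.tickets = seen;
	new = old;
	new.tickets.tail &= ~TICKET_SLOWPATH_FLAG;

	/* if the lock has moved on since 'seen', this simply fails */
	cmpxchg(&lock->head_tail, old.head_tail, new.head_tail);
}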

 + diff = (old.tickets.tail & ~TICKET_SLOWPATH_FLAG) - old.tickets.head;
 +
 + /* try to clear slowpath flag when there are no contenders */
 + if ((old.tickets.tail & TICKET_SLOWPATH_FLAG) &&
 +     (diff == TICKET_LOCK_INC)) {
 + 	new = old;
 + 	new.tickets.tail &= ~TICKET_SLOWPATH_FLAG;
 + 	cmpxchg(&lock->head_tail, old.head_tail, new.head_tail);
 + }
 +}
 +
  #else  /* !CONFIG_PARAVIRT_SPINLOCKS */
  static __always_inline void __ticket_lock_spinning(arch_spinlock_t *lock,
   __ticket_t ticket)
 @@ -59,6 +76,10 @@ static inline void __ticket_unlock_kick(arch_spinlock_t 
 *lock,
  {
  }
  
 +static inline void __ticket_check_and_clear_slowpath(arch_spinlock_t *lock)
 +{
 +}
 +
  #endif /* CONFIG_PARAVIRT_SPINLOCKS */
  
  static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 @@ -84,7 +105,7 @@ static __always_inline void arch_spin_lock(arch_spinlock_t 
 *lock)
   register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
  
   inc = xadd(&lock->tickets, inc);
 - if (likely(inc.head == inc.tail))
 + if (likely(inc.head == (inc.tail & ~TICKET_SLOWPATH_FLAG)))

The intent of this conditional was to be the quickest possible path when
taking a fastpath lock, with the code below being used for all slowpath
locks (free or taken). So I don't think masking out SLOWPATH_FLAG is
necessary here.

   goto out;
  
   inc.tail &= ~TICKET_SLOWPATH_FLAG;
 @@ -98,7 +119,10 @@ static __always_inline void 
 arch_spin_lock(arch_spinlock_t *lock)
   } while (--count);
   __ticket_lock_spinning(lock, inc.tail);
   }
 -out: barrier();  /* make sure nothing creeps before the lock is taken */
 +out:
 + __ticket_check_and_clear_slowpath(lock);
 +
 + barrier();  /* make sure nothing creeps before the lock is taken */

Which means that if the goto out path is only ever used for fastpath
locks, you can limit calling __ticket_check_and_clear_slowpath() to the
slowpath case.
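Roughly, that would look something like the following (the loop body is
reconstructed from the hunks above, so treat it as a sketch of the idea
rather than the actual patch):

static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
{
	register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };

	inc = xadd(&lock->tickets, inc);
	if (likely(inc.head == inc.tail))
		goto out;		/* fastpath lock: nothing to clear */

	inc.tail &= ~TICKET_SLOWPATH_FLAG;
	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (READ_ONCE(lock->tickets.head) == inc.tail)
				goto clear;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
clear:
	__ticket_check_and_clear_slowpath(lock);	/* slowpath locks only */
out:
	barrier();	/* make sure nothing creeps before the lock is taken */
}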

  }
  
  static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 @@ -115,47 +139,21 @@ static __always_inline int 
 arch_spin_trylock(arch_spinlock_t *lock)
   return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
  }
  
 -static inline void __ticket_unlock_slowpath(arch_spinlock_t *lock,
 - arch_spinlock_t old)
 -{
 - arch_spinlock_t new;
 -
 - BUILD_BUG_ON(((__ticket_t)NR_CPUS) != NR_CPUS);
 -
 - /* Perform the unlock on the before copy */
 - old.tickets.head += TICKET_LOCK_INC;

NB (see below)

 -
 - /* Clear the slowpath flag */
 - new.head_tail = old.head_tail & ~(TICKET_SLOWPATH_FLAG << TICKET_SHIFT);
 -
 - /*
 -  * If the lock is uncontended, clear the flag - use cmpxchg in
 -  * case it changes behind our back 

Re: [PATCH delta V13 14/14] kvm : Paravirtual ticketlocks support for linux guests running on KVM hypervisor

2013-08-13 Thread Jeremy Fitzhardinge
On 08/13/2013 01:02 PM, Raghavendra K T wrote:
 * Ingo Molnar mi...@kernel.org [2013-08-13 18:55:52]:

 Would be nice to have a delta fix patch against tip:x86/spinlocks, which 
 I'll then backmerge into that series via rebasing it.

 There was a namespace collision of PER_CPU lock_waiting variable when
 we have both Xen and KVM enabled. 

 Perhaps this week wasn't for me. Had run 100 times randconfig in a loop
 for the fix sent earlier :(. 

 Ingo, the delta patch below should fix it; IIRC, I hope you will be folding this
 back into patch 14/14 itself. Else please let me know.
 I have already run allnoconfig, allyesconfig, randconfig with below patch. 
 But will
 test again. This should apply on top of tip:x86/spinlocks.

 ---8---
 From: Raghavendra K T raghavendra...@linux.vnet.ibm.com

 Fix Namespace collision for lock_waiting

 Signed-off-by: Raghavendra K T raghavendra...@linux.vnet.ibm.com
 ---
 diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
 index d442471..b8ef630 100644
 --- a/arch/x86/kernel/kvm.c
 +++ b/arch/x86/kernel/kvm.c
 @@ -673,7 +673,7 @@ struct kvm_lock_waiting {
  static cpumask_t waiting_cpus;
  
  /* Track spinlock on which a cpu is waiting */
 -static DEFINE_PER_CPU(struct kvm_lock_waiting, lock_waiting);
 +static DEFINE_PER_CPU(struct kvm_lock_waiting, klock_waiting);

Has static stopped meaning static?

J

  
  static void kvm_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
  {
 @@ -685,7 +685,7 @@ static void kvm_lock_spinning(struct arch_spinlock *lock, 
 __ticket_t want)
   if (in_nmi())
   return;
  
 - w = &__get_cpu_var(lock_waiting);
 + w = &__get_cpu_var(klock_waiting);
   cpu = smp_processor_id();
   start = spin_time_start();
  
 @@ -756,7 +756,7 @@ static void kvm_unlock_kick(struct arch_spinlock *lock, 
 __ticket_t ticket)
  
   add_stats(RELEASED_SLOW, 1);
   for_each_cpu(cpu, waiting_cpus) {
 - const struct kvm_lock_waiting *w = &per_cpu(lock_waiting, cpu);
 + const struct kvm_lock_waiting *w = &per_cpu(klock_waiting, cpu);
   if (ACCESS_ONCE(w->lock) == lock &&
       ACCESS_ONCE(w->want) == ticket) {
   add_stats(RELEASED_SLOW_KICKED, 1);





Re: [PATCH RFC V9 0/19] Paravirtualized ticket spinlocks

2013-06-01 Thread Jeremy Fitzhardinge
On 06/01/2013 01:14 PM, Andi Kleen wrote:
 FWIW I use the paravirt spinlock ops for adding lock elision
 to the spinlocks.

Does lock elision still use the ticketlock algorithm/structure, or are
they different?  If they're still basically ticketlocks, then it seems
to me that they're complementary - HLE handles the fastpath, and pv the
slowpath.

 This needs to be done at the top level (so the level you're removing)

 However I don't like the pv mechanism very much and would 
 be fine with using a static key hook in the main path
 like I do for all the other lock types.

Right.

J


Re: [PATCH RFC V9 1/19] x86/spinlock: Replace pv spinlocks with pv ticketlocks

2013-06-01 Thread Jeremy Fitzhardinge
On 06/01/2013 12:21 PM, Raghavendra K T wrote:
 x86/spinlock: Replace pv spinlocks with pv ticketlocks

 From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
I'm not sure what the etiquette is here; I did the work while at Citrix,
but jer...@goop.org is my canonical email address.  The Citrix address
is dead and bounces, so is useless for anything.  Probably best to
change it.

J


 Rather than outright replacing the entire spinlock implementation in
 order to paravirtualize it, keep the ticket lock implementation but add
 a couple of pvops hooks on the slow path (long spin on lock, unlocking
 a contended lock).

 Ticket locks have a number of nice properties, but they also have some
 surprising behaviours in virtual environments.  They enforce a strict
 FIFO ordering on cpus trying to take a lock; however, if the hypervisor
 scheduler does not schedule the cpus in the correct order, the system can
 waste a huge amount of time spinning until the next cpu can take the lock.

 (See Thomas Friebel's talk Prevent Guests from Spinning Around
 http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

 To address this, we add two hooks:
  - __ticket_spin_lock which is called after the cpu has been
spinning on the lock for a significant number of iterations but has
failed to take the lock (presumably because the cpu holding the lock
has been descheduled).  The lock_spinning pvop is expected to block
the cpu until it has been kicked by the current lock holder.
  - __ticket_spin_unlock, which, on releasing a contended lock
(there are more cpus with tail tickets), looks to see if the next
cpu is blocked and wakes it if so.

 When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
 functions causes all the extra code to go away.

 Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
 Reviewed-by: Konrad Rzeszutek Wilk konrad.w...@oracle.com
 Tested-by: Attilio Rao attilio@citrix.com
 [ Raghavendra: Changed SPIN_THRESHOLD ]
 Signed-off-by: Raghavendra K T raghavendra...@linux.vnet.ibm.com
 ---
  arch/x86/include/asm/paravirt.h   |   32 
  arch/x86/include/asm/paravirt_types.h |   10 ++
  arch/x86/include/asm/spinlock.h   |   53 
 +++--
  arch/x86/include/asm/spinlock_types.h |4 --
  arch/x86/kernel/paravirt-spinlocks.c  |   15 +
  arch/x86/xen/spinlock.c   |8 -
  6 files changed, 61 insertions(+), 61 deletions(-)

 diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
 index cfdc9ee..040e72d 100644
 --- a/arch/x86/include/asm/paravirt.h
 +++ b/arch/x86/include/asm/paravirt.h
 @@ -712,36 +712,16 @@ static inline void __set_fixmap(unsigned /* enum 
 fixed_addresses */ idx,
  
  #if defined(CONFIG_SMP)  defined(CONFIG_PARAVIRT_SPINLOCKS)
  
 -static inline int arch_spin_is_locked(struct arch_spinlock *lock)
 +static __always_inline void __ticket_lock_spinning(struct arch_spinlock 
 *lock,
 + __ticket_t ticket)
  {
 - return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
 + PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
  }
  
 -static inline int arch_spin_is_contended(struct arch_spinlock *lock)
 +static __always_inline void ticket_unlock_kick(struct arch_spinlock 
 *lock,
 + __ticket_t ticket)
  {
 - return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
 -}
 -#define arch_spin_is_contended   arch_spin_is_contended
 -
 -static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 -{
 - PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
 -}
 -
 -static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
 -   unsigned long flags)
 -{
 - PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
 -}
 -
 -static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
 -{
 - return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
 -}
 -
 -static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
 -{
 - PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
 + PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
  }
  
  #endif
 diff --git a/arch/x86/include/asm/paravirt_types.h 
 b/arch/x86/include/asm/paravirt_types.h
 index 0db1fca..d5deb6d 100644
 --- a/arch/x86/include/asm/paravirt_types.h
 +++ b/arch/x86/include/asm/paravirt_types.h
 @@ -327,13 +327,11 @@ struct pv_mmu_ops {
  };
  
  struct arch_spinlock;
 +#include <asm/spinlock_types.h>
 +
  struct pv_lock_ops {
 - int (*spin_is_locked)(struct arch_spinlock *lock);
 - int (*spin_is_contended)(struct arch_spinlock *lock);
 - void (*spin_lock)(struct arch_spinlock *lock);
 - void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long 
 flags);
 - int (*spin_trylock)(struct arch_spinlock *lock);
 - void (*spin_unlock

Re: [patch 09/18] KVM: x86: introduce facility to support vsyscall pvclock, via MSR

2012-10-29 Thread Jeremy Fitzhardinge
On 10/29/2012 07:45 AM, Glauber Costa wrote:
 On 10/24/2012 05:13 PM, Marcelo Tosatti wrote:
 Allow a guest to register a second location for the VCPU time info

 structure for each vcpu (as described by MSR_KVM_SYSTEM_TIME_NEW).
 This is intended to allow the guest kernel to map this information
 into a usermode accessible page, so that usermode can efficiently
 calculate system time from the TSC without having to make a syscall.

 Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
 Can you please be a bit more specific about why we need this? Why does
 the host need to provide us with two pages with the exact same data? Why
 can't just do it with mapping tricks in the guest?

In Xen the pvclock structure is embedded within a pile of other stuff
that shouldn't be mapped into guest memory, so providing for a second
location allows it to be placed wherever is convenient for the guest.
That's a restriction of the Xen ABI, but I don't know if it affects KVM.

J


Re: [patch 08/18] x86: pvclock: generic pvclock vsyscall initialization

2012-10-29 Thread Jeremy Fitzhardinge
On 10/29/2012 07:54 AM, Marcelo Tosatti wrote:
 On Mon, Oct 29, 2012 at 06:18:20PM +0400, Glauber Costa wrote:
 On 10/24/2012 05:13 PM, Marcelo Tosatti wrote:
 Index: vsyscall/arch/x86/Kconfig
 ===
 --- vsyscall.orig/arch/x86/Kconfig
 +++ vsyscall/arch/x86/Kconfig
 @@ -632,6 +632,13 @@ config PARAVIRT_SPINLOCKS
  
  config PARAVIRT_CLOCK
 bool
 +config PARAVIRT_CLOCK_VSYSCALL
 +   bool "Paravirt clock vsyscall support"
 +   depends on PARAVIRT_CLOCK && GENERIC_TIME_VSYSCALL
 +   ---help---
 + Enable performance critical clock related system calls to
 + be executed in userspace, provided that the hypervisor
 + supports it.
  
  endif
 Besides debugging, what is the point in having this as an
 extra-selectable? Is there any case in which a virtual machine has code
 for this, but may decide to run without it ?
 Don't think so (its pretty small anyway, the code).

 I believe all this code in vsyscall should be wrapped in PARAVIRT_CLOCK
 only.
 Unless Jeremy has a reason, i'm fine with that.

I often set up blind config variables for dependency management; I'm
guessing the GENERIC_TIME_VSYSCALL dependency is important.  I don't think
the problem is that this exists, but rather that it's a user-selectable
option.  Removing the prompt should fix that.
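For example, dropping the prompt turns it into a blind option (a sketch only,
reusing the names from the quoted patch):

config PARAVIRT_CLOCK_VSYSCALL
	bool
	depends on PARAVIRT_CLOCK && GENERIC_TIME_VSYSCALL

With no prompt string it can only be enabled by a default or a select from
another option, so it stays a pure dependency-management variable.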

J



Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks

2012-05-14 Thread Jeremy Fitzhardinge
On 05/13/2012 11:45 AM, Raghavendra K T wrote:
 On 05/07/2012 08:22 PM, Avi Kivity wrote:

 I could not come with pv-flush results (also Nikunj had clarified that
 the result was on NOn PLE

 I'd like to see those numbers, then.

 Ingo, please hold on the kvm-specific patches, meanwhile.


 3 guests 8GB RAM, 1 used for kernbench
 (kernbench -f -H -M -o 20) other for cpuhog (shell script with  while
 true do hackbench)

 1x: no hogs
 2x: 8hogs in one guest
 3x: 8hogs each in two guest

 kernbench on PLE:
 Machine : IBM xSeries with Intel(R) Xeon(R)  X7560 2.27GHz CPU with 32
 core, with 8 online cpus and 4*64GB RAM.

 The average is taken over 4 iterations with 3 run each (4*3=12). and
 stdev is calculated over mean reported in each run.


 A): 8 vcpu guest

             BASE                 BASE+patch           %improvement w.r.t.
             mean (sd)            mean (sd)            patched kernel time
 case 1*1x:  61.7075   (1.17872)    60.93      (1.475625)    1.27605
 case 1*2x:  107.2125  (1.3821349)  97.506675  (1.3461878)   9.95401
 case 1*3x:  144.3515  (1.8203927)  138.9525   (0.58309319)  3.8855


 B): 16 vcpu guest
             BASE                 BASE+patch           %improvement w.r.t.
             mean (sd)            mean (sd)            patched kernel time
 case 2*1x:  70.524    (1.5941395)  69.68866   (1.9392529)   1.19867
 case 2*2x:  133.0738  (1.4558653)  124.8568   (1.4544986)   6.58114
 case 2*3x:  206.0094  (1.3437359)  181.4712   (2.9134116)   13.5218

 C): 32 vcpu guest
             BASE                 BASE+patch           %improvement w.r.t.
             mean (sd)            mean (sd)            patched kernel time
 case 4*1x:  100.61046 (2.7603485)  85.48734   (2.6035035)   17.6905

What does the 4*1x notation mean? Do these workloads have overcommit
of the PCPU resources?

When I measured it, even quite small amounts of overcommit lead to large
performance drops with non-pv ticket locks (on the order of 10%
improvements when there were 5 busy VCPUs on a 4 cpu system).  I never
tested it on larger machines, but I guess that represents around 25%
overcommit, or 40 busy VCPUs on a 32-PCPU system.

J


Re: [PATCH RFC V8 0/17] Paravirtualized ticket spinlocks

2012-05-07 Thread Jeremy Fitzhardinge
On 05/07/2012 06:49 AM, Avi Kivity wrote:
 On 05/07/2012 04:46 PM, Srivatsa Vaddagiri wrote:
 * Raghavendra K T raghavendra...@linux.vnet.ibm.com [2012-05-07 19:08:51]:

 I'll get hold of a PLE m/c and come up with the numbers soon, but I'll
 expect the improvement around 1-3% as it was in the last version.
 Deferring preemption (when vcpu is holding lock) may give us better than 
 1-3% 
 results on PLE hardware. Something worth trying IMHO.
 Is the improvement so low, because PLE is interfering with the patch, or
 because PLE already does a good job?

How does PLE help with ticket scheduling on unlock?  I thought it would
just help with the actual spin loops.

J


Re: [RFC PATCH v1 3/5] KVM: Add paravirt kvm_flush_tlb_others

2012-05-01 Thread Jeremy Fitzhardinge
On 05/01/2012 03:59 AM, Peter Zijlstra wrote:
 On Tue, 2012-05-01 at 12:57 +0200, Peter Zijlstra wrote:
 Anyway, I don't have any idea about the costs involved with
 HAVE_RCU_TABLE_FREE, but I don't think its much.. otherwise these other
 platforms (PPC,SPARC) wouldn't have used it, gup_fast() is a very
 specific case, whereas mmu-gather is something affecting pretty much all
 tasks. 
 Which reminds me, I thought Xen needed this too, but a git grep on
 HAVE_RCU_TABLE_FREE shows its still only ppc and sparc.

 Jeremy?

Yeah, I was thinking that too, but I can't remember what we did to
resolve it.  For pure PV guests, gupf simply isn't used, so the problem
is moot.  But for dom0 or PCI-passthrough it could be.

Konrad, Stefano?

J


Re: [Xen-devel] [PATCH RFC V6 0/11] Paravirtualized ticketlocks

2012-04-16 Thread Jeremy Fitzhardinge
On 04/16/2012 09:36 AM, Ian Campbell wrote:
 On Mon, 2012-04-16 at 16:44 +0100, Konrad Rzeszutek Wilk wrote:
 On Sat, Mar 31, 2012 at 09:37:45AM +0530, Srivatsa Vaddagiri wrote:
 * Thomas Gleixner t...@linutronix.de [2012-03-31 00:07:58]:

 I know that Peter is going to go berserk on me, but if we are running
 a paravirt guest then it's simple to provide a mechanism which allows
 the host (aka hypervisor) to check that in the guest just by looking
 at some global state.

 So if a guest exits due to an external event it's easy to inspect the
 state of that guest and avoid to schedule away when it was interrupted
 in a spinlock held section. That guest/host shared state needs to be
 modified to indicate the guest to invoke an exit when the last nested
 lock has been released.
 I had attempted something like that long back:

 http://lkml.org/lkml/2010/6/3/4

 The issue is with ticketlocks though. VCPUs could go into a spin w/o
 a lock being held by anybody. Say VCPUs 1-99 try to grab a lock in
 that order (on a host with one cpu). VCPU1 wins (after VCPU0 releases it)
 and releases the lock. VCPU1 is next eligible to take the lock. If 
 that is not scheduled early enough by host, then remaining vcpus would keep 
 spinning (even though lock is technically not held by anybody) w/o making 
 forward progress.

 In that situation, what we really need is for the guest to hint to host
 scheduler to schedule VCPU1 early (via yield_to or something similar). 

 The current pv-spinlock patches however does not track which vcpu is
 spinning at what head of the ticketlock. I suppose we can consider 
 that optimization in future and see how much benefit it provides (over
 plain yield/sleep the way its done now).
 Right. I think Jeremy played around with this some time?
 5/11 xen/pvticketlock: Xen implementation for PV ticket locks tracks
 which vcpus are waiting for a lock in cpumask_t waiting_cpus and
 tracks which lock each is waiting for in per-cpu lock_waiting. This is
 used in xen_unlock_kick to kick the right CPU. There's a loop over only
 the waiting cpus to figure out who to kick.

Yes, and AFAIK the KVM pv-ticketlock patches do the same thing.  If a
(V)CPU is asleep, then sending it a kick is pretty much equivalent to a
yield to (not precisely, but it should get scheduled soon enough, and it
won't be competing with a pile of VCPUs with no useful work to do).

J


Re: [PATCH RFC V2 5/5] kvm guest : pv-ticketlocks support for linux guests running on KVM hypervisor

2011-10-26 Thread Jeremy Fitzhardinge
On 10/26/2011 12:23 PM, Raghavendra K T wrote:
 On 10/26/2011 12:04 AM, Jeremy Fitzhardinge wrote:
 On 10/23/2011 12:07 PM, Raghavendra K T wrote:
 This patch extends Linux guests running on KVM hypervisor to support
 +/*
 + * Setup pv_lock_ops to exploit KVM_FEATURE_WAIT_FOR_KICK if present.
 + * This needs to be setup really early in boot, before the first
 call to
 + * spinlock is issued!

 Actually, it doesn't matter that much.  The in-memory format is the same
 for regular and PV spinlocks, and the PV paths only come into play if
 the slowpath flag is set in the lock, which it never will be by the
 non-PV code.

 In principle, you could defer initializing PV ticketlocks until some
 arbitrarily late point if you notice that the system is oversubscribed
 enough to require it.

 ok.. so this means it will not affect even if it is initialized in
 middle somewhere, but better to do it before we start seeing lock
 contention.

Right.  Or more specifically, lock contention while you have VCPU
overcommit.

 our current aim was to have it before any printk happens.
 So I'll trim the comment to something like:

 Setup pv_lock_ops to exploit KVM_FEATURE_WAIT_FOR_KICK if present.
 This needs to be setup early in boot. ?

You can hook the smp_ops.smp_prepare_cpus call and initialize it there. 
There's no need to add new hook code.
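Something along these lines, say (the smp_ops field is real; the kvm_* names
here are just illustrative assumptions, not the posted patch):

#include <asm/smp.h>

extern void kvm_spinlock_init(void);	/* assumed pv_lock_ops registration */

static void (*kvm_orig_smp_prepare_cpus)(unsigned int max_cpus);

static void kvm_smp_prepare_cpus(unsigned int max_cpus)
{
	kvm_orig_smp_prepare_cpus(max_cpus);
	kvm_spinlock_init();		/* register pv_lock_ops here */
}

static void __init kvm_spinlock_setup(void)
{
	kvm_orig_smp_prepare_cpus = smp_ops.smp_prepare_cpus;
	smp_ops.smp_prepare_cpus = kvm_smp_prepare_cpus;
}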

J


Re: [PATCH RFC V2 3/5] kvm hypervisor : Add two hypercalls to support pv-ticketlock

2011-10-26 Thread Jeremy Fitzhardinge
On 10/26/2011 03:34 AM, Avi Kivity wrote:
 On 10/25/2011 08:24 PM, Raghavendra K T wrote:
 So then do also you foresee the need for directed yield at some point,
 to address LHP? provided we have good improvements to prove.
 Doesn't this patchset completely eliminate lock holder preemption?

Well, there's the question of whether its better for someone waiting for
a contended lock to just go to sleep and rely on the scheduler to give
CPU time to whoever currently has the lock, or if the scheduler needs a
little hint to boost the lock holder by giving it the waiter's timeslice.

I tend to prefer the former, since there's no reason to suppose that
the lock holder vcpu is necessarily the scheduler's top priority, and it
may want to schedule something else anyway.

J


Re: [PATCH RFC V2 4/5] kvm guest : Added configuration support to enable debug information for KVM Guests

2011-10-25 Thread Jeremy Fitzhardinge
On 10/24/2011 03:15 AM, Avi Kivity wrote:
 On 10/23/2011 09:07 PM, Raghavendra K T wrote:
 Added configuration support to enable debug information
 for KVM Guests in debugfs
 
 Signed-off-by: Srivatsa Vaddagiri va...@linux.vnet.ibm.com
 Signed-off-by: Suzuki Poulose suz...@in.ibm.com
 Signed-off-by: Raghavendra K T raghavendra...@linux.vnet.ibm.com
 ---
 diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
 index 1f03f82..ed34269 100644
 --- a/arch/x86/Kconfig
 +++ b/arch/x86/Kconfig
 @@ -562,6 +562,15 @@ config KVM_GUEST
This option enables various optimizations for running under the KVM
hypervisor.
  
 +config KVM_DEBUG_FS
 +	bool "Enable debug information for KVM Guests in debugfs"
 +depends on KVM_GUEST
 +default n
 +---help---
 +  This option enables collection of various statistics for KVM guest.
 +  Statistics are displayed in debugfs filesystem. Enabling this option
 +  may incur significant overhead.
 +
  source arch/x86/lguest/Kconfig
  

 This might be better implemented through tracepoints, which can be
 enabled dynamically.

Tracepoints use spinlocks, so that could get awkward.

J



Re: [PATCH RFC V2 5/5] kvm guest : pv-ticketlocks support for linux guests running on KVM hypervisor

2011-10-25 Thread Jeremy Fitzhardinge
On 10/23/2011 12:07 PM, Raghavendra K T wrote:
 This patch extends Linux guests running on KVM hypervisor to support
 pv-ticketlocks. Very early during bootup, paravirtualied KVM guest detects if 
 the hypervisor has required feature (KVM_FEATURE_WAIT_FOR_KICK) to support 
 pv-ticketlocks. If so, support for pv-ticketlocks is registered via 
 pv_lock_ops.

 Signed-off-by: Srivatsa Vaddagiri va...@linux.vnet.ibm.com
 Signed-off-by: Suzuki Poulose suz...@in.ibm.com
 Signed-off-by: Raghavendra K T raghavendra...@linux.vnet.ibm.com
 ---
 diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
 index 2874c19..c7f34b7 100644
 --- a/arch/x86/include/asm/kvm_para.h
 +++ b/arch/x86/include/asm/kvm_para.h
 @@ -195,10 +195,18 @@ void kvm_async_pf_task_wait(u32 token);
  void kvm_async_pf_task_wake(u32 token);
  u32 kvm_read_and_reset_pf_reason(void);
  extern void kvm_disable_steal_time(void);
 -#else
 +
 +#ifdef CONFIG_PARAVIRT_SPINLOCKS
 +void __init kvm_guest_early_init(void);
 +#else /* CONFIG_PARAVIRT_SPINLOCKS */
 +#define kvm_guest_early_init() do { } while (0)
 +#endif /* CONFIG_PARAVIRT_SPINLOCKS */
 +
 +#else /* CONFIG_KVM_GUEST */
  #define kvm_guest_init() do { } while (0)
  #define kvm_async_pf_task_wait(T) do {} while(0)
  #define kvm_async_pf_task_wake(T) do {} while(0)
 +#define kvm_guest_early_init() do { } while (0)
  static inline u32 kvm_read_and_reset_pf_reason(void)
  {
   return 0;
 diff --git a/arch/x86/kernel/head32.c b/arch/x86/kernel/head32.c
 index 3bb0850..fb25bca 100644
 --- a/arch/x86/kernel/head32.c
 +++ b/arch/x86/kernel/head32.c
 @@ -9,6 +9,7 @@
  #include <linux/start_kernel.h>
  #include <linux/mm.h>
  #include <linux/memblock.h>
 +#include <linux/kvm_para.h>
  
  #include <asm/setup.h>
  #include <asm/sections.h>
 @@ -59,6 +60,8 @@ void __init i386_start_kernel(void)
   break;
   }
  
 +  kvm_guest_early_init();
 +
   /*
* At this point everything still needed from the boot loader
* or BIOS or kernel text should be early reserved or marked not
 diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
 index 5655c22..cabf8ec 100644
 --- a/arch/x86/kernel/head64.c
 +++ b/arch/x86/kernel/head64.c
 @@ -13,6 +13,7 @@
  #include <linux/start_kernel.h>
  #include <linux/io.h>
  #include <linux/memblock.h>
 +#include <linux/kvm_para.h>
  
  #include <asm/processor.h>
  #include <asm/proto.h>
 @@ -115,6 +116,8 @@ void __init x86_64_start_reservations(char 
 *real_mode_data)
  
   reserve_ebda_region();
  
 + kvm_guest_early_init();
 +
   /*
* At this point everything still needed from the boot loader
* or BIOS or kernel text should be early reserved or marked not
 diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
 index a9c2116..f4f341f 100644
 --- a/arch/x86/kernel/kvm.c
 +++ b/arch/x86/kernel/kvm.c
 @@ -39,6 +39,16 @@
  #include <asm/desc.h>
  #include <asm/tlbflush.h>
  
 +#ifdef CONFIG_PARAVIRT_SPINLOCKS
 +
 +#ifdef CONFIG_KVM_DEBUG_FS
 +
 +#include <linux/debugfs.h>
 +
 +#endif /* CONFIG_KVM_DEBUG_FS */
 +
 +#endif /* CONFIG_PARAVIRT_SPINLOCKS */
 +
  #define MMU_QUEUE_SIZE 1024
  
  static int kvmapf = 1;
 @@ -627,3 +637,240 @@ static __init int activate_jump_labels(void)
   return 0;
  }
  arch_initcall(activate_jump_labels);
 +
 +#ifdef CONFIG_PARAVIRT_SPINLOCKS
 +
 +#ifdef CONFIG_KVM_DEBUG_FS
 +
 +static struct kvm_spinlock_stats
 +{
 + u32 taken_slow;
 + u32 taken_slow_pickup;
 +
 + u32 released_slow;
 + u32 released_slow_kicked;
 +
 +#define HISTO_BUCKETS	30
 + u32 histo_spin_blocked[HISTO_BUCKETS+1];
 +
 + u64 time_blocked;
 +} spinlock_stats;
 +
 +static u8 zero_stats;
 +
 +static inline void check_zero(void)
 +{
 + if (unlikely(zero_stats)) {
 + 	memset(&spinlock_stats, 0, sizeof(spinlock_stats));
 + zero_stats = 0;
 + }
 +}
 +
 +#define ADD_STATS(elem, val) \
 + do { check_zero(); spinlock_stats.elem += (val); } while (0)
 +
 +static inline u64 spin_time_start(void)
 +{
 + return sched_clock();
 +}
 +
 +static void __spin_time_accum(u64 delta, u32 *array)
 +{
 + unsigned index = ilog2(delta);
 +
 + check_zero();
 +
 + if (index < HISTO_BUCKETS)
 + array[index]++;
 + else
 + array[HISTO_BUCKETS]++;
 +}
 +
 +static inline void spin_time_accum_blocked(u64 start)
 +{
 + u32 delta = sched_clock() - start;
 +
 + __spin_time_accum(delta, spinlock_stats.histo_spin_blocked);
 + spinlock_stats.time_blocked += delta;
 +}
 +
 +static struct dentry *d_spin_debug;
 +static struct dentry *d_kvm_debug;
 +
 +struct dentry *kvm_init_debugfs(void)
 +{
 + d_kvm_debug = debugfs_create_dir("kvm", NULL);
 + if (!d_kvm_debug)
 + 	printk(KERN_WARNING "Could not create 'kvm' debugfs directory\n");
 +
 + return d_kvm_debug;
 +}
 +
 +static int __init kvm_spinlock_debugfs(void)
 +{
 + struct dentry *d_kvm = kvm_init_debugfs();
 +
 + if (d_kvm == NULL)
 +  

Re: [PATCH RFC V2 5/5] kvm guest : pv-ticketlocks support for linux guests running on KVM hypervisor

2011-10-25 Thread Jeremy Fitzhardinge
On 10/23/2011 12:07 PM, Raghavendra K T wrote:
 This patch extends Linux guests running on KVM hypervisor to support
 pv-ticketlocks. Very early during bootup, paravirtualied KVM guest detects if 
 the hypervisor has required feature (KVM_FEATURE_WAIT_FOR_KICK) to support 
 pv-ticketlocks. If so, support for pv-ticketlocks is registered via 
 pv_lock_ops.

 Signed-off-by: Srivatsa Vaddagiri va...@linux.vnet.ibm.com
 Signed-off-by: Suzuki Poulose suz...@in.ibm.com
 Signed-off-by: Raghavendra K T raghavendra...@linux.vnet.ibm.com
 ---
 diff --git a/arch/x86/include/asm/kvm_para.h b/arch/x86/include/asm/kvm_para.h
 index 2874c19..c7f34b7 100644
 --- a/arch/x86/include/asm/kvm_para.h
 +++ b/arch/x86/include/asm/kvm_para.h
 @@ -195,10 +195,18 @@ void kvm_async_pf_task_wait(u32 token);
  void kvm_async_pf_task_wake(u32 token);
  u32 kvm_read_and_reset_pf_reason(void);
  extern void kvm_disable_steal_time(void);
 -#else
 +
 +#ifdef CONFIG_PARAVIRT_SPINLOCKS
 +void __init kvm_guest_early_init(void);
 +#else /* CONFIG_PARAVIRT_SPINLOCKS */
 +#define kvm_guest_early_init() do { } while (0)
 +#endif /* CONFIG_PARAVIRT_SPINLOCKS */
 +
 +#else /* CONFIG_KVM_GUEST */
  #define kvm_guest_init() do { } while (0)
  #define kvm_async_pf_task_wait(T) do {} while(0)
  #define kvm_async_pf_task_wake(T) do {} while(0)
 +#define kvm_guest_early_init() do { } while (0)
  static inline u32 kvm_read_and_reset_pf_reason(void)
  {
   return 0;
 diff --git a/arch/x86/kernel/head32.c b/arch/x86/kernel/head32.c
 index 3bb0850..fb25bca 100644
 --- a/arch/x86/kernel/head32.c
 +++ b/arch/x86/kernel/head32.c
 @@ -9,6 +9,7 @@
  #include <linux/start_kernel.h>
  #include <linux/mm.h>
  #include <linux/memblock.h>
 +#include <linux/kvm_para.h>
  
  #include <asm/setup.h>
  #include <asm/sections.h>
 @@ -59,6 +60,8 @@ void __init i386_start_kernel(void)
   break;
   }
  
 +  kvm_guest_early_init();
 +
   /*
* At this point everything still needed from the boot loader
* or BIOS or kernel text should be early reserved or marked not
 diff --git a/arch/x86/kernel/head64.c b/arch/x86/kernel/head64.c
 index 5655c22..cabf8ec 100644
 --- a/arch/x86/kernel/head64.c
 +++ b/arch/x86/kernel/head64.c
 @@ -13,6 +13,7 @@
  #include <linux/start_kernel.h>
  #include <linux/io.h>
  #include <linux/memblock.h>
 +#include <linux/kvm_para.h>
  
  #include <asm/processor.h>
  #include <asm/proto.h>
 @@ -115,6 +116,8 @@ void __init x86_64_start_reservations(char 
 *real_mode_data)
  
   reserve_ebda_region();
  
 + kvm_guest_early_init();
 +
   /*
* At this point everything still needed from the boot loader
* or BIOS or kernel text should be early reserved or marked not
 diff --git a/arch/x86/kernel/kvm.c b/arch/x86/kernel/kvm.c
 index a9c2116..f4f341f 100644
 --- a/arch/x86/kernel/kvm.c
 +++ b/arch/x86/kernel/kvm.c
 @@ -39,6 +39,16 @@
  #include <asm/desc.h>
  #include <asm/tlbflush.h>
  
 +#ifdef CONFIG_PARAVIRT_SPINLOCKS
 +
 +#ifdef CONFIG_KVM_DEBUG_FS
 +
 +#include <linux/debugfs.h>
 +
 +#endif /* CONFIG_KVM_DEBUG_FS */
 +
 +#endif /* CONFIG_PARAVIRT_SPINLOCKS */

This is a big mess.  Is there any problem with including <linux/debugfs.h>
unconditionally?  Or at least using #if
defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_KVM_DEBUG_FS)?
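i.e. something like (a sketch of the suggestion):

#if defined(CONFIG_PARAVIRT_SPINLOCKS) && defined(CONFIG_KVM_DEBUG_FS)
#include <linux/debugfs.h>
#endif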

J


Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks

2011-10-14 Thread Jeremy Fitzhardinge
On 10/14/2011 07:17 AM, Jason Baron wrote:
 On Thu, Oct 13, 2011 at 09:44:48AM -0700, Jeremy Fitzhardinge wrote:
 pvops is basically a collection of ordinary _ops structures full of
 function pointers, but it has a layer of patching to help optimise it. 
 In the common case, this just replaces an indirect call with a direct
 one, but in some special cases it can inline code.  This is used for
 small, extremely performance-critical things like cli/sti, but it
 awkward to use in general because you have to specify the inlined code
 as a parameterless asm.

 I haven't looked at the pvops patching (probably should), but I was
 wondering if jump labels could be used for it? Or is there something
 that the pvops patching is doing that jump labels can't handle?

Jump labels are essentially binary: you can use path A or path B.  pvops
are multiway: there's no limit to the number of potential number of
paravirtualized hypervisor implementations.  At the moment we have 4:
native, Xen, KVM and lguest.

As I said, pvops patching is very general since it allows a particular
op site to be either patched with a direct call/jump to the target code,
or have code inserted inline at the site.  In fact, it probably wouldn't
take very much to allow it to implement jump labels.

And the pvops patching mechanism is certainly general to any *ops style
structure which is initialized once (or rarely) and could be optimised. 
LSM, perhaps?

 Jump_labels is basically an efficient way of doing conditionals
 predicated on rarely-changed booleans - so it's similar to pvops in that
 it is effectively a very ordinary C construct optimised by dynamic code
 patching.

 Another thing is that it can be changed at run-time...Can pvops be
 adjusted at run-time as opposed to just boot-time?

No.  In general that wouldn't really make sense, because once you've
booted on one hypervisor you're stuck there (though hypothetically you
could consider migration between machines with different hypervisors). 
In some cases it might make sense though, such as switching on PV
ticketlocks if the host system becomes overcommitted, but leaving the
native ticketlocks enabled if not.
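That late switch-on is what the jump-label side would allow; as a sketch,
using the static_key spelling of the API (the key name matches the unlock
snippet quoted elsewhere in the thread, the enable function is hypothetical):

#include <linux/jump_label.h>

/* false by default, so the native unlock path is used */
struct static_key paravirt_ticketlocks_enabled = STATIC_KEY_INIT_FALSE;

/* hypothetical late enable, called once overcommit is detected */
void pv_ticketlocks_switch_on(void)
{
	static_key_slow_inc(&paravirt_ticketlocks_enabled);
}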

J


Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks

2011-10-14 Thread Jeremy Fitzhardinge
On 10/14/2011 11:38 AM, H. Peter Anvin wrote:
 On 10/14/2011 11:35 AM, Jason Baron wrote:
 A nice featuer of jump labels, is that it allows the various branches
 (currently we only support 2), to be written in c code (as opposed to asm),
 which means you can write your code as you normally would and access any
 parameters as you normally would - hopefully, making the code pretty
 readable as well.

 I hope this better clarifies the use-cases for the various mechanisms.

 There is an important subcase which might be handy which would be to
 allow direct patching of call instructions instead of using indirect calls.

Right, that's how the pvops patching is primarily used.

J


Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks

2011-10-14 Thread Jeremy Fitzhardinge
On 10/14/2011 11:35 AM, Jason Baron wrote:
 On Fri, Oct 14, 2011 at 10:02:35AM -0700, Jeremy Fitzhardinge wrote:
 On 10/14/2011 07:17 AM, Jason Baron wrote:
 On Thu, Oct 13, 2011 at 09:44:48AM -0700, Jeremy Fitzhardinge wrote:
 pvops is basically a collection of ordinary _ops structures full of
 function pointers, but it has a layer of patching to help optimise it. 
 In the common case, this just replaces an indirect call with a direct
 one, but in some special cases it can inline code.  This is used for
 small, extremely performance-critical things like cli/sti, but it
 awkward to use in general because you have to specify the inlined code
 as a parameterless asm.

 I haven't looked at the pvops patching (probably should), but I was
 wondering if jump labels could be used for it? Or is there something
 that the pvops patching is doing that jump labels can't handle?
 Jump labels are essentially binary: you can use path A or path B.  pvops
 are multiway: there's no limit to the number of potential number of
 paravirtualized hypervisor implementations.  At the moment we have 4:
 native, Xen, KVM and lguest.

 Yes, they are binary using the static_branch() interface. But in
 general, the asm goto() construct, allows branching to any number of
 labels. I have implemented the boolean static_branch() b/c it seems like
 the most common interface for jump labels, but I imagine we will
 introduce new interfaces as time goes on. You could of course nest
 static_branch() calls, although I can't say I've tried it.

At the moment we're using pvops to optimise things like:

(*pv_mmu_ops.set_pte)(...);

To do that with some kind of multiway jump label thing, then that would
need to expand out to something akin to:

if (static_branch(is_xen))
xen_set_pte(...);
else if (static_branch(is_kvm))
kvm_set_pte(...);
else if (static_branch(is_lguest))
lguest_set_pte(...);
else
native_set_pte(...);

or something similar with an actual jump table.  But I don't see how it
offers much scope for improvement.

If there were something like:

STATIC_INDIRECT_CALL(pv_mmu_ops.set_pte)(...);

where the apparently indirect call is actually patched to be a direct
call, then that would offer a large subset of what we do with pvops.

However, to completely replace pvops patching, the static branch / jump
label mechanism would also need to work in assembler code, and be
capable of actually patching callsites with instructions rather than
just calls (sti/cli/pushf/popf being the most important).

We also keep track of the live registers at the callsite, and compare
that to what registers the target functions will clobber in order to
optimise the amount of register save/restore is needed.  And as a result
we have some pvops functions with non-standard calling conventions to
minimise save/restores on critical paths.

 We could have an interface that allowed static_branch() to specify an
 arbitrary number of no-ops such that the call-site itself could look any way
 we want, if we don't know the bias at compile time. This, of course
 means potentially greater than 1 no-op in the fast path. I assume the
 pvops can have greater than 1 no-op in the fast path. Or is there a
 better solution here?

See above.  But pvops patching is pretty well tuned for its job.

However, I definitely think it's worth investigating some way to reduce
the number of patching mechanisms, and if pvops patching doesn't stretch
static jumps in unnatural ways, then perhaps that's the way to go.

Thanks,
J


Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks

2011-10-14 Thread Jeremy Fitzhardinge
On 10/14/2011 11:37 AM, H. Peter Anvin wrote:
 On 10/14/2011 10:02 AM, Jeremy Fitzhardinge wrote:
 Jump labels are essentially binary: you can use path A or path B.  pvops
 are multiway: there's no limit to the number of potential number of
 paravirtualized hypervisor implementations.  At the moment we have 4:
 native, Xen, KVM and lguest.

 This isn't (or shouldn't be) really true... it should be possible to do
 an N-way jump label even if the current mechanism doesn't.

We probably don't want all those implementations (near) inline, so they
would end up being plain function calls anyway.

J


Re: [PATCH RFC V5 00/11] Paravirtualized ticketlocks

2011-10-13 Thread Jeremy Fitzhardinge
On 10/13/2011 03:54 AM, Peter Zijlstra wrote:
 On Wed, 2011-10-12 at 17:51 -0700, Jeremy Fitzhardinge wrote:
 This is all unnecessary complication if you're not using PV ticket
 locks; it also uses the jump-label machinery to use the standard
 add-based unlock in the non-PV case.

 if (TICKET_SLOWPATH_FLAG &&
     unlikely(static_branch(&paravirt_ticketlocks_enabled))) {
 	arch_spinlock_t prev;

 	prev = *lock;
 	add_smp(&lock->tickets.head, TICKET_LOCK_INC);

 	/* add_smp() is a full mb() */

 	if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
 		__ticket_unlock_slowpath(lock, prev);
 } else
 	__add(&lock->tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
 Not that I mind the jump_label usage, but didn't paravirt have an
 existing alternative() thingy to do things like this? Or is the
 alternative() stuff not flexible enough to express this?

Yeah, that's a good question.  There are three mechanisms with somewhat
overlapping concerns:

  * alternative()
  * pvops patching
  * jump_labels

Alternative() is for low-level instruction substitution, and really only
makes sense at the assembler level with one or two instructions.

pvops is basically a collection of ordinary _ops structures full of
function pointers, but it has a layer of patching to help optimise it. 
In the common case, this just replaces an indirect call with a direct
one, but in some special cases it can inline code.  This is used for
small, extremely performance-critical things like cli/sti, but it is
awkward to use in general because you have to specify the inlined code
as a parameterless asm.

Jump_labels is basically an efficient way of doing conditionals
predicated on rarely-changed booleans - so it's similar to pvops in that
it is effectively a very ordinary C construct optimised by dynamic code
patching.


So for _arch_spin_unlock(), what I'm trying to go for is that if you're
not using PV ticketlocks, then the unlock sequence is unchanged from
normal.  But also, even if you are using PV ticketlocks, I want the
fastpath to be inlined, with the call out to a special function only
happening on the slow path.  So the result is that if().  If the
static_branch is false, then the executed code sequence is:

nop5
addb $2, (lock)
ret

which is pretty much ideal.  If the static_branch is true, then it ends
up being:

jmp5 1f
[...]

1:  lock add $2, (lock)
test $1, (lock.tail)
jne slowpath
ret
slowpath:...

which is also pretty good, given all the other constraints.

While I could try to use inline patching to get a simple add for the non-PV
unlock case (it would be awkward without asm parameters), I wouldn't
be able to also get the PV unlock fastpath code to be (near) inline. 
Hence jump_label.

Thanks,
J


[PATCH RFC V5 02/11] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

The code size expands somewhat, and it's probably better to just call
a function rather than inline it.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/Kconfig |3 +++
 kernel/Kconfig.locks |2 +-
 2 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6a47bb2..1f03f82 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -585,6 +585,9 @@ config PARAVIRT_SPINLOCKS
 
  If you are unsure how to answer this question, answer N.
 
+config ARCH_NOINLINE_SPIN_UNLOCK
+   def_bool PARAVIRT_SPINLOCKS
+
 config PARAVIRT_CLOCK
bool
 
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 5068e2a..584637b 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -125,7 +125,7 @@ config INLINE_SPIN_LOCK_IRQSAVE
 ARCH_INLINE_SPIN_LOCK_IRQSAVE
 
 config INLINE_SPIN_UNLOCK
-	def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK)
+	def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK) && !ARCH_NOINLINE_SPIN_UNLOCK
 
 config INLINE_SPIN_UNLOCK_BH
 	def_bool !DEBUG_SPINLOCK && ARCH_INLINE_SPIN_UNLOCK_BH
-- 
1.7.6.4



[PATCH RFC V5 07/11] x86/pvticketlock: use callee-save for lock_spinning

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to using the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h   |2 +-
 arch/x86/include/asm/paravirt_types.h |2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |2 +-
 arch/x86/xen/spinlock.c   |3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 76cae7a..50281c7 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -752,7 +752,7 @@ static inline void __set_fixmap(unsigned /* enum 
fixed_addresses */ idx,
 
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+   PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
diff --git a/arch/x86/include/asm/paravirt_types.h 
b/arch/x86/include/asm/paravirt_types.h
index 005e24d..5e0c138 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -330,7 +330,7 @@ struct arch_spinlock;
 #include <asm/spinlock_types.h>
 
 struct pv_lock_ops {
-   void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+   struct paravirt_callee_save lock_spinning;
void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c 
b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-   .lock_spinning = paravirt_nop,
+   .lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 1e21c99..431d231 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -145,6 +145,7 @@ out:
 
spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
@@ -204,7 +205,7 @@ void __init xen_init_spinlocks(void)
return;
}
 
-   pv_lock_ops.lock_spinning = xen_lock_spinning;
+   pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
-- 
1.7.6.4



[PATCH RFC V5 11/11] xen: enable PV ticketlocks on HVM Xen

2011-10-12 Thread Jeremy Fitzhardinge
From: Stefano Stabellini stefano.stabell...@eu.citrix.com

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/smp.c |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 4dec905..2d01aeb 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -552,4 +552,5 @@ void __init xen_hvm_smp_init(void)
smp_ops.cpu_die = xen_hvm_cpu_die;
smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
smp_ops.send_call_func_single_ipi = 
xen_smp_send_call_function_single_ipi;
+   xen_init_spinlocks();
 }
-- 
1.7.6.4



[PATCH RFC V5 10/11] xen/pvticketlock: allow interrupts to be enabled while blocking

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.

If we can enable interrupts while waiting for the lock to become
available, and we take an interrupt before entering the poll,
and the handler takes a spinlock which ends up going into
the slow state (invalidating the per-cpu lock and want values),
then when the interrupt handler returns the event channel will
remain pending so the poll will return immediately, causing it to
return out to the main spinlock loop.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |   48 --
 1 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 0a552ec..fc506e6 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -106,11 +106,28 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
__ticket_t want)
 
start = spin_time_start();
 
-   /* Make sure interrupts are disabled to ensure that these
-  per-cpu values are not overwritten. */
+   /*
+* Make sure an interrupt handler can't upset things in a
+* partially setup state.
+*/
local_irq_save(flags);
 
+   /*
+* We don't really care if we're overwriting some other
+* (lock,want) pair, as that would mean that we're currently
+* in an interrupt context, and the outer context had
+* interrupts enabled.  That has already kicked the VCPU out
+* of xen_poll_irq(), so it will just return spuriously and
+* retry with newly setup (lock,want).
+*
+* The ordering protocol on this is that the lock pointer
+* may only be set non-NULL if the want ticket is correct.
+* If we're updating want, we must first clear lock.
+*/
+   w-lock = NULL;
+   smp_wmb();
w-want = want;
+   smp_wmb();
w-lock = lock;
 
/* This uses set_bit, which atomic and therefore a barrier */
@@ -124,21 +141,36 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
__ticket_t want)
/* Only check lock once pending cleared */
barrier();
 
-   /* Mark entry to slowpath before doing the pickup test to make
-  sure we don't deadlock with an unlocker. */
+   /*
+* Mark entry to slowpath before doing the pickup test to make
+* sure we don't deadlock with an unlocker.
+*/
__ticket_enter_slowpath(lock);
 
-   /* check again make sure it didn't become free while
-  we weren't looking  */
+   /*
+* check again make sure it didn't become free while
+* we weren't looking 
+*/
if (ACCESS_ONCE(lock-tickets.head) == want) {
ADD_STATS(taken_slow_pickup, 1);
goto out;
}
 
+   /* Allow interrupts while blocked */
+   local_irq_restore(flags);
+
+   /*
+* If an interrupt happens here, it will leave the wakeup irq
+* pending, which will cause xen_poll_irq() to return
+* immediately.
+*/
+
/* Block until irq becomes pending (or perhaps a spurious wakeup) */
xen_poll_irq(irq);
ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
+   local_irq_save(flags);
+
kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
@@ -160,7 +192,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, 
__ticket_t next)
for_each_cpu(cpu, waiting_cpus) {
const struct xen_lock_waiting *w = per_cpu(lock_waiting, cpu);
 
-   if (w-lock == lock  w-want == next) {
+   /* Make sure we read lock before want */
+   if (ACCESS_ONCE(w-lock) == lock 
+   ACCESS_ONCE(w-want) == next) {
ADD_STATS(released_slow_kicked, 1);
xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
break;
-- 
1.7.6.4



[PATCH RFC V5 09/11] x86/ticketlock: add slowpath logic

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks.  The flags are set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie,
no contention).

In the specific implementation of lock_spinning(), make sure to set
the slowpath flags on the lock just before blocking.  We must do
this before the last-chance pickup test to prevent a deadlock
with the unlocker:

        Unlocker                        Locker
                                        test for lock pickup
                                            -> fail
        unlock
        test slowpath
            -> false
                                        set slowpath flags
                                        block

Whereas this works in any ordering:

        Unlocker                        Locker
                                        set slowpath flags
                                        test for lock pickup
                                            -> fail
                                        block
        unlock
        test slowpath
            -> true, kick

If the unlocker finds that the lock has the slowpath flag set but it is
actually uncontended (ie, head == tail, so nobody is waiting), then it
clears the slowpath flag.

The unlock code uses a locked add to update the head counter.  This also
acts as a full memory barrier so that it's safe to subsequently
read back the slowflag state, knowing that the updated lock is visible
to the other CPUs.  If it were an unlocked add, then the flag read may
just be forwarded from the store buffer before it was visible to the other
CPUs, which could result in a deadlock.
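
Putting the two sides together, the intended shape is roughly the
following (illustration only, not the patch text; add_smp() stands in
for whatever locked-add helper is used):

        /* Locker slow path: set the flag *before* the last-chance pickup */
        __ticket_enter_slowpath(lock);
        if (ACCESS_ONCE(lock->tickets.head) == want)
                goto out;               /* picked the lock up after all */
        /* ... otherwise block until kicked ... */

        /* Unlocker: the locked add is also a full barrier, so the flag
         * read below cannot be satisfied early from the store buffer */
        __ticket_t next = lock->tickets.head + TICKET_LOCK_INC;

        add_smp(&lock->tickets.head, TICKET_LOCK_INC);
        if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
                __ticket_unlock_kick(lock, next);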

Unfortunately this means we need to do a locked instruction when
unlocking with PV ticketlocks.  However, if PV ticketlocks are not
enabled, then the old non-locked add is the only unlocking code.

Note: this code relies on gcc making sure that unlikely() code is out of
line of the fastpath, which only happens when OPTIMIZE_SIZE=n.  If it
doesn't, the generated code isn't too bad, but it's definitely suboptimal.

Thanks to Srivatsa Vaddagiri for providing a bugfix to the original
version of this change, which has been folded in.
Thanks to Stephan Diestelhorst for commenting on some code which relied
on an inaccurate reading of the x86 memory ordering rules.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
Signed-off-by: Srivatsa Vaddagiri va...@linux.vnet.ibm.com
Cc: Stephan Diestelhorst stephan.diestelho...@amd.com
---
 arch/x86/include/asm/paravirt.h   |2 +-
 arch/x86/include/asm/spinlock.h   |   79 
 arch/x86/include/asm/spinlock_types.h |2 +
 arch/x86/kernel/paravirt-spinlocks.c  |3 +
 arch/x86/xen/spinlock.c   |6 +++
 5 files changed, 71 insertions(+), 21 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 50281c7..13b3d8b 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -755,7 +755,7 @@ static __always_inline void __ticket_lock_spinning(struct 
arch_spinlock *lock, _
PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index dd155f7..8e0b9cf 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -1,11 +1,14 @@
 #ifndef _ASM_X86_SPINLOCK_H
 #define _ASM_X86_SPINLOCK_H
 
+#include linux/jump_label.h
 #include linux/atomic.h
 #include asm/page.h
 #include asm/processor.h
 #include linux/compiler.h
 #include asm/paravirt.h
+#include asm/bitops.h
+
 /*
  * Your basic SMP spinlocks, allowing only a single CPU anywhere
  *
@@ -40,29 +43,27 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD (1  11)
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
+extern struct jump_label_key paravirt_ticketlocks_enabled;
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
 
-static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
+static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
 {
+   set_bit(0, (volatile unsigned long *)lock-tickets.tail);
 }
 
-static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
+#else  /* !CONFIG_PARAVIRT_SPINLOCKS */
+static __always_inline void __ticket_lock_spinning(arch_spinlock_t *lock, 
__ticket_t ticket)
 {
 }
 
-#endif /* CONFIG_PARAVIRT_SPINLOCKS */
-
-
-/* 
- * If a spinlock has someone waiting on it, then kick the appropriate
- * waiting cpu.
- */
-static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t next)
+static inline

[PATCH RFC V5 08/11] x86/pvticketlock: when paravirtualizing ticket locks, increment by 2

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store a "is in slowpath state" bit.  This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU systems are probably
specially built for the hardware rather than a generic distro
kernel.
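
For illustration only (the slowpath flag itself is defined by the
companion slowpath patch), the even increment leaves bit 0 of the tail
free:

        /* Ticket values are now always even, so bit 0 of tail never
         * carries ticket information and can hold the "in slowpath"
         * flag (TICKET_SLOWPATH_FLAG == 1). */
        register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };

        inc = xadd(&lock->tickets, inc);
        inc.tail &= ~TICKET_SLOWPATH_FLAG;      /* strip the flag before comparing tickets */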

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h   |   10 +-
 arch/x86/include/asm/spinlock_types.h |   10 +-
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index f0d6a59..dd155f7 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -78,7 +78,7 @@ static __always_inline void __ticket_unlock_kick(struct 
arch_spinlock *lock, __t
  */
 static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
-   register struct __raw_tickets inc = { .tail = 1 };
+   register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
inc = xadd(lock-tickets, inc);
 
@@ -104,7 +104,7 @@ static __always_inline int 
arch_spin_trylock(arch_spinlock_t *lock)
if (old.tickets.head != old.tickets.tail)
return 0;
 
-   new.head_tail = old.head_tail + (1  TICKET_SHIFT);
+   new.head_tail = old.head_tail + (TICKET_LOCK_INC  TICKET_SHIFT);
 
/* cmpxchg is a full barrier, so nothing can move before it */
return cmpxchg(lock-head_tail, old.head_tail, new.head_tail) == 
old.head_tail;
@@ -112,9 +112,9 @@ static __always_inline int 
arch_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-   __ticket_t next = lock-tickets.head + 1;
+   __ticket_t next = lock-tickets.head + TICKET_LOCK_INC;
 
-   __add(lock-tickets.head, 1, UNLOCK_LOCK_PREFIX);
+   __add(lock-tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
__ticket_unlock_kick(lock, next);
 }
 
@@ -129,7 +129,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t 
*lock)
 {
struct __raw_tickets tmp = ACCESS_ONCE(lock-tickets);
 
-   return ((tmp.tail - tmp.head)  TICKET_MASK)  1;
+   return ((tmp.tail - tmp.head)  TICKET_MASK)  TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended arch_spin_is_contended
 
diff --git a/arch/x86/include/asm/spinlock_types.h 
b/arch/x86/include/asm/spinlock_types.h
index dbe223d..aa9a205 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -3,7 +3,13 @@
 
 #include linux/types.h
 
-#if (CONFIG_NR_CPUS  256)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define __TICKET_LOCK_INC  2
+#else
+#define __TICKET_LOCK_INC  1
+#endif
+
+#if (CONFIG_NR_CPUS  (256 / __TICKET_LOCK_INC))
 typedef u8  __ticket_t;
 typedef u16 __ticketpair_t;
 #else
@@ -11,6 +17,8 @@ typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
 #endif
 
+#define TICKET_LOCK_INC((__ticket_t)__TICKET_LOCK_INC)
+
 #define TICKET_SHIFT   (sizeof(__ticket_t) * 8)
 #define TICKET_MASK((__ticket_t)((1  TICKET_SHIFT) - 1))
 
-- 
1.7.6.4



[PATCH RFC V5 05/11] xen/pvticketlock: Xen implementation for PV ticket locks

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which wants this lock with the next ticket, if any.  If found,
it kicks it by making its event channel pending, which wakes it up.

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.
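
In pseudo-code, the protocol implemented below is roughly the
following (illustration only; stats, IRQ state and cleanup are
omitted):

        /* locker slow path, per cpu */
        w->want = want;                         /* ticket we are waiting for */
        w->lock = lock;                         /* lock we are waiting on */
        cpumask_set_cpu(cpu, &waiting_cpus);
        xen_poll_irq(irq);                      /* block until our event channel is kicked */
        cpumask_clear_cpu(cpu, &waiting_cpus);
        w->lock = NULL;

        /* unlocker */
        for_each_cpu(cpu, &waiting_cpus) {
                const struct xen_lock_waiting *waiter = &per_cpu(lock_waiting, cpu);

                if (waiter->lock == lock && waiter->want == next) {
                        xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
                        break;
                }
        }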

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |  287 +++
 1 files changed, 43 insertions(+), 244 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 23af06a..f6133c5 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -19,32 +19,21 @@
 #ifdef CONFIG_XEN_DEBUG_FS
 static struct xen_spinlock_stats
 {
-   u64 taken;
u32 taken_slow;
-   u32 taken_slow_nested;
u32 taken_slow_pickup;
u32 taken_slow_spurious;
-   u32 taken_slow_irqenable;
 
-   u64 released;
u32 released_slow;
u32 released_slow_kicked;
 
 #define HISTO_BUCKETS  30
-   u32 histo_spin_total[HISTO_BUCKETS+1];
-   u32 histo_spin_spinning[HISTO_BUCKETS+1];
u32 histo_spin_blocked[HISTO_BUCKETS+1];
 
-   u64 time_total;
-   u64 time_spinning;
u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1  10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
if (unlikely(zero_stats)) {
@@ -73,22 +62,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-   u32 delta = xen_clocksource_read() - start;
-
-   __spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-   spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-   u32 delta = xen_clocksource_read() - start;
-
-   __spin_time_accum(delta, spinlock_stats.histo_spin_total);
-   spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
u32 delta = xen_clocksource_read() - start;
@@ -105,214 +78,84 @@ static inline u64 spin_time_start(void)
return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
-struct xen_spinlock {
-   unsigned char lock; /* 0 - free; 1 - locked */
-   unsigned short spinners;/* count of waiting cpus */
+struct xen_lock_waiting {
+   struct arch_spinlock *lock;
+   __ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-   return xl-lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-   /* Not strictly true; this is only the count of contended
-  lock-takers entering the slow path. */
-   return xl-spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-   u8 old = 1;
-
-   asm(xchgb %b0,%1
-   : +q (old), +m (xl-lock) : : memory);
-
-   return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-   struct xen_spinlock *prev;
-
-   prev = __this_cpu_read(lock_spinners);
-   __this_cpu_write(lock_spinners, xl);
-
-   wmb();  /* set lock of interest before count */
-
-   asm(LOCK_PREFIX  incw %0
-   : +m (xl-spinners) : : memory);
-
-   return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct 
xen_spinlock *prev)
-{
-   asm(LOCK_PREFIX  decw %0
-   : +m (xl-spinners) : : memory);
-   wmb

[PATCH RFC V5 04/11] xen: defer spinlock setup until boot CPU setup

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

There's no need to do it at very early init, and doing it there
makes it impossible to use the jump_label machinery.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/smp.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index e79dbb9..4dec905 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -200,6 +200,7 @@ static void __init xen_smp_prepare_boot_cpu(void)
 
xen_filter_cpu_maps();
xen_setup_vcpu_info_placement();
+   xen_init_spinlocks();
 }
 
 static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
@@ -513,7 +514,6 @@ void __init xen_smp_init(void)
 {
smp_ops = xen_smp_ops;
xen_fill_possible_map();
-   xen_init_spinlocks();
 }
 
 static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
-- 
1.7.6.4



[PATCH RFC V5 00/11] Paravirtualized ticketlocks

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

[ Changes since last posting: 
  - Use "lock add" for unlock operation rather than "lock xadd"; it is
    equivalent to "add; mfence", but more efficient than both "lock
    xadd" and "mfence".

  I think this version is ready for submission.
]

NOTE: this series is available in:
  git://github.com/jsgf/linux-xen.git upstream/pvticketlock-slowflag
and is based on the previously posted ticketlock cleanup series in
  git://github.com/jsgf/linux-xen.git upstream/ticketlock-cleanup

This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism.

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct next
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keeps the existing ticketlock implementation
(fastpath) as-is, but adds a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into slowpath state.

- When releasing a lock, if it is in "slowpath" state, then call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag.

The slowpath state is stored in the LSB of the lock tail
ticket.  This has the effect of reducing the max number of CPUs by
half (so, a small ticket can deal with 128 CPUs, and large ticket
32768).

This series provides a Xen implementation, but it should be
straightforward to add a KVM implementation as well.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in slowpath
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
        inc = xadd(&lock->tickets, inc);
        inc.tail &= ~TICKET_SLOWPATH_FLAG;

        if (likely(inc.head == inc.tail))
                goto out;

        for (;;) {
                unsigned count = SPIN_THRESHOLD;

                do {
                        if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
                                goto out;
                        cpu_relax();
                } while (--count);
                __ticket_lock_spinning(lock, inc.tail);
        }
out:    barrier();

which results in:
        push   %rbp
        mov    %rsp,%rbp

        mov    $0x200,%eax
        lock xadd %ax,(%rdi)
        movzbl %ah,%edx
        cmp    %al,%dl
        jne    1f       # Slowpath if lock in contention

        pop    %rbp
        retq

        ### SLOWPATH START
1:      and    $-2,%edx
        movzbl %dl,%esi

2:      mov    $0x800,%eax
        jmp    4f

3:      pause
        sub    $0x1,%eax
        je     5f

4:      movzbl (%rdi),%ecx
        cmp    %cl,%dl
        jne    3b

        pop    %rbp
        retq

5:      callq  *__ticket_lock_spinning
        jmp    2b
        ### SLOWPATH END

with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
the fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:

        push   %rbp
        mov    %rsp,%rbp

        mov    $0x100,%eax
        lock xadd %ax,(%rdi)
        movzbl %ah,%edx
        cmp    %al,%dl
        jne    1f

        pop    %rbp
        retq

        ### SLOWPATH START
1:      pause
        movzbl (%rdi),%eax
        cmp    %dl,%al
        jne    1b

        pop    %rbp
        retq
        ### SLOWPATH END

The unlock code is complicated by the need to both add to the lock's
head and fetch the slowpath flag from tail.  This version of the
patch uses a locked add to do this, followed by a test to see if the
slowflag is set.  The lock prefix acts as a full memory barrier, so we
can be sure that other CPUs will have seen the unlock before we read
the flag (without the barrier the read could be fetched from the
store queue before it hits memory, which could result in a deadlock).

This is all unnecessary

[PATCH RFC V5 01/11] x86/spinlock: replace pv spinlocks with pv ticketlocks

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow patch (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_spin_lock which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_spin_unlock, which, on releasing a contended lock
   (there are more cpus with tail tickets), looks to see if the next
   cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h   |   30 ++-
 arch/x86/include/asm/paravirt_types.h |   10 ++
 arch/x86/include/asm/spinlock.h   |   50 ++--
 arch/x86/include/asm/spinlock_types.h |4 --
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +
 arch/x86/xen/spinlock.c   |7 -
 6 files changed, 56 insertions(+), 60 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index a7d2db9..76cae7a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -750,36 +750,14 @@ static inline void __set_fixmap(unsigned /* enum 
fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP)  defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+   PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-   PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
- unsigned long flags)
-{
-   PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-   return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-   PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+   PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h 
b/arch/x86/include/asm/paravirt_types.h
index 8e8b9a4..005e24d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -327,13 +327,11 @@ struct pv_mmu_ops {
 };
 
 struct arch_spinlock;
+#include asm/spinlock_types.h
+
 struct pv_lock_ops {
-   int (*spin_is_locked)(struct arch_spinlock *lock);
-   int (*spin_is_contended)(struct arch_spinlock *lock);
-   void (*spin_lock)(struct arch_spinlock *lock);
-   void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long 
flags);
-   int (*spin_trylock)(struct arch_spinlock *lock);
-   void (*spin_unlock)(struct arch_spinlock *lock);
+   void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+   void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index a82c2bf..5efd2f9 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -37,6 +37,32 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD (1  11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket

[PATCH RFC V5 03/11] x86/ticketlock: collapse a layer of functions

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h |   35 +--
 1 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 5efd2f9..f0d6a59 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -76,7 +76,7 @@ static __always_inline void __ticket_unlock_kick(struct 
arch_spinlock *lock, __t
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
register struct __raw_tickets inc = { .tail = 1 };
 
@@ -96,7 +96,7 @@ static __always_inline void __ticket_spin_lock(struct 
arch_spinlock *lock)
 out:   barrier();  /* make sure nothing creeps before the lock is 
taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
arch_spinlock_t old, new;
 
@@ -110,7 +110,7 @@ static __always_inline int 
__ticket_spin_trylock(arch_spinlock_t *lock)
return cmpxchg(lock-head_tail, old.head_tail, new.head_tail) == 
old.head_tail;
 }
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
__ticket_t next = lock-tickets.head + 1;
 
@@ -118,46 +118,21 @@ static __always_inline void 
__ticket_spin_unlock(arch_spinlock_t *lock)
__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
struct __raw_tickets tmp = ACCESS_ONCE(lock-tickets);
 
return !!(tmp.tail ^ tmp.head);
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
struct __raw_tickets tmp = ACCESS_ONCE(lock-tickets);
 
return ((tmp.tail - tmp.head)  TICKET_MASK)  1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-   return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-   return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-   __ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-   return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-   __ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
  unsigned long flags)
 {
-- 
1.7.6.4



[PATCH RFC V5 06/11] xen/pvticketlocks: add xen_nopvspin parameter to disable xen pv ticketlocks

2011-10-12 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |   14 ++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index f6133c5..1e21c99 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -195,12 +195,26 @@ void xen_uninit_lock_cpu(int cpu)
unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
 }
 
+static bool xen_pvspin __initdata = true;
+
 void __init xen_init_spinlocks(void)
 {
+   if (!xen_pvspin) {
+   printk(KERN_DEBUG xen: PV spinlocks disabled\n);
+   return;
+   }
+
pv_lock_ops.lock_spinning = xen_lock_spinning;
pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
+static __init int xen_parse_nopvspin(char *arg)
+{
+   xen_pvspin = false;
+   return 0;
+}
+early_param(xen_nopvspin, xen_parse_nopvspin);
+
 #ifdef CONFIG_XEN_DEBUG_FS
 
 static struct dentry *d_spin_debug;
-- 
1.7.6.4



Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-10 Thread Jeremy Fitzhardinge
On 10/10/2011 07:01 AM, Stephan Diestelhorst wrote:
 On Monday 10 October 2011, 07:00:50 Stephan Diestelhorst wrote:
 On Thursday 06 October 2011, 13:40:01 Jeremy Fitzhardinge wrote:
 On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
 On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
 Which certainly should *work*, but from a conceptual standpoint, isn't
 it just *much* nicer to say we actually know *exactly* what the upper
 bits were.
 Well, we really do NOT want atomicity here. What we really rather want
 is sequentiality: free the lock, make the update visible, and THEN
 check if someone has gone sleeping on it.

 Atomicity only conveniently enforces that the three do not happen in a
 different order (with the store becoming visible after the checking
 load).

 This does not have to be atomic, since spurious wakeups are not a
 problem, in particular not with the FIFO-ness of ticket locks.

 For that the fence, additional atomic etc. would be IMHO much cleaner
 than the crazy overflow logic.
 All things being equal I'd prefer lock-xadd just because it's easier to
 analyze the concurrency for, crazy overflow tests or no.  But if
 add+mfence turned out to be a performance win, then that would obviously
 tip the scales.

 However, it looks like locked xadd also has better performance:  on
 my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
 than locked xadd, so that pretty much settles it unless you think
 there'd be a dramatic difference on an AMD system.
 Indeed, the fences are usually slower than locked RMWs, in particular,
 if you do not need to add an instruction. I originally missed that
 amazing stunt the GCC pulled off with replacing the branch with carry
 flag magic. It seems that two twisted minds have found each other
 here :)

 One of my concerns was adding a branch in here... so that is settled,
 and if everybody else feels like this is easier to reason about...
 go ahead :) (I'll keep my itch to myself then.)
 Just that I can't... if performance is a concern, adding the LOCK
 prefix to the addb outperforms the xadd significantly:

Hm, yes.  So using the lock prefix on add instead of the mfence?  Hm.
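
For reference, the three unlock sequences being compared look roughly
like this (assuming a byte-sized head at (%rdi) and TICKET_LOCK_INC == 2;
the exact code differs):

        # (a) plain add followed by a fence
        addb   $0x2,(%rdi)
        mfence

        # (b) locked xadd: the old head/tail comes back in %ax, so the
        #     slowpath flag is fetched by the same instruction
        mov    $0x2,%ax
        lock xadd %ax,(%rdi)

        # (c) locked add: full barrier included, flag re-read separately
        lock addb $0x2,(%rdi)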

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-10 Thread Jeremy Fitzhardinge
On 10/10/2011 12:32 AM, Ingo Molnar wrote:
 * Jeremy Fitzhardinge jer...@goop.org wrote:

 On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
 However, it looks like locked xadd also has better performance:  on
 my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
 than locked xadd, so that pretty much settles it unless you think
 there'd be a dramatic difference on an AMD system.
 Konrad measures add+mfence is about 65% slower on AMD Phenom as well.
 xadd also results in smaller/tighter code, right?

Not particularly, mostly because of the overflow-into-the-high-part
compensation.  But it's only a couple of extra instructions, and no
conditionals, so I don't think it would have any concrete effect.

But, as Stephan points out, perhaps locked add is preferable to locked
xadd, since it also has the same barrier as mfence but has
(significantly!) better performance than either mfence or locked xadd...

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-06 Thread Jeremy Fitzhardinge
On 10/06/2011 07:04 AM, Stephan Diestelhorst wrote:
 On Wednesday 28 September 2011, 14:49:56 Linus Torvalds wrote:
 Which certainly should *work*, but from a conceptual standpoint, isn't
 it just *much* nicer to say we actually know *exactly* what the upper
 bits were.
 Well, we really do NOT want atomicity here. What we really rather want
 is sequentiality: free the lock, make the update visible, and THEN
 check if someone has gone sleeping on it.

 Atomicity only conveniently enforces that the three do not happen in a
 different order (with the store becoming visible after the checking
 load).

 This does not have to be atomic, since spurious wakeups are not a
 problem, in particular not with the FIFO-ness of ticket locks.

 For that the fence, additional atomic etc. would be IMHO much cleaner
 than the crazy overflow logic.

All things being equal I'd prefer lock-xadd just because it's easier to
analyze the concurrency for, crazy overflow tests or no.  But if
add+mfence turned out to be a performance win, then that would obviously
tip the scales.

However, it looks like locked xadd also has better performance:  on
my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
than locked xadd, so that pretty much settles it unless you think
there'd be a dramatic difference on an AMD system.

(On Nehalem it was a much less dramatic 2% difference, but still in favour
of locked xadd.)

This is with a dumb-as-rocks "run it in a loop with time" benchmark, but
the results are not very subtle.

J
#include <stdio.h>

struct {
        unsigned char flag;
        unsigned char val;
} l;

int main(int argc, char **argv)
{
        int i;

        for (i = 0; i < 1; i++) {
                l.val += 2;
                asm volatile("mfence" : : : "memory");
                if (l.flag)
                        break;
                asm volatile("" : : : "memory");
        }

        return 0;
}
#include <stdio.h>

union {
        struct {
                unsigned char val;
                unsigned char flag;
        };
        unsigned short lock;
} l = { 0,0 };

int main(int argc, char **argv)
{
        int i;

        for (i = 0; i < 1; i++) {
                unsigned short inc = 2;
                if (l.val >= (0x100 - 2))
                        inc += -1 << 8;
                asm volatile("lock; xadd %1,%0" : "+m" (l.lock), "+r" (inc) : );
                if (inc & 0x100)
                        break;
                asm volatile("" : : : "memory");
        }

        return 0;
}
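
A minimal way to reproduce the comparison, assuming the two programs
above are saved as add-mfence.c and lock-xadd.c (the file names, flags
and loop count are illustrative; bump the loop bound to something large
enough to time):

        gcc -O2 -o add-mfence add-mfence.c
        gcc -O2 -o lock-xadd lock-xadd.c
        time ./add-mfence
        time ./lock-xadd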


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-10-06 Thread Jeremy Fitzhardinge
On 10/06/2011 10:40 AM, Jeremy Fitzhardinge wrote:
 However, it looks like locked xadd also has better performance:  on
 my Sandybridge laptop (2 cores, 4 threads), the add+mfence is 20% slower
 than locked xadd, so that pretty much settles it unless you think
 there'd be a dramatic difference on an AMD system.

Konrad measures add+mfence is about 65% slower on AMD Phenom as well.

J


[PATCH RFC V4 10/11] xen/pvticketlock: allow interrupts to be enabled while blocking

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.

If we can enable interrupts while waiting for the lock to become
available, and we take an interrupt before entering the poll,
and the handler takes a spinlock which ends up going into
the slow state (invalidating the per-cpu lock and want values),
then when the interrupt handler returns the event channel will
remain pending so the poll will return immediately, causing it to
return out to the main spinlock loop.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |   48 --
 1 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 0a552ec..fc506e6 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -106,11 +106,28 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
__ticket_t want)
 
start = spin_time_start();
 
-   /* Make sure interrupts are disabled to ensure that these
-  per-cpu values are not overwritten. */
+   /*
+* Make sure an interrupt handler can't upset things in a
+* partially setup state.
+*/
local_irq_save(flags);
 
+   /*
+* We don't really care if we're overwriting some other
+* (lock,want) pair, as that would mean that we're currently
+* in an interrupt context, and the outer context had
+* interrupts enabled.  That has already kicked the VCPU out
+* of xen_poll_irq(), so it will just return spuriously and
+* retry with newly setup (lock,want).
+*
+* The ordering protocol on this is that the lock pointer
+* may only be set non-NULL if the want ticket is correct.
+* If we're updating want, we must first clear lock.
+*/
+   w-lock = NULL;
+   smp_wmb();
w-want = want;
+   smp_wmb();
w-lock = lock;
 
/* This uses set_bit, which atomic and therefore a barrier */
@@ -124,21 +141,36 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
__ticket_t want)
/* Only check lock once pending cleared */
barrier();
 
-   /* Mark entry to slowpath before doing the pickup test to make
-  sure we don't deadlock with an unlocker. */
+   /*
+* Mark entry to slowpath before doing the pickup test to make
+* sure we don't deadlock with an unlocker.
+*/
__ticket_enter_slowpath(lock);
 
-   /* check again make sure it didn't become free while
-  we weren't looking  */
+   /*
+* check again make sure it didn't become free while
+* we weren't looking 
+*/
if (ACCESS_ONCE(lock-tickets.head) == want) {
ADD_STATS(taken_slow_pickup, 1);
goto out;
}
 
+   /* Allow interrupts while blocked */
+   local_irq_restore(flags);
+
+   /*
+* If an interrupt happens here, it will leave the wakeup irq
+* pending, which will cause xen_poll_irq() to return
+* immediately.
+*/
+
/* Block until irq becomes pending (or perhaps a spurious wakeup) */
xen_poll_irq(irq);
ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
+   local_irq_save(flags);
+
kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
@@ -160,7 +192,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, 
__ticket_t next)
for_each_cpu(cpu, waiting_cpus) {
const struct xen_lock_waiting *w = per_cpu(lock_waiting, cpu);
 
-   if (w-lock == lock  w-want == next) {
+   /* Make sure we read lock before want */
+   if (ACCESS_ONCE(w-lock) == lock 
+   ACCESS_ONCE(w-want) == next) {
ADD_STATS(released_slow_kicked, 1);
xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
break;
-- 
1.7.6.4



[PATCH RFC V4 11/11] xen: enable PV ticketlocks on HVM Xen

2011-10-04 Thread Jeremy Fitzhardinge
From: Stefano Stabellini stefano.stabell...@eu.citrix.com

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/smp.c |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index 4dec905..2d01aeb 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -552,4 +552,5 @@ void __init xen_hvm_smp_init(void)
smp_ops.cpu_die = xen_hvm_cpu_die;
smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
smp_ops.send_call_func_single_ipi = 
xen_smp_send_call_function_single_ipi;
+   xen_init_spinlocks();
 }
-- 
1.7.6.4



[PATCH RFC V4 07/11] x86/pvticketlock: use callee-save for lock_spinning

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to using the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h   |2 +-
 arch/x86/include/asm/paravirt_types.h |2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |2 +-
 arch/x86/xen/spinlock.c   |3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 76cae7a..50281c7 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -752,7 +752,7 @@ static inline void __set_fixmap(unsigned /* enum 
fixed_addresses */ idx,
 
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+   PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
diff --git a/arch/x86/include/asm/paravirt_types.h 
b/arch/x86/include/asm/paravirt_types.h
index 005e24d..5e0c138 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -330,7 +330,7 @@ struct arch_spinlock;
 #include asm/spinlock_types.h
 
 struct pv_lock_ops {
-   void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+   struct paravirt_callee_save lock_spinning;
void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c 
b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-   .lock_spinning = paravirt_nop,
+   .lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 1e21c99..431d231 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -145,6 +145,7 @@ out:
 
spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
@@ -204,7 +205,7 @@ void __init xen_init_spinlocks(void)
return;
}
 
-   pv_lock_ops.lock_spinning = xen_lock_spinning;
+   pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
-- 
1.7.6.4



[PATCH RFC V4 09/11] x86/ticketlock: add slowpath logic

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks.  The flags are set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie,
no contention).

In the specific implementation of lock_spinning(), make sure to set
the slowpath flags on the lock just before blocking.  We must do
this before the last-chance pickup test to prevent a deadlock
with the unlocker:

        Unlocker                        Locker
                                        test for lock pickup
                                            -> fail
        unlock
        test slowpath
            -> false
                                        set slowpath flags
                                        block

Whereas this works in any ordering:

        Unlocker                        Locker
                                        set slowpath flags
                                        test for lock pickup
                                            -> fail
                                        block
        unlock
        test slowpath
            -> true, kick

If the unlocker finds that the lock has the slowpath flag set but it is
actually uncontended (ie, head == tail, so nobody is waiting), then it
clears the slowpath flag.

The unlock code uses a locked xadd to atomically update the head counter
and fetch the tail to read the slowpath flag.  Since head is in the
least-significant position, there's a possibility that it could overflow
into tail.  If this is about to happen, then we can also add -1 to
tail to compensate for the carry overflow.  This is safe because while
we hold the lock, we own head, so we can inspect it without risk of
it changing.

(Unfortunately this means we need to do a locked instruction when
unlocking with PV ticketlocks.  However, if PV ticketlocks are not
enabled, then the old non-locked add is the only unlocking code.)
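
A sketch of that compensation (illustration only; prev and
__ticket_unlock_slowpath() are stand-in names rather than necessarily
what the patch uses):

        arch_spinlock_t prev;
        __ticketpair_t inc = TICKET_LOCK_INC;

        /* If head is about to wrap, pre-subtract 1 from tail so that
         * the carry out of head leaves tail unchanged.  This is safe
         * because we hold the lock and therefore own head. */
        if (lock->tickets.head >= (__ticket_t)(0 - TICKET_LOCK_INC))
                inc += (__ticketpair_t)-1 << TICKET_SHIFT;

        prev.head_tail = xadd(&lock->head_tail, inc);

        if (unlikely(prev.tickets.tail & TICKET_SLOWPATH_FLAG))
                __ticket_unlock_slowpath(lock, prev);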

Note: this code relies on gcc making sure that unlikely() code is out of
line of the fastpath, which only happens when OPTIMIZE_SIZE=n.  If it
doesn't, the generated code isn't too bad, but it's definitely suboptimal.

Thanks to Srivatsa Vaddagiri for providing a bugfix to the original
version of this change, which has been folded in.
Thanks to Stephan Diestelhorst for commenting on some code which relied
on an inaccurate reading of the x86 memory ordering rules.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
Signed-off-by: Srivatsa Vaddagiri va...@linux.vnet.ibm.com
Cc: Stephan Diestelhorst stephan.diestelho...@amd.com
---
 arch/x86/include/asm/paravirt.h   |2 +-
 arch/x86/include/asm/spinlock.h   |   90 +
 arch/x86/include/asm/spinlock_types.h |2 +
 arch/x86/kernel/paravirt-spinlocks.c  |3 +
 arch/x86/xen/spinlock.c   |6 ++
 5 files changed, 81 insertions(+), 22 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 50281c7..13b3d8b 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -755,7 +755,7 @@ static __always_inline void __ticket_lock_spinning(struct 
arch_spinlock *lock, _
PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index dd155f7..7dbe028 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -1,11 +1,14 @@
 #ifndef _ASM_X86_SPINLOCK_H
 #define _ASM_X86_SPINLOCK_H
 
+#include linux/jump_label.h
 #include linux/atomic.h
 #include asm/page.h
 #include asm/processor.h
 #include linux/compiler.h
 #include asm/paravirt.h
+#include asm/bitops.h
+
 /*
  * Your basic SMP spinlocks, allowing only a single CPU anywhere
  *
@@ -40,29 +43,27 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD (1  11)
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+extern struct jump_label_key paravirt_ticketlocks_enabled;
 
-static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
+static inline void __ticket_enter_slowpath(arch_spinlock_t *lock)
 {
+   set_bit(0, (volatile unsigned long *)lock-tickets.tail);
 }
 
-static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
+#else  /* !CONFIG_PARAVIRT_SPINLOCKS */
+static __always_inline void __ticket_lock_spinning(arch_spinlock_t *lock, 
__ticket_t ticket)
 {
 }
 
-#endif /* CONFIG_PARAVIRT_SPINLOCKS */
-
-
-/* 
- * If a spinlock has someone waiting on it, then kick the appropriate
- * waiting cpu.
- */
-static __always_inline void __ticket_unlock_kick(struct

[PATCH RFC V4 08/11] x86/pvticketlock: when paravirtualizing ticket locks, increment by 2

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store a "is in slowpath state" bit.  This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU systems are probably
specially built for the hardware rather than a generic distro
kernel.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h   |   10 +-
 arch/x86/include/asm/spinlock_types.h |   10 +-
 2 files changed, 14 insertions(+), 6 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index f0d6a59..dd155f7 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -78,7 +78,7 @@ static __always_inline void __ticket_unlock_kick(struct 
arch_spinlock *lock, __t
  */
 static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
-   register struct __raw_tickets inc = { .tail = 1 };
+   register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
inc = xadd(lock-tickets, inc);
 
@@ -104,7 +104,7 @@ static __always_inline int 
arch_spin_trylock(arch_spinlock_t *lock)
if (old.tickets.head != old.tickets.tail)
return 0;
 
-   new.head_tail = old.head_tail + (1  TICKET_SHIFT);
+   new.head_tail = old.head_tail + (TICKET_LOCK_INC  TICKET_SHIFT);
 
/* cmpxchg is a full barrier, so nothing can move before it */
return cmpxchg(lock-head_tail, old.head_tail, new.head_tail) == 
old.head_tail;
@@ -112,9 +112,9 @@ static __always_inline int 
arch_spin_trylock(arch_spinlock_t *lock)
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-   __ticket_t next = lock-tickets.head + 1;
+   __ticket_t next = lock-tickets.head + TICKET_LOCK_INC;
 
-   __add(lock-tickets.head, 1, UNLOCK_LOCK_PREFIX);
+   __add(lock-tickets.head, TICKET_LOCK_INC, UNLOCK_LOCK_PREFIX);
__ticket_unlock_kick(lock, next);
 }
 
@@ -129,7 +129,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t 
*lock)
 {
struct __raw_tickets tmp = ACCESS_ONCE(lock-tickets);
 
-   return ((tmp.tail - tmp.head)  TICKET_MASK)  1;
+   return ((tmp.tail - tmp.head)  TICKET_MASK)  TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended arch_spin_is_contended
 
diff --git a/arch/x86/include/asm/spinlock_types.h 
b/arch/x86/include/asm/spinlock_types.h
index dbe223d..aa9a205 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -3,7 +3,13 @@
 
 #include linux/types.h
 
-#if (CONFIG_NR_CPUS  256)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define __TICKET_LOCK_INC  2
+#else
+#define __TICKET_LOCK_INC  1
+#endif
+
+#if (CONFIG_NR_CPUS  (256 / __TICKET_LOCK_INC))
 typedef u8  __ticket_t;
 typedef u16 __ticketpair_t;
 #else
@@ -11,6 +17,8 @@ typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
 #endif
 
+#define TICKET_LOCK_INC((__ticket_t)__TICKET_LOCK_INC)
+
 #define TICKET_SHIFT   (sizeof(__ticket_t) * 8)
 #define TICKET_MASK((__ticket_t)((1  TICKET_SHIFT) - 1))
 
-- 
1.7.6.4



[PATCH RFC V4 05/11] xen/pvticketlock: Xen implementation for PV ticket locks

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which wants this lock with the next ticket, if any.  If found,
it kicks it by making its event channel pending, which wakes it up.

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |  287 +++
 1 files changed, 43 insertions(+), 244 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 23af06a..f6133c5 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -19,32 +19,21 @@
 #ifdef CONFIG_XEN_DEBUG_FS
 static struct xen_spinlock_stats
 {
-   u64 taken;
u32 taken_slow;
-   u32 taken_slow_nested;
u32 taken_slow_pickup;
u32 taken_slow_spurious;
-   u32 taken_slow_irqenable;
 
-   u64 released;
u32 released_slow;
u32 released_slow_kicked;
 
 #define HISTO_BUCKETS  30
-   u32 histo_spin_total[HISTO_BUCKETS+1];
-   u32 histo_spin_spinning[HISTO_BUCKETS+1];
u32 histo_spin_blocked[HISTO_BUCKETS+1];
 
-   u64 time_total;
-   u64 time_spinning;
u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1  10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
if (unlikely(zero_stats)) {
@@ -73,22 +62,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-   u32 delta = xen_clocksource_read() - start;
-
-   __spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-   spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-   u32 delta = xen_clocksource_read() - start;
-
-   __spin_time_accum(delta, spinlock_stats.histo_spin_total);
-   spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
u32 delta = xen_clocksource_read() - start;
@@ -105,214 +78,84 @@ static inline u64 spin_time_start(void)
return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
-struct xen_spinlock {
-	unsigned char lock;		/* 0 -> free; 1 -> locked */
-   unsigned short spinners;/* count of waiting cpus */
+struct xen_lock_waiting {
+   struct arch_spinlock *lock;
+   __ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	return xl->lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-   /* Not strictly true; this is only the count of contended
-  lock-takers entering the slow path. */
-   return xl-spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-   u8 old = 1;
-
-	asm("xchgb %b0,%1"
-	    : "+q" (old), "+m" (xl->lock) : : "memory");
-
-   return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-   struct xen_spinlock *prev;
-
-   prev = __this_cpu_read(lock_spinners);
-   __this_cpu_write(lock_spinners, xl);
-
-   wmb();  /* set lock of interest before count */
-
-	asm(LOCK_PREFIX " incw %0"
-	    : "+m" (xl->spinners) : : "memory");
-
-   return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct 
xen_spinlock *prev)
-{
-	asm(LOCK_PREFIX " decw %0"
-	    : "+m" (xl->spinners) : : "memory");
-   wmb

[PATCH RFC V4 02/11] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

The code size expands somewhat, and it's probably better to just call
a function rather than inline it.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/Kconfig |3 +++
 kernel/Kconfig.locks |2 +-
 2 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6a47bb2..1f03f82 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -585,6 +585,9 @@ config PARAVIRT_SPINLOCKS
 
  If you are unsure how to answer this question, answer N.
 
+config ARCH_NOINLINE_SPIN_UNLOCK
+   def_bool PARAVIRT_SPINLOCKS
+
 config PARAVIRT_CLOCK
bool
 
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 5068e2a..584637b 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -125,7 +125,7 @@ config INLINE_SPIN_LOCK_IRQSAVE
 ARCH_INLINE_SPIN_LOCK_IRQSAVE
 
 config INLINE_SPIN_UNLOCK
-	def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK)
+	def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK) && !ARCH_NOINLINE_SPIN_UNLOCK
 
 config INLINE_SPIN_UNLOCK_BH
 	def_bool !DEBUG_SPINLOCK && ARCH_INLINE_SPIN_UNLOCK_BH
-- 
1.7.6.4



[PATCH RFC V4 01/11] x86/spinlock: replace pv spinlocks with pv ticketlocks

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk Prevent Guests from Spinning Around
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_spin_lock which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_spin_unlock, which on releasing a contended lock
   (there are more cpus with tail tickets), it looks to see if the next
   cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.
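
For reference, the CONFIG_PARAVIRT_SPINLOCKS=n stubs are just empty inlines,
roughly as below (sketch of the shape; the actual hunk is in the diff that
follows):

	#ifndef CONFIG_PARAVIRT_SPINLOCKS
	/* Empty __always_inline bodies: the compiler drops the slowpath
	 * hooks entirely, so the native fastpath is unchanged. */
	static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
							   __ticket_t ticket)
	{
	}

	static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock,
							 __ticket_t ticket)
	{
	}
	#endif	/* CONFIG_PARAVIRT_SPINLOCKS */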

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h   |   30 ++-
 arch/x86/include/asm/paravirt_types.h |   10 ++
 arch/x86/include/asm/spinlock.h   |   50 ++--
 arch/x86/include/asm/spinlock_types.h |4 --
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +
 arch/x86/xen/spinlock.c   |7 -
 6 files changed, 56 insertions(+), 60 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index a7d2db9..76cae7a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -750,36 +750,14 @@ static inline void __set_fixmap(unsigned /* enum 
fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP)  defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+   PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-   PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
- unsigned long flags)
-{
-   PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-   return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-   PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+   PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h 
b/arch/x86/include/asm/paravirt_types.h
index 8e8b9a4..005e24d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -327,13 +327,11 @@ struct pv_mmu_ops {
 };
 
 struct arch_spinlock;
+#include <asm/spinlock_types.h>
+
 struct pv_lock_ops {
-   int (*spin_is_locked)(struct arch_spinlock *lock);
-   int (*spin_is_contended)(struct arch_spinlock *lock);
-   void (*spin_lock)(struct arch_spinlock *lock);
-   void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long 
flags);
-   int (*spin_trylock)(struct arch_spinlock *lock);
-   void (*spin_unlock)(struct arch_spinlock *lock);
+   void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+   void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index a82c2bf..5efd2f9 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -37,6 +37,32 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD	(1 << 11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket

[PATCH RFC V4 00/11] Paravirtualized ticketlocks

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

[ Changes since last posting:

  - Stephan Diestelhorst stephan.diestelho...@amd.com pointed out
that my old unlock code was unsound, and could lead to deadlocks
(at least in principle).  The new unlock code is definitely sound,
but likely slower as it introduces a locked xadd; this seems
unavoidable.  However, when PV ticketlocks are not enabled, the
unlock code is as it normally would be (a single unlocked add),
and it uses the jump-label machinery to make the selection at
runtime.
]

NOTE: this series is available in:
  git://github.com/jsgf/linux-xen.git upstream/pvticketlock-slowflag
and is based on the previously posted ticketlock cleanup series in
  git://github.com/jsgf/linux-xen.git upstream/ticketlock-cleanup

This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism.

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct next
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk Prevent Guests from Spinning Around
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keeps the existing ticketlock implementation
(fastpath) as-is, but adds a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into slowpath state.

- When releasing a lock, if it is in slowpath state, the call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag.

The slowpath state is stored in the LSB of the lock tail
ticket.  This has the effect of reducing the max number of CPUs by
half (so, a small ticket can deal with 128 CPUs, and large ticket
32768).
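
As a standalone illustration of the tail encoding (userspace C sketch; the
constants mirror this series' TICKET_LOCK_INC and TICKET_SLOWPATH_FLAG,
assuming 8-bit tickets):

	#include <stdio.h>
	#include <stdint.h>

	#define TICKET_LOCK_INC		2	/* tickets advance in steps of 2... */
	#define TICKET_SLOWPATH_FLAG	1	/* ...so bit 0 of tail is free for the flag */

	int main(void)
	{
		uint8_t head = 0, tail = 0;	/* 8-bit tickets: 256/2 = 128 CPUs max */

		tail += TICKET_LOCK_INC;	/* CPU A takes ticket 0 and the lock */
		tail += TICKET_LOCK_INC;	/* CPU B takes ticket 2 and spins */
		tail |= TICKET_SLOWPATH_FLAG;	/* B gives up spinning and blocks */

		printf("head=%u tail=0x%02x next-ticket=%u slowpath=%u\n",
		       (unsigned)head, (unsigned)tail,
		       (unsigned)(tail & ~TICKET_SLOWPATH_FLAG),
		       (unsigned)(tail & TICKET_SLOWPATH_FLAG));
		return 0;
	}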

This series provides a Xen implementation, but it should be
straightforward to add a KVM implementation as well.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in slowpath
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();

which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f	# Slowpath if lock in contention

	pop    %rbp
	retq

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
the fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq

	### SLOWPATH START
1:	pause
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq
	### SLOWPATH END

The unlock code is complicated by the need to both add to the lock's
head and fetch the slowpath flag from tail.  This version of the
patch uses a locked xadd to do this, along with a correction to
prevent an overflow in head from

[PATCH RFC V4 03/11] x86/ticketlock: collapse a layer of functions

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h |   35 +--
 1 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 5efd2f9..f0d6a59 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -76,7 +76,7 @@ static __always_inline void __ticket_unlock_kick(struct 
arch_spinlock *lock, __t
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
register struct __raw_tickets inc = { .tail = 1 };
 
@@ -96,7 +96,7 @@ static __always_inline void __ticket_spin_lock(struct 
arch_spinlock *lock)
 out:   barrier();  /* make sure nothing creeps before the lock is 
taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
arch_spinlock_t old, new;
 
@@ -110,7 +110,7 @@ static __always_inline int 
__ticket_spin_trylock(arch_spinlock_t *lock)
	return cmpxchg(&lock->head_tail, old.head_tail, new.head_tail) == old.head_tail;
 }
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
	__ticket_t next = lock->tickets.head + 1;
 
@@ -118,46 +118,21 @@ static __always_inline void 
__ticket_spin_unlock(arch_spinlock_t *lock)
__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
return !!(tmp.tail ^ tmp.head);
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-   return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-   return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-   __ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-   return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-   __ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
  unsigned long flags)
 {
-- 
1.7.6.4



[PATCH RFC V4 06/11] xen/pvticketlocks: add xen_nopvspin parameter to disable xen pv ticketlocks

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |   14 ++
 1 files changed, 14 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index f6133c5..1e21c99 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -195,12 +195,26 @@ void xen_uninit_lock_cpu(int cpu)
unbind_from_irqhandler(per_cpu(lock_kicker_irq, cpu), NULL);
 }
 
+static bool xen_pvspin __initdata = true;
+
 void __init xen_init_spinlocks(void)
 {
+   if (!xen_pvspin) {
+		printk(KERN_DEBUG "xen: PV spinlocks disabled\n");
+   return;
+   }
+
pv_lock_ops.lock_spinning = xen_lock_spinning;
pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
+static __init int xen_parse_nopvspin(char *arg)
+{
+   xen_pvspin = false;
+   return 0;
+}
+early_param("xen_nopvspin", xen_parse_nopvspin);
+
 #ifdef CONFIG_XEN_DEBUG_FS
 
 static struct dentry *d_spin_debug;
-- 
1.7.6.4



[PATCH RFC V4 04/11] xen: defer spinlock setup until boot CPU setup

2011-10-04 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

There's no need to do it at very early init, and doing it there
makes it impossible to use the jump_label machinery.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/smp.c |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index e79dbb9..4dec905 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -200,6 +200,7 @@ static void __init xen_smp_prepare_boot_cpu(void)
 
xen_filter_cpu_maps();
xen_setup_vcpu_info_placement();
+   xen_init_spinlocks();
 }
 
 static void __init xen_smp_prepare_cpus(unsigned int max_cpus)
@@ -513,7 +514,6 @@ void __init xen_smp_init(void)
 {
smp_ops = xen_smp_ops;
xen_fill_possible_map();
-   xen_init_spinlocks();
 }
 
 static void __init xen_hvm_smp_prepare_cpus(unsigned int max_cpus)
-- 
1.7.6.4



Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jeremy Fitzhardinge
On 09/28/2011 09:10 AM, Linus Torvalds wrote:
 On Wed, Sep 28, 2011 at 8:55 AM, Jan Beulich jbeul...@suse.com wrote:
 just use lock xaddw there too.
 I'm afraid that's not possible, as that might carry from the low 8 bits
 into the upper 8 ones, which must be avoided.
 Oh damn, you're right. So I guess the right way to do things is with
 cmpxchg, but some nasty mfence setup could do it too.

Could do something like:

	if (ticket->head >= 254)
		prev = xadd(&ticket->head_tail, 0xff02);
	else
		prev = xadd(&ticket->head_tail, 0x0002);

to compensate for the overflow.
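
The arithmetic is easy to check in isolation (standalone sketch; head in the
low byte, tail in the high byte of a 16-bit word):

	#include <stdio.h>
	#include <stdint.h>

	/* Adding 0xff02 bumps head by 2, and the 0xff00 part plus the carry
	 * out of the low byte leaves the tail byte unchanged (mod 256). */
	static uint16_t unlock_add(uint16_t head_tail)
	{
		uint8_t head = head_tail & 0xff;

		return head_tail + (head >= 254 ? 0xff02 : 0x0002);
	}

	int main(void)
	{
		printf("0x%04x\n", unlock_add(0x04fe)); /* -> 0x0400: head wraps, tail intact */
		printf("0x%04x\n", unlock_add(0x0410)); /* -> 0x0412: the common case */
		return 0;
	}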

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jeremy Fitzhardinge
On 09/28/2011 06:58 AM, Stephan Diestelhorst wrote:
 I have tested this and have not seen it fail on publicly released AMD
 systems. But as I have tried to point out, this does not mean it is
 safe to do in software, because future microarchtectures may have more
 capable forwarding engines.

Sure.

 Have you tested this, or is this just from code analysis (which I
 agree with after reviewing the ordering rules in the Intel manual).
 We have found a similar issue in Novell's PV ticket lock implementation
 during internal product testing.

Jan may have picked it up from an earlier set of my patches.

 Since you want to get that addb out to global memory before the second
 read, either use a LOCK prefix for it, add an MFENCE between addb and
 movzwl, or use a LOCKed instruction that will have a fencing effect
 (e.g., to top-of-stack)between addb and movzwl.
 Hm.  I don't really want to do any of those because it will probably
 have a significant effect on the unlock performance; I was really trying
 to avoid adding any more locked instructions.  A previous version of the
 code had an mfence in here, but I hit on the idea of using aliasing to
 get the ordering I want - but overlooked the possible effect of store
 forwarding.
 Well, I'd be curious about the actual performance impact. If the store
 needs to commit to memory due to aliasing anyways, this would slow down
 execution, too. After all it is better to write working than fast code,
 no? ;-)

Rule of thumb is that AMD tends to do things like lock and fence more
efficiently than Intel - at least historically.  I don't know if that's
still true for current Intel microarchitectures.

 I guess it comes down to throwing myself on the efficiency of some kind
 of fence instruction.  I guess an lfence would be sufficient; is that
 any more efficient than a full mfence?
 An lfence should not be sufficient, since that essentially is a NOP on
 WB memory. You really want a full fence here, since the store needs to
 be published before reading the lock with the next load.

The Intel manual reads:

Reads cannot pass earlier LFENCE and MFENCE instructions.
Writes cannot pass earlier LFENCE, SFENCE, and MFENCE instructions.
LFENCE instructions cannot pass earlier reads.

Which I interpreted as meaning that an lfence would prevent forwarding. 
But I guess it doesn't say lfence instructions cannot pass earlier
writes, which means that the lfence could logically happen before the
write, thereby allowing forwarding?  Or should I be reading this some
other way?

 Could you give me a pointer to AMD's description of the ordering rules?
 They should be in AMD64 Architecture Programmer's Manual Volume 2:
 System Programming, Section 7.2 Multiprocessor Memory Access Ordering.

 http://developer.amd.com/documentation/guides/pages/default.aspx#manuals

 Let me know if you have some clarifying suggestions. We are currently
 revising these documents...

I find the English descriptions of these kinds of things frustrating to
read because of ambiguities in the precise meaning of words like pass,
ahead, behind in these contexts.  I find the prose useful to get an
overview, but when I have a specific question I wonder if something more
formal would be useful.
I guess it's implied that anything that is not prohibited by the
ordering rules is allowed, but it wouldn't hurt to say it explicitly.
That said, the AMD description seems clearer and more explicit than the
Intel manual (esp since it specifically discusses the problem here).

Thanks,
J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jeremy Fitzhardinge
On 09/28/2011 10:24 AM, H. Peter Anvin wrote:
 On 09/28/2011 10:22 AM, Linus Torvalds wrote:
 On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge jer...@goop.org wrote:
 Could do something like:

 	if (ticket->head >= 254)
 		prev = xadd(&ticket->head_tail, 0xff02);
 	else
 		prev = xadd(&ticket->head_tail, 0x0002);

 to compensate for the overflow.
 Oh wow. You havge an even more twisted mind than I do.

 I guess that will work, exactly because we control head and thus can
 know about the overflow in the low byte. But boy is that ugly ;)

 But at least you wouldn't need to do the loop with cmpxchg. So it's
 twisted and ugly, but migth be practical.

 I suspect it should be coded as -254 in order to use a short immediate
 if that is even possible...

I'm about to test:

static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
{
	if (TICKET_SLOWPATH_FLAG &&
	    unlikely(arch_static_branch(&paravirt_ticketlocks_enabled))) {
		arch_spinlock_t prev;
		__ticketpair_t inc = TICKET_LOCK_INC;

		if (lock->tickets.head >= (1 << TICKET_SHIFT) - TICKET_LOCK_INC)
			inc += -1 << TICKET_SHIFT;

		prev.head_tail = xadd(&lock->head_tail, inc);

		if (prev.tickets.tail & TICKET_SLOWPATH_FLAG)
			__ticket_unlock_slowpath(lock, prev);
	} else
		__ticket_unlock_release(lock);
}

Which, frankly, is not something I particularly want to put my name to.

It makes gcc go into paroxysms of trickiness:

 4a8:	80 3f fe             	cmpb   $0xfe,(%rdi)
 4ab:	19 f6                	sbb    %esi,%esi
 4ad:	66 81 e6 00 01       	and    $0x100,%si
 4b2:	66 81 ee fe 00       	sub    $0xfe,%si
 4b7:	f0 66 0f c1 37       	lock xadd %si,(%rdi)

...which is pretty neat, actually.
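
Decoding it (annotation added here, not part of the original mail): the cmpb
sets the carry flag when head < 0xfe, sbb turns that into 0 or -1, and the
and/sub pair turns that into the 0x0002 or 0xff02 increment.  In C, roughly:

	#include <stdio.h>
	#include <stdint.h>

	/* C rendering of gcc's branchless selection above:
	 *   cmpb $0xfe,(%rdi)    CF  = (head < 0xfe)
	 *   sbb  %esi,%esi       esi = CF ? -1 : 0
	 *   and  $0x100,%si      si  = CF ? 0x100 : 0
	 *   sub  $0xfe,%si       si  = CF ? 0x0002 : 0xff02
	 *   lock xadd %si,(%rdi)
	 */
	static uint16_t pick_increment(uint8_t head)
	{
		uint16_t m = (head < 0xfe) ? 0x100 : 0;

		return m - 0xfe;
	}

	int main(void)
	{
		printf("head=0x10 -> inc=0x%04x\n", (unsigned)pick_increment(0x10)); /* 0x0002 */
		printf("head=0xfe -> inc=0x%04x\n", (unsigned)pick_increment(0xfe)); /* 0xff02 */
		return 0;
	}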

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jeremy Fitzhardinge
On 09/28/2011 11:08 AM, Stephan Diestelhorst wrote:
 On Wednesday 28 September 2011 19:50:08 Jeremy Fitzhardinge wrote:
 On 09/28/2011 10:24 AM, H. Peter Anvin wrote:
 On 09/28/2011 10:22 AM, Linus Torvalds wrote:
 On Wed, Sep 28, 2011 at 9:47 AM, Jeremy Fitzhardinge jer...@goop.org 
 wrote:
 Could do something like:

 	if (ticket->head >= 254)
 		prev = xadd(&ticket->head_tail, 0xff02);
 	else
 		prev = xadd(&ticket->head_tail, 0x0002);

 to compensate for the overflow.
 Oh wow. You havge an even more twisted mind than I do.

 I guess that will work, exactly because we control head and thus can
 know about the overflow in the low byte. But boy is that ugly ;)

 But at least you wouldn't need to do the loop with cmpxchg. So it's
 twisted and ugly, but migth be practical.

 I suspect it should be coded as -254 in order to use a short immediate
 if that is even possible...
 I'm about to test:

 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
 	if (TICKET_SLOWPATH_FLAG &&
 	    unlikely(arch_static_branch(&paravirt_ticketlocks_enabled))) {
 		arch_spinlock_t prev;
 		__ticketpair_t inc = TICKET_LOCK_INC;
 
 		if (lock->tickets.head >= (1 << TICKET_SHIFT) - TICKET_LOCK_INC)
 			inc += -1 << TICKET_SHIFT;
 
 		prev.head_tail = xadd(&lock->head_tail, inc);
 
 		if (prev.tickets.tail & TICKET_SLOWPATH_FLAG)
 			__ticket_unlock_slowpath(lock, prev);
 	} else
 		__ticket_unlock_release(lock);
 }

 Which, frankly, is not something I particularly want to put my name to.
 I must have missed the part when this turned into the propose-the-
 craziest-way-that-this-still-works contest :)

 What is wrong with converting the original addb into a lock addb? The
 crazy wrap around tricks add a conditional and lots of headache. The
 lock addb/w is clean. We are paying an atomic in both cases, so I just
 don't see the benefit of the second solution.

Well, it does end up generating surprisingly nice code.  And to be
honest, being able to do the unlock and atomically fetch the flag as one
operation makes it much easier to reason about.

I'll do a locked add variant as well to see how it turns out.

Do you think locked add is better than unlocked + mfence?

Thanks,
J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-28 Thread Jeremy Fitzhardinge
On 09/28/2011 11:49 AM, Linus Torvalds wrote:
 But I don't care all *that* deeply. I do agree that the xaddw trick is
 pretty tricky. I just happen to think that it's actually *less* tricky
 than read the upper bits separately and depend on subtle ordering
 issues with another writer that happens at the same time on another
 CPU.

 So I can live with either form - as long as it works. I think it might
 be easier to argue that the xaddw is guaranteed to work, because all
 values at all points are unarguably atomic (yeah, we read the lower
 bits nonatomically, but as the owner of the lock we know that nobody
 else can write them).

Exactly.  I just did a locked add variant, and while the code looks a
little simpler, it definitely has more actual complexity to analyze.

J


Re: [Xen-devel] [PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-27 Thread Jeremy Fitzhardinge
On 09/27/2011 02:34 AM, Stephan Diestelhorst wrote:
 On Wednesday 14 September 2011, 17:31:32 Jeremy Fitzhardinge wrote:
 This series replaces the existing paravirtualized spinlock mechanism
 with a paravirtualized ticketlock mechanism.
 [...] 
 The unlock code is very straightforward:
  prev = *lock;
  __ticket_unlock_release(lock);
  if (unlikely(__ticket_in_slowpath(lock)))
  __ticket_unlock_slowpath(lock, prev);

 which generates:
 	push   %rbp
 	mov    %rsp,%rbp
 
 	movzwl (%rdi),%esi
 	addb   $0x2,(%rdi)
 	movzwl (%rdi),%eax
 	testb  $0x1,%ah
 	jne    1f
 
 	pop    %rbp
 	retq
 
 	### SLOWPATH START
 1:	movzwl (%rdi),%edx
 	movzbl %dh,%ecx
 	mov    %edx,%eax
 	and    $-2,%ecx		# clear TICKET_SLOWPATH_FLAG
 	mov    %cl,%dh
 	cmp    %dl,%cl		# test to see if lock is uncontended
 	je     3f
 
 2:	movzbl %dl,%esi
 	callq  *__ticket_unlock_kick	# kick anyone waiting
 	pop    %rbp
 	retq
 
 3:	lock cmpxchg %dx,(%rdi)	# use cmpxchg to safely write back flag
 	jmp    2b
  ### SLOWPATH END
 [...]
 Thoughts? Comments? Suggestions?
 You have a nasty data race in your code that can cause a losing
 acquirer to sleep forever, because its setting the TICKET_SLOWPATH flag
 can race with the lock holder releasing the lock.

 I used the code for the slow path from the GIT repo.

 Let me try to point out an interleaving:

 Lock is held by one thread, contains 0x0200.

 _Lock holder_   _Acquirer_
 mov    $0x200,%eax
 lock xadd %ax,(%rdi)
 // ax:= 0x0200, lock:= 0x0400
 ...
 // this guy spins for a while, reading
 // the lock
 ...
 //trying to free the lock
 movzwl (%rdi),%esi (esi:=0x0400)
 addb   $0x2,(%rdi) (LOCAL copy of lock is now: 0x0402)
 movzwl (%rdi),%eax (local forwarding from previous store: eax := 0x0402)
 testb  $0x1,%ah(no wakeup of anybody)
 jne1f

 callq  *__ticket_lock_spinning
   ...
   // __ticket_enter_slowpath(lock)
   lock or (%rdi), $0x100
   // (global view of lock := 0x0500)
   ...
   ACCESS_ONCE(lock->tickets.head) == want
   // (reads 0x00)
   ...
   xen_poll_irq(irq); // goes to sleep
 ...
 [addb   $0x2,(%rdi)]
 // (becomes globally visible only now! global view of lock := 0x0502)
 ...

 Your code is reusing the (just about) safe version of unlocking a
 spinlock without understanding the effect that this has on later
 memory ordering. It may work on CPUs that cannot do narrow -> wide
 store to load forwarding and have to make the addb store visible
 globally. This is an implementation artifact of specific uarches, and
 you mustn't rely on it, since our specified memory model allows looser
 behaviour.

Ah, thanks for this observation.  I've seen this bug before when I
didn't pay attention to the unlock W vs flag R ordering at all, and I
was hoping the aliasing would be sufficient - and certainly this seems
to have been OK on my Intel systems.  But you're saying that it will
fail on current AMD systems?  Have you tested this, or is this just from
code analysis (which I agree with after reviewing the ordering rules in
the Intel manual).

 Since you want to get that addb out to global memory before the second
 read, either use a LOCK prefix for it, add an MFENCE between addb and
 movzwl, or use a LOCKed instruction that will have a fencing effect
 (e.g., to top-of-stack)between addb and movzwl.

Hm.  I don't really want to do any of those because it will probably
have a significant effect on the unlock performance; I was really trying
to avoid adding any more locked instructions.  A previous version of the
code had an mfence in here, but I hit on the idea of using aliasing to
get the ordering I want - but overlooked the possible effect of store
forwarding.

I guess it comes down to throwing myself on the efficiency of some kind
of fence instruction.  I guess an lfence would be sufficient; is that
any more efficient than a full mfence?  At least I can make it so that
it's only present when pv ticket locks are actually in use, so it won't
affect the native case.
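
For concreteness, the fence-based variant being weighed here would look
something like the sketch below (not the code that was eventually posted;
the jump-label guard borrows the paravirt_ticketlocks_enabled key that
appears later in this thread, so the fence stays out of the native path):

	/* Sketch only: full fence between the head update and the flag read */
	static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
	{
		arch_spinlock_t prev = *lock;

		__ticket_unlock_release(lock);		/* the plain addb */

		if (arch_static_branch(&paravirt_ticketlocks_enabled)) {
			smp_mb();	/* publish head before reading the slowpath flag */
			if (__ticket_in_slowpath(lock))
				__ticket_unlock_slowpath(lock, prev);
		}
	}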

Could you give me a pointer to AMD's description of the ordering rules?

Thanks,
J


[PATCH 04/10] x86/ticketlock: collapse a layer of functions

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h |   35 +--
 1 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 860fc4b..98fe202 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -76,7 +76,7 @@ static __always_inline void __ticket_unlock_kick(struct 
arch_spinlock *lock, __t
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
  */
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
register struct __raw_tickets inc = { .tail = 1 };
 
@@ -96,7 +96,7 @@ static __always_inline void __ticket_spin_lock(struct 
arch_spinlock *lock)
 out:   barrier();  /* make sure nothing creeps before the lock is 
taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
arch_spinlock_t old, new;
 
@@ -128,7 +128,7 @@ static __always_inline void 
__ticket_unlock_release(arch_spinlock_t *lock)
 }
 #endif
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
	__ticket_t next = lock->tickets.head + 1;
 
@@ -136,46 +136,21 @@ static __always_inline void 
__ticket_spin_unlock(arch_spinlock_t *lock)
__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
return !!(tmp.tail ^ tmp.head);
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
	struct __raw_tickets tmp = ACCESS_ONCE(lock->tickets);
 
	return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-   return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-   return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-   __ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-   return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-   __ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
  unsigned long flags)
 {
-- 
1.7.6



[PATCH 01/10] x86/ticketlocks: remove obsolete comment

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

The note about partial registers is not really relevent now that we
rely on gcc to generate all the assembler.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h |4 
 1 files changed, 0 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index f5695ee..972c260 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -49,10 +49,6 @@
  * issues and should be optimal for the uncontended case. Note the tail must be
  * in the high part, because a wide xadd increment of the low part would carry
  * up and contaminate the high part.
- *
- * With fewer than 2^8 possible CPUs, we can use x86's partial registers to
- * save some instructions and make the code more elegant. There really isn't
- * much between them in performance though, especially as locks are out of 
line.
  */
 static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
 {
-- 
1.7.6



[PATCH 02/10] x86/spinlocks: replace pv spinlocks with pv ticketlocks

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow path (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk Prevent Guests from Spinning Around
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_spin_lock which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_spin_unlock, which on releasing a contended lock
   (there are more cpus with tail tickets), it looks to see if the next
   cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h   |   30 ++--
 arch/x86/include/asm/paravirt_types.h |   10 ++---
 arch/x86/include/asm/spinlock.h   |   59 ++---
 arch/x86/include/asm/spinlock_types.h |4 --
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +---
 arch/x86/xen/spinlock.c   |7 +++-
 6 files changed, 63 insertions(+), 62 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index a7d2db9..76cae7a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -750,36 +750,14 @@ static inline void __set_fixmap(unsigned /* enum 
fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP)  defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+   PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-   PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
- unsigned long flags)
-{
-   PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-   return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-   PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+   PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h 
b/arch/x86/include/asm/paravirt_types.h
index 8e8b9a4..005e24d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -327,13 +327,11 @@ struct pv_mmu_ops {
 };
 
 struct arch_spinlock;
+#include <asm/spinlock_types.h>
+
 struct pv_lock_ops {
-   int (*spin_is_locked)(struct arch_spinlock *lock);
-   int (*spin_is_contended)(struct arch_spinlock *lock);
-   void (*spin_lock)(struct arch_spinlock *lock);
-   void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long 
flags);
-   int (*spin_trylock)(struct arch_spinlock *lock);
-   void (*spin_unlock)(struct arch_spinlock *lock);
+   void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+   void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 972c260..860fc4b 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -37,6 +37,32 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD	(1 << 11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket

[PATCH 05/10] xen/pvticketlock: Xen implementation for PV ticket locks

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which next wants this lock with the next ticket, if any.  If found,
it kicks it by making its event channel pending, which wakes it up.

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |  287 +++
 1 files changed, 43 insertions(+), 244 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 23af06a..f6133c5 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -19,32 +19,21 @@
 #ifdef CONFIG_XEN_DEBUG_FS
 static struct xen_spinlock_stats
 {
-   u64 taken;
u32 taken_slow;
-   u32 taken_slow_nested;
u32 taken_slow_pickup;
u32 taken_slow_spurious;
-   u32 taken_slow_irqenable;
 
-   u64 released;
u32 released_slow;
u32 released_slow_kicked;
 
 #define HISTO_BUCKETS  30
-   u32 histo_spin_total[HISTO_BUCKETS+1];
-   u32 histo_spin_spinning[HISTO_BUCKETS+1];
u32 histo_spin_blocked[HISTO_BUCKETS+1];
 
-   u64 time_total;
-   u64 time_spinning;
u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1  10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
if (unlikely(zero_stats)) {
@@ -73,22 +62,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-   u32 delta = xen_clocksource_read() - start;
-
-   __spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-   spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-   u32 delta = xen_clocksource_read() - start;
-
-   __spin_time_accum(delta, spinlock_stats.histo_spin_total);
-   spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
u32 delta = xen_clocksource_read() - start;
@@ -105,214 +78,84 @@ static inline u64 spin_time_start(void)
return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
-struct xen_spinlock {
-	unsigned char lock;		/* 0 -> free; 1 -> locked */
-   unsigned short spinners;/* count of waiting cpus */
+struct xen_lock_waiting {
+   struct arch_spinlock *lock;
+   __ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-	return xl->lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-   /* Not strictly true; this is only the count of contended
-  lock-takers entering the slow path. */
-   return xl-spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-   u8 old = 1;
-
-	asm("xchgb %b0,%1"
-	    : "+q" (old), "+m" (xl->lock) : : "memory");
-
-   return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-   struct xen_spinlock *prev;
-
-   prev = __this_cpu_read(lock_spinners);
-   __this_cpu_write(lock_spinners, xl);
-
-   wmb();  /* set lock of interest before count */
-
-	asm(LOCK_PREFIX " incw %0"
-	    : "+m" (xl->spinners) : : "memory");
-
-   return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct 
xen_spinlock *prev)
-{
-	asm(LOCK_PREFIX " decw %0"
-	    : "+m" (xl->spinners) : : "memory");
-   wmb

[PATCH 08/10] x86/ticketlock: add slowpath logic

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks.  The flags are set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie,
no contention).

In the specific implementation of lock_spinning(), make sure to set
the slowpath flags on the lock just before blocking.  We must do
this before the last-chance pickup test to prevent a deadlock
with the unlocker:

	Unlocker			Locker
					test for lock pickup
						-> fail
	unlock
	test slowpath
		-> false
					set slowpath flags
					block

Whereas this works in any ordering:

	Unlocker			Locker
					set slowpath flags
					test for lock pickup
						-> fail
					block
	unlock
	test slowpath
		-> true, kick

If the unlocker finds that the lock has the slowpath flag set but it is
actually uncontended (ie, head == tail, so nobody is waiting), then it
clears the slowpath flag.
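
In code, the clear-or-kick decision on unlock amounts to roughly the
following (paraphrase of the patch below, reusing TICKET_LOCK_INC,
TICKET_SHIFT and TICKET_SLOWPATH_FLAG from earlier in the series):

	/* 'old' is the lock value sampled before the unlock added
	 * TICKET_LOCK_INC to head; redo that add on the private copy. */
	static void __ticket_unlock_slowpath(arch_spinlock_t *lock, arch_spinlock_t old)
	{
		arch_spinlock_t new;

		old.tickets.head += TICKET_LOCK_INC;

		/* the same value with the slowpath flag (LSB of tail) cleared */
		new.head_tail = old.head_tail & ~(TICKET_SLOWPATH_FLAG << TICKET_SHIFT);

		if (new.tickets.head != new.tickets.tail ||
		    cmpxchg(&lock->head_tail, old.head_tail,
			    new.head_tail) != old.head_tail)
			/* still contended, or it changed under us: kick the new owner */
			__ticket_unlock_kick(lock, old.tickets.head);
	}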

Note on memory access ordering:
When unlocking a ticketlock with PV callbacks enabled, unlock
first adds to the lock head, then checks to see if the slowpath
flag is set in the lock tail.

However, because reads are not ordered with respect to writes in
different memory locations, the CPU could perform the read before
updating head to release the lock.

This would deadlock with another CPU in the lock slowpath, as it will
set the slowpath flag before checking to see if the lock has been
released in the interim.

A heavyweight fix would be to stick a full mfence between the two.
However, a lighter-weight fix is to simply make sure the flag test
loads both head and tail of the lock in a single operation, thereby
making sure that it overlaps with the memory written by the unlock,
forcing the CPU to maintain ordering.

Note: this code relies on gcc making sure that unlikely() code is out of
line of the fastpath, which only happens when OPTIMIZE_SIZE=n.  If it
doesn't the generated code isn't too bad, but its definitely suboptimal.

(Thanks to Srivatsa Vaddagiri for providing a bugfix to the original
version of this change, which has been folded in.)

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
Signed-off-by: Srivatsa Vaddagiri va...@linux.vnet.ibm.com
---
 arch/x86/include/asm/paravirt.h   |2 +-
 arch/x86/include/asm/spinlock.h   |   92 ++--
 arch/x86/include/asm/spinlock_types.h |2 +
 arch/x86/kernel/paravirt-spinlocks.c  |1 +
 arch/x86/xen/spinlock.c   |4 ++
 5 files changed, 82 insertions(+), 19 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 50281c7..13b3d8b 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -755,7 +755,7 @@ static __always_inline void __ticket_lock_spinning(struct 
arch_spinlock *lock, _
PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 40c90aa..c1f6981 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -40,29 +40,56 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD (1  11)
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
 
-static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
+/*
+ * Return true if someone is in the slowpath on this lock.  This
+ * should only be used by the current lock-holder.
+ */
+static inline bool __ticket_in_slowpath(arch_spinlock_t *lock)
 {
+   /*
+* This deliberately reads both head and tail as a single
+* memory operation, and then tests the flag in tail.  This is
+* to guarantee that this read is ordered after the add to
+* head which does the unlock.  If we were to only read tail
+* to test the flag, then the CPU would be free to reorder the
+* read to before the write to head (since it is a different
+* memory location), which could cause a deadlock with someone
+* setting the flag before re-checking the lock availability.
+*/
+	return ACCESS_ONCE(lock->head_tail) & (TICKET_SLOWPATH_FLAG << TICKET_SHIFT);
 }
 
-static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
+static inline void

[PATCH 00/10] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

[ Changes since last posting:
  - fix bugs exposed by the cold light of testing
- make the slow flag read in unlock cover the whole lock
  to force ordering WRT the unlock write
- when kicking on unlock, only look for the CPU *we* released
  (ie, head value the unlock resulted in), rather than re-reading
  the new head and kicking on that basis
  - enable PV ticketlocks in Xen HVM guests
]

NOTE: this series is available in:
  git://github.com/jsgf/linux-xen.git upstream/pvticketlock-slowflag
and is based on the previously posted ticketlock cleanup series in
  git://github.com/jsgf/linux-xen.git upstream/ticketlock-cleanup

This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism.

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct next
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk Prevent Guests from Spinning Around
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keeps the existing ticketlock implementation
(fastpath) as-is, but adds a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into slowpath state.

- When releasing a lock, if it is in slowpath state, the call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag.

The slowpath state is stored in the LSB of the lock tail
ticket.  This has the effect of reducing the max number of CPUs by
half (so, a small ticket can deal with 128 CPUs, and large ticket
32768).

This series provides a Xen implementation, but it should be
straightforward to add a KVM implementation as well.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in slowpath
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();

which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
the fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq

	### SLOWPATH START
1:	pause
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq
	### SLOWPATH END

The unlock code is very straightforward:
prev = *lock;
__ticket_unlock_release(lock);
if (unlikely(__ticket_in_slowpath(lock)))
        __ticket_unlock_slowpath(lock, prev);

which generates:
push   %rbp
mov    %rsp,%rbp

movzwl (%rdi),%esi
addb   $0x2,(%rdi)
movzwl (%rdi),%eax
testb  $0x1,%ah
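
The out-of-line __ticket_unlock_slowpath() helper does the rest; roughly
(a sketch of the approach rather than the literal patch text) it redoes the
unlock on the snapshot, clears the flag if the lock went uncontended, and
otherwise kicks the new head of the queue:

static void __ticket_unlock_slowpath(struct arch_spinlock *lock,
                                     arch_spinlock_t old)
{
        arch_spinlock_t new;

        /* perform the unlock on the "before" copy */
        old.tickets.head += TICKET_LOCK_INC;

        /* clear the slowpath flag */
        new.head_tail = old.head_tail & ~(TICKET_SLOWPATH_FLAG << TICKET_SHIFT);

        /*
         * If the lock is now uncontended, try to clear the flag; if that
         * fails, or the lock is still contended, kick the next waiter.
         */
        if (new.tickets.head != new.tickets.tail ||
            cmpxchg(&lock->head_tail, old.head_tail,
                    new.head_tail) != old.head_tail)
                __ticket_unlock_kick(lock, old.tickets.head);
}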

[PATCH 10/10] xen: enable PV ticketlocks on HVM Xen

2011-09-14 Thread Jeremy Fitzhardinge
From: Stefano Stabellini stefano.stabell...@eu.citrix.com

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/smp.c |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/smp.c b/arch/x86/xen/smp.c
index e79dbb9..bf958ce 100644
--- a/arch/x86/xen/smp.c
+++ b/arch/x86/xen/smp.c
@@ -552,4 +552,5 @@ void __init xen_hvm_smp_init(void)
smp_ops.cpu_die = xen_hvm_cpu_die;
smp_ops.send_call_func_ipi = xen_smp_send_call_function_ipi;
smp_ops.send_call_func_single_ipi = 
xen_smp_send_call_function_single_ipi;
+   xen_init_spinlocks();
 }
-- 
1.7.6



[PATCH 09/10] xen/pvticketlock: allow interrupts to be enabled while blocking

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.

If we can enable interrupts while waiting for the lock to become
available, and we take an interrupt before entering the poll,
and the handler takes a spinlock which ends up going into
the slow state (invalidating the per-cpu lock and want values),
then when the interrupt handler returns the event channel will
remain pending so the poll will return immediately, causing it to
return out to the main spinlock loop.
Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |   48 --
 1 files changed, 41 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index c939723..7366b39 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -106,11 +106,28 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
__ticket_t want)
 
start = spin_time_start();
 
-   /* Make sure interrupts are disabled to ensure that these
-  per-cpu values are not overwritten. */
+   /*
+* Make sure an interrupt handler can't upset things in a
+* partially setup state.
+*/
local_irq_save(flags);
 
+   /*
+* We don't really care if we're overwriting some other
+* (lock,want) pair, as that would mean that we're currently
+* in an interrupt context, and the outer context had
+* interrupts enabled.  That has already kicked the VCPU out
+* of xen_poll_irq(), so it will just return spuriously and
+* retry with newly setup (lock,want).
+*
+* The ordering protocol on this is that the lock pointer
+* may only be set non-NULL if the want ticket is correct.
+* If we're updating want, we must first clear lock.
+*/
+   w-lock = NULL;
+   smp_wmb();
w-want = want;
+   smp_wmb();
w-lock = lock;
 
/* This uses set_bit, which atomic and therefore a barrier */
@@ -124,21 +141,36 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
__ticket_t want)
/* Only check lock once pending cleared */
barrier();
 
-   /* Mark entry to slowpath before doing the pickup test to make
-  sure we don't deadlock with an unlocker. */
+   /*
+* Mark entry to slowpath before doing the pickup test to make
+* sure we don't deadlock with an unlocker.
+*/
__ticket_enter_slowpath(lock);
 
-   /* check again make sure it didn't become free while
-  we weren't looking  */
+   /*
+* check again make sure it didn't become free while
+* we weren't looking 
+*/
if (ACCESS_ONCE(lock-tickets.head) == want) {
ADD_STATS(taken_slow_pickup, 1);
goto out;
}
 
+   /* Allow interrupts while blocked */
+   local_irq_restore(flags);
+
+   /*
+* If an interrupt happens here, it will leave the wakeup irq
+* pending, which will cause xen_poll_irq() to return
+* immediately.
+*/
+
/* Block until irq becomes pending (or perhaps a spurious wakeup) */
xen_poll_irq(irq);
ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
+   local_irq_save(flags);
+
kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
@@ -160,7 +192,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, 
__ticket_t next)
for_each_cpu(cpu, waiting_cpus) {
const struct xen_lock_waiting *w = per_cpu(lock_waiting, cpu);
 
-   if (w->lock == lock && w->want == next) {
+   /* Make sure we read lock before want */
+   if (ACCESS_ONCE(w->lock) == lock &&
+   ACCESS_ONCE(w->want) == next) {
ADD_STATS(released_slow_kicked, 1);
xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
break;
-- 
1.7.6



[PATCH 03/10] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

The code size expands somewhat, and it's probably better to just call
a function rather than inline it.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/Kconfig |3 +++
 kernel/Kconfig.locks |2 +-
 2 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6a47bb2..1f03f82 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -585,6 +585,9 @@ config PARAVIRT_SPINLOCKS
 
  If you are unsure how to answer this question, answer N.
 
+config ARCH_NOINLINE_SPIN_UNLOCK
+   def_bool PARAVIRT_SPINLOCKS
+
 config PARAVIRT_CLOCK
bool
 
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 5068e2a..584637b 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -125,7 +125,7 @@ config INLINE_SPIN_LOCK_IRQSAVE
 ARCH_INLINE_SPIN_LOCK_IRQSAVE
 
 config INLINE_SPIN_UNLOCK
-   def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK)
+   def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK) && 
!ARCH_NOINLINE_SPIN_UNLOCK
 
 config INLINE_SPIN_UNLOCK_BH
    def_bool !DEBUG_SPINLOCK && ARCH_INLINE_SPIN_UNLOCK_BH
-- 
1.7.6



[PATCH 06/10] x86/pvticketlock: use callee-save for lock_spinning

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to using the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h   |2 +-
 arch/x86/include/asm/paravirt_types.h |2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |2 +-
 arch/x86/xen/spinlock.c   |3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 76cae7a..50281c7 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -752,7 +752,7 @@ static inline void __set_fixmap(unsigned /* enum 
fixed_addresses */ idx,
 
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+   PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
diff --git a/arch/x86/include/asm/paravirt_types.h 
b/arch/x86/include/asm/paravirt_types.h
index 005e24d..5e0c138 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -330,7 +330,7 @@ struct arch_spinlock;
 #include asm/spinlock_types.h
 
 struct pv_lock_ops {
-   void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+   struct paravirt_callee_save lock_spinning;
void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c 
b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-   .lock_spinning = paravirt_nop,
+   .lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index f6133c5..7a04950 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -145,6 +145,7 @@ out:
 
spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
@@ -197,7 +198,7 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-   pv_lock_ops.lock_spinning = xen_lock_spinning;
+   pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
-- 
1.7.6



[PATCH 07/10] x86/ticketlocks: when paravirtualizing ticket locks, increment by 2

2011-09-14 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Increment ticket head/tails by 2 rather than 1 to leave the LSB free
to store a "is in slowpath state" bit.  This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU systems are probably
specially built for the hardware rather than a generic distro
kernel.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h   |   16 
 arch/x86/include/asm/spinlock_types.h |   10 +-
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 98fe202..40c90aa 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -78,7 +78,7 @@ static __always_inline void __ticket_unlock_kick(struct 
arch_spinlock *lock, __t
  */
 static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
-   register struct __raw_tickets inc = { .tail = 1 };
+   register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
inc = xadd(lock-tickets, inc);
 
@@ -104,7 +104,7 @@ static __always_inline int 
arch_spin_trylock(arch_spinlock_t *lock)
if (old.tickets.head != old.tickets.tail)
return 0;
 
-   new.head_tail = old.head_tail + (1 << TICKET_SHIFT);
+   new.head_tail = old.head_tail + (TICKET_LOCK_INC << TICKET_SHIFT);
 
/* cmpxchg is a full barrier, so nothing can move before it */
return cmpxchg(lock-head_tail, old.head_tail, new.head_tail) == 
old.head_tail;
@@ -113,24 +113,24 @@ static __always_inline int 
arch_spin_trylock(arch_spinlock_t *lock)
 #if (NR_CPUS < 256)
 static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
-   asm volatile(UNLOCK_LOCK_PREFIX "incb %0"
+   asm volatile(UNLOCK_LOCK_PREFIX "addb %1, %0"
 : "+m" (lock->head_tail)
-:
+: "i" (TICKET_LOCK_INC)
 : "memory", "cc");
 }
 #else
 static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
-   asm volatile(UNLOCK_LOCK_PREFIX "incw %0"
+   asm volatile(UNLOCK_LOCK_PREFIX "addw %1, %0"
 : "+m" (lock->head_tail)
-:
+: "i" (TICKET_LOCK_INC)
 : "memory", "cc");
 }
 #endif
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-   __ticket_t next = lock-tickets.head + 1;
+   __ticket_t next = lock-tickets.head + TICKET_LOCK_INC;
 
__ticket_unlock_release(lock);
__ticket_unlock_kick(lock, next);
@@ -147,7 +147,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t 
*lock)
 {
struct __raw_tickets tmp = ACCESS_ONCE(lock-tickets);
 
-   return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
+   return ((tmp.tail - tmp.head) & TICKET_MASK) > TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended arch_spin_is_contended
 
diff --git a/arch/x86/include/asm/spinlock_types.h 
b/arch/x86/include/asm/spinlock_types.h
index dbe223d..aa9a205 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -3,7 +3,13 @@
 
 #include linux/types.h
 
-#if (CONFIG_NR_CPUS < 256)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define __TICKET_LOCK_INC  2
+#else
+#define __TICKET_LOCK_INC  1
+#endif
+
+#if (CONFIG_NR_CPUS < (256 / __TICKET_LOCK_INC))
 typedef u8  __ticket_t;
 typedef u16 __ticketpair_t;
 #else
@@ -11,6 +17,8 @@ typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
 #endif
 
+#define TICKET_LOCK_INC    ((__ticket_t)__TICKET_LOCK_INC)
+
 #define TICKET_SHIFT   (sizeof(__ticket_t) * 8)
 #define TICKET_MASK    ((__ticket_t)((1 << TICKET_SHIFT) - 1))
 
-- 
1.7.6



Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking

2011-09-08 Thread Jeremy Fitzhardinge
On 09/08/2011 12:51 AM, Avi Kivity wrote:
 On 09/07/2011 10:09 PM, Jeremy Fitzhardinge wrote:
 On 09/07/2011 10:41 AM, Avi Kivity wrote:
   Hm, I'm interested to know what you're thinking in more detail. 
 Can you
   leave an NMI pending before you block in the same way you can with
   sti;halt with normal interrupts?
 
 
   Nope.  But you can do
 
  if (regs-rip in critical section)
  regs-rip = after_halt;
 
   and effectively emulate it.  The critical section is something like
 
   critical_section_start:
   if (woken_up)
   goto critical_section_end;
   hlt
   critical_section_end:

 Hm.  It's a pity you have to deliver an actual interrupt to implement
 the kick though.

 I don't think it's that expensive, especially compared to the
 double-context-switch and vmexit of the spinner going to sleep.  On
 AMD we do have to take an extra vmexit (on IRET) though.

Fair enough - so if the vcpu blocks itself, it ends up being rescheduled
in the NMI handler, which then returns to the lock slowpath.  And if it's
a normal hlt, then you can also take interrupts if they're enabled while
spinning.

And if you get nested NMIs (since you can get multiple spurious kicks,
or from other NMI sources), then one NMI will get latched and any others
will get dropped?

 Well we could have a specialized sleep/wakeup hypercall pair like Xen,
 but I'd like to avoid it if at all possible.

Yeah, that's something that just falls out of the existing event channel
machinery, so it isn't something that I specifically added.  But it does
mean that you simply end up with a hypercall returning on kick, with no
real complexities.

J


Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking

2011-09-07 Thread Jeremy Fitzhardinge
On 09/07/2011 10:09 AM, Avi Kivity wrote:
 On 09/07/2011 07:52 PM, Don Zickus wrote:
 
   May I ask how?  Detecting a back-to-back NMI?

 Pretty boring actually.  Currently we execute an NMI handler until
 one of
 them returns handled.  Then we stop.  This may cause us to miss an
 NMI in
 the case of multiple NMIs at once.  Now we are changing it to execute
 _all_ the handlers to make sure we didn't miss one.

 That's going to be pretty bad for kvm - those handlers become a lot
 more expensive since they involve reading MSRs.

How often are you going to get NMIs in a kvm guest?

   Even worse if we start using NMIs as a wakeup for pv spinlocks as
 provided by this patchset.

Hm, I'm interested to know what you're thinking in more detail.  Can you
leave an NMI pending before you block in the same way you can with
sti;halt with normal interrupts?

I was thinking you might want to do something with monitor/mwait to
implement the blocking/kick ops. (Handwave)

J


Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking

2011-09-07 Thread Jeremy Fitzhardinge
On 09/07/2011 10:41 AM, Avi Kivity wrote:
 Hm, I'm interested to know what you're thinking in more detail.  Can you
 leave an NMI pending before you block in the same way you can with
 sti;halt with normal interrupts?


 Nope.  But you can do

if (regs-rip in critical section)
regs-rip = after_halt;

 and effectively emulate it.  The critical section is something like

 critical_section_start:
 if (woken_up)
 goto critical_section_end;
 hlt
 critical_section_end:

Hm.  It's a pity you have to deliver an actual interrupt to implement
the kick though.


 I was thinking you might want to do something with monitor/mwait to
 implement the blocking/kick ops. (Handwave)


 monitor/mwait are incredibly expensive to virtualize since they
 require write-protecting a page, IPIs flying everywhere and flushing
 tlbs, not to mention my lovely hugepages being broken up mercilessly.

Or what about a futex-like hypercall?
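
Something along these lines, say (an entirely hypothetical interface, just
to make the idea concrete):

/*
 * Hypothetical futex-like pair - not a real ABI.  The waiter only blocks
 * if *addr still holds the expected value, which closes the
 * wakeup-before-block race without needing to deliver an interrupt.
 */
long hv_wait(u32 *addr, u32 expected);   /* block the current vcpu */
long hv_kick(int vcpu);                  /* wake a vcpu blocked in hv_wait() */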

J



Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking

2011-09-06 Thread Jeremy Fitzhardinge
On 09/06/2011 08:14 AM, Don Zickus wrote:
 On Fri, Sep 02, 2011 at 02:50:53PM -0700, Jeremy Fitzhardinge wrote:
 On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
 On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
 I know that its generally considered bad form, but there's at least one
 spinlock that's only taken from NMI context and thus hasn't got any
 deadlock potential.
 Which one? 
 arch/x86/kernel/traps.c:nmi_reason_lock

 It serializes NMI access to the NMI reason port across CPUs.
 Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
 ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
 so I guess there's at least some chance there could be a virtual
 emulated NMI.  Maybe?  Does qemu do that kind of thing?

 But, erm, does that even make sense?  I'm assuming the NMI reason port
 tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
 there's only a single reason port, then doesn't that mean that either 1)
 they all got the NMI for the same reason, or 2) having a single port is
 inherently racy?  How does the locking actually work there?
 The reason port is for an external/system NMI.  All the IPI-NMI don't need
 to access this register to process their handlers, ie perf.  I think in
 general the IOAPIC is configured to deliver the external NMI to one cpu,
 usually the bsp cpu.  However, there has been a slow movement to free the
 bsp cpu from exceptions like this to allow one to eventually hot-swap the
 bsp cpu.  The spin locks in that code were an attempt to be more abstract
 about who really gets the external NMI.  Of course SGI's box is setup to
 deliver an external NMI to all cpus to dump the stack when the system
 isn't behaving.

 This is a very low usage NMI (in fact almost all cases lead to loud
 console messages).

 Hope that clears up some of the confusion.

Hm, not really.

What does it mean if two CPUs go down that path?  Should one do some NMI
processing while the other waits around for it to finish, and then do
some NMI processing on its own?

It sounds like that could only happen if you reroute NMI from one CPU to
another while the first CPU is actually in the middle of processing an
NMI - in which case, shouldn't the code doing the re-routing be taking
the spinlock?

Or perhaps a spinlock isn't the right primitive to use at all?  Couldn't
the second CPU just set a flag/counter (using something like an atomic
add/cmpxchg/etc) to make the first CPU process the second NMI?
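
Something like this, illustrative only (not a proposal for the actual
code, and the reason-port helper is a hypothetical name):

static atomic_t nmi_reason_pending = ATOMIC_INIT(0);

void handle_external_nmi(void)
{
        /* a second CPU just records the NMI and returns */
        if (atomic_add_return(1, &nmi_reason_pending) > 1)
                return;

        /* the first CPU drains however many were recorded */
        do {
                process_nmi_reason_port();      /* hypothetical helper */
        } while (atomic_sub_return(1, &nmi_reason_pending) > 0);
}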

But on the other hand, I don't really care if you can say that this path
will never be called in a virtual machine.

J


Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking

2011-09-06 Thread Jeremy Fitzhardinge
On 09/06/2011 11:27 AM, Don Zickus wrote:
 But on the other hand, I don't really care if you can say that this path
 will never be called in a virtual machine.
 Does virtual machines support hot remove of cpus?  Probably not
 considering bare-metal barely supports it.

The only reason you'd want to is to add/remove VCPUs as a mechanism of
resource control, so if you were removing a VCPU it wouldn't matter much
which one you choose.  In other words, there's no reason you'd ever need
to remove the BSP in favour of one of the other CPUs.

Anyway, I'm not going to lose any sleep over this issue.

J


Re: [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks

2011-09-06 Thread Jeremy Fitzhardinge
On 09/02/2011 04:22 AM, Stefano Stabellini wrote:
 do you have a git tree somewhere with this series? 

git://github.com/jsgf/linux-xen.git upstream/pvticketlock-slowflag

J


Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking

2011-09-02 Thread Jeremy Fitzhardinge
On 09/02/2011 07:47 AM, Peter Zijlstra wrote:
 On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
 From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

 We need to make sure interrupts are disabled while we're relying on the
 contents of the per-cpu lock_waiting values, otherwise an interrupt
 handler could come in, try to take some other lock, block, and overwrite
 our values.
 Would this make it illegal to take a spinlock from NMI context?

That would be problematic.  But a Xen domain wouldn't be getting NMIs -
at least not standard x86 ones - so that's moot.

 I know that its generally considered bad form, but there's at least one
 spinlock that's only taken from NMI context and thus hasn't got any
 deadlock potential.

Which one?

J


Re: [PATCH 10/13] xen/pvticket: allow interrupts to be enabled while blocking

2011-09-02 Thread Jeremy Fitzhardinge
On 09/02/2011 07:48 AM, Peter Zijlstra wrote:
 On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
 +   /* Make sure an interrupt handler can't upset things in a
 +  partially setup state. */
 local_irq_save(flags);
  
 +   /*
 +* We don't really care if we're overwriting some other
 +* (lock,want) pair, as that would mean that we're currently
 +* in an interrupt context, and the outer context had
 +* interrupts enabled.  That has already kicked the VCPU out
 +* of xen_poll_irq(), so it will just return spuriously and
 +* retry with newly setup (lock,want).
 +*
 +* The ordering protocol on this is that the lock pointer
 +* may only be set non-NULL if the want ticket is correct.
 +* If we're updating want, we must first clear lock.
 +*/
 +   w-lock = NULL; 
 I mean, I don't much care about Xen code, but that's two different
 comment styles.

Yeah, that's the two line comment style next to big block comment
style - but you're right they look pretty bad juxtaposed like that.

J



Re: [PATCH 06/13] x86/ticketlock: add slowpath logic

2011-09-02 Thread Jeremy Fitzhardinge
On 09/02/2011 11:46 AM, Eric Northup wrote:
 On Thu, Sep 1, 2011 at 5:54 PM, Jeremy Fitzhardinge jer...@goop.org wrote:
 From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

 Maintain a flag in both LSBs of the ticket lock which indicates whether
 anyone is in the lock slowpath and may need kicking when the current
 holder unlocks.  The flags are set when the first locker enters
 the slowpath, and cleared when unlocking to an empty queue.
 Are there actually two flags maintained?  I only see the one in the
 ticket tail getting set/cleared/tested.

Yeah, there's only one flag, so there's a spare bit in the other half.

j



Re: [PATCH 11/13] x86/ticketlock: only do kick after doing unlock

2011-09-02 Thread Jeremy Fitzhardinge
On 09/02/2011 07:49 AM, Peter Zijlstra wrote:
 On Thu, 2011-09-01 at 17:55 -0700, Jeremy Fitzhardinge wrote:
 From: Srivatsa Vaddagiri va...@linux.vnet.ibm.com

 We must release the lock before checking to see if the lock is in
 slowpath or else there's a potential race where the lock enters the
 slow path after the unlocker has checked the slowpath flag, but before
 it has actually unlocked.

 Signed-off-by: Srivatsa Vaddagiri va...@linux.vnet.ibm.com
 Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com 
 Wouldn't it be much better to fold it back so that this bug never
 happens when you bisect?

Yes indeed.

J



Re: [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks

2011-09-02 Thread Jeremy Fitzhardinge
On 09/02/2011 08:38 AM, Linus Torvalds wrote:
 On Thu, Sep 1, 2011 at 5:54 PM, Jeremy Fitzhardinge jer...@goop.org wrote:
 The inner part of ticket lock code becomes:
inc = xadd(&lock->tickets, inc);
inc.tail &= ~TICKET_SLOWPATH_FLAG;

for (;;) {
        unsigned count = SPIN_THRESHOLD;

        do {
                if (inc.head == inc.tail)
                        goto out;
                cpu_relax();
                inc.head = ACCESS_ONCE(lock->tickets.head);
        } while (--count);
        __ticket_lock_spinning(lock, inc.tail);
}
 Hmm. It strikes me that I don't think you should touch the
 TICKET_SLOWPATH_FLAG in the fastpath at all.

 Can't you just do this:

inc = xadd(&lock->tickets, inc);
if (likely(inc.head == inc.tail))
  goto out;

### SLOWPATH ###
inc.tail &= ~TICKET_SLOWPATH_FLAG;
for (;;) {
   .. as before ..

 which might alleviate the problem with the fastpath being polluted by
 all those silly slowpath things.  Hmm?

 (This assumes that TICKET_SLOWPATH_FLAG is never set in inc.head, so
 if it's set that equality check will fail. I didn't actually check if
 that assumption was correct)

Yes, nice idea.  That ends up making the overall code slightly longer,
but the fastpath becomes identical to the non-pv case:

mov    $512,%ecx
lock xadd %cx,(%rdi)
movzbl %ch,%edx
cmp    %cl,%dl
je     2f

### SLOWPATH START
and    $-2,%edx
mov    $8192,%eax
movzbl %dl,%esi
1:  cmp    %dl,%cl
je     2f
pause
dec    %eax
mov    (%rdi),%cl
jne    1b
callq  __ticket_lock_spinning
mov    $8192,%eax
jmp    1b
### SLOWPATH ENDS

2:


It's especially nice that it also moves the spin counter and arg setup
into the slowpath code.

And that entire piece of slowpath code can be moved out into its own
function, so the fastpath becomes:

mov    $512,%eax
lock xadd %ax,(%rdi)
movzbl %ah,%esi
cmp    %al,%sil
je 1f

movzbl %sil,%esi
callq  __ticket_lock_slow
1:

I don't know whether that fastpath code is small enough to consider
inlining everywhere?

J


Re: [PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks

2011-09-02 Thread Jeremy Fitzhardinge
On 09/02/2011 01:27 PM, Linus Torvalds wrote:
 On Fri, Sep 2, 2011 at 1:07 PM, Jeremy Fitzhardinge jer...@goop.org wrote:
 I don't know whether that fastpath code is small enough to consider
 inlining everywhere?
 No.

 There's no point in inlining something that ends up containing a
 conditional function call: gcc will have to effectively save/restore
 registers around that thing anyway, so you lose a lot of the
 advantages of inlining. So I think it's better done as an out-of-line
 function, which I thought we did for spinlocks anyway.

Yes, lock currently out-of-line.

I should also make sure that unlock is also out of line when
paravirtualized.

 Also, do you run with CONFIG_OPTIMIZE_SIZE? Without that, gcc should
 be smart enough to make a likely() case be a fall-through.

Ah, I was wondering why I'd never seen likely/unlikely do anything
useful.  With OPTIMIZE_SIZE=n, there's no point in explicitly moving the
slowpath out to a separate function.
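
For reference, likely()/unlikely() are just __builtin_expect() hints along
the lines of:

#define likely(x)       __builtin_expect(!!(x), 1)
#define unlikely(x)     __builtin_expect(!!(x), 0)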

So the only downside with this variant is that it breaks my design
criteria of making the generated code look identical to the the original
code when CONFIG_PARAVIRT_SPINLOCKS=n.  But I don't know if that's an
actual downside in practice.

J


Re: [PATCH 08/13] xen/pvticketlock: disable interrupts while blocking

2011-09-02 Thread Jeremy Fitzhardinge
On 09/02/2011 01:47 PM, Peter Zijlstra wrote:
 On Fri, 2011-09-02 at 12:29 -0700, Jeremy Fitzhardinge wrote:
 I know that its generally considered bad form, but there's at least one
 spinlock that's only taken from NMI context and thus hasn't got any
 deadlock potential.
 Which one? 
 arch/x86/kernel/traps.c:nmi_reason_lock

 It serializes NMI access to the NMI reason port across CPUs.

Ah, OK.  Well, that will never happen in a PV Xen guest.  But PV
ticketlocks are equally applicable to an HVM Xen domain (and KVM guest),
so I guess there's at least some chance there could be a virtual
emulated NMI.  Maybe?  Does qemu do that kind of thing?

But, erm, does that even make sense?  I'm assuming the NMI reason port
tells the CPU why it got an NMI.  If multiple CPUs can get NMIs and
there's only a single reason port, then doesn't that mean that either 1)
they all got the NMI for the same reason, or 2) having a single port is
inherently racy?  How does the locking actually work there?

J


[PATCH 2/8] x86/ticketlock: don't inline _spin_unlock when using paravirt spinlocks

2011-09-02 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

The code size expands somewhat, and it's probably better to just call
a function rather than inline it.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/Kconfig |3 +++
 kernel/Kconfig.locks |2 +-
 2 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 6a47bb2..1f03f82 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -585,6 +585,9 @@ config PARAVIRT_SPINLOCKS
 
  If you are unsure how to answer this question, answer N.
 
+config ARCH_NOINLINE_SPIN_UNLOCK
+   def_bool PARAVIRT_SPINLOCKS
+
 config PARAVIRT_CLOCK
bool
 
diff --git a/kernel/Kconfig.locks b/kernel/Kconfig.locks
index 5068e2a..584637b 100644
--- a/kernel/Kconfig.locks
+++ b/kernel/Kconfig.locks
@@ -125,7 +125,7 @@ config INLINE_SPIN_LOCK_IRQSAVE
 ARCH_INLINE_SPIN_LOCK_IRQSAVE
 
 config INLINE_SPIN_UNLOCK
-   def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK)
+   def_bool !DEBUG_SPINLOCK && (!PREEMPT || ARCH_INLINE_SPIN_UNLOCK) && 
!ARCH_NOINLINE_SPIN_UNLOCK
 
 config INLINE_SPIN_UNLOCK_BH
    def_bool !DEBUG_SPINLOCK && ARCH_INLINE_SPIN_UNLOCK_BH
-- 
1.7.6



[PATCH 3/8] x86/ticketlock: collapse a layer of functions

2011-09-02 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h |   35 +--
 1 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index bafed3b..d1a3970 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -80,7 +80,7 @@ static __always_inline void __ticket_unlock_kick(struct 
arch_spinlock *lock, __t
  * save some instructions and make the code more elegant. There really isn't
  * much between them in performance though, especially as locks are out of 
line.
  */
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
register struct __raw_tickets inc = { .tail = 1 };
 
@@ -100,7 +100,7 @@ static __always_inline void __ticket_spin_lock(struct 
arch_spinlock *lock)
 out:   barrier();  /* make sure nothing creeps before the lock is 
taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
arch_spinlock_t old, new;
 
@@ -132,7 +132,7 @@ static __always_inline void 
__ticket_unlock_release(arch_spinlock_t *lock)
 }
 #endif
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
__ticket_t next = lock-tickets.head + 1;
 
@@ -140,46 +140,21 @@ static __always_inline void 
__ticket_spin_unlock(arch_spinlock_t *lock)
__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
struct __raw_tickets tmp = ACCESS_ONCE(lock-tickets);
 
return !!(tmp.tail ^ tmp.head);
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
struct __raw_tickets tmp = ACCESS_ONCE(lock-tickets);
 
 return ((tmp.tail - tmp.head) & TICKET_MASK) > 1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-   return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-   return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-   __ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-   return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-   __ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
  unsigned long flags)
 {
-- 
1.7.6



[PATCH 4/8] xen/pvticketlock: Xen implementation for PV ticket locks

2011-09-02 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which next wants this lock with the next ticket, if any.  If found,
it kicks it by making its event channel pending, which wakes it up.

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |  287 +++
 1 files changed, 43 insertions(+), 244 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 23af06a..f6133c5 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -19,32 +19,21 @@
 #ifdef CONFIG_XEN_DEBUG_FS
 static struct xen_spinlock_stats
 {
-   u64 taken;
u32 taken_slow;
-   u32 taken_slow_nested;
u32 taken_slow_pickup;
u32 taken_slow_spurious;
-   u32 taken_slow_irqenable;
 
-   u64 released;
u32 released_slow;
u32 released_slow_kicked;
 
 #define HISTO_BUCKETS  30
-   u32 histo_spin_total[HISTO_BUCKETS+1];
-   u32 histo_spin_spinning[HISTO_BUCKETS+1];
u32 histo_spin_blocked[HISTO_BUCKETS+1];
 
-   u64 time_total;
-   u64 time_spinning;
u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1  10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
if (unlikely(zero_stats)) {
@@ -73,22 +62,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-   u32 delta = xen_clocksource_read() - start;
-
-   __spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-   spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-   u32 delta = xen_clocksource_read() - start;
-
-   __spin_time_accum(delta, spinlock_stats.histo_spin_total);
-   spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
u32 delta = xen_clocksource_read() - start;
@@ -105,214 +78,84 @@ static inline u64 spin_time_start(void)
return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
-struct xen_spinlock {
-   unsigned char lock; /* 0 - free; 1 - locked */
-   unsigned short spinners;/* count of waiting cpus */
+struct xen_lock_waiting {
+   struct arch_spinlock *lock;
+   __ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-   return xl-lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-   /* Not strictly true; this is only the count of contended
-  lock-takers entering the slow path. */
-   return xl-spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-   u8 old = 1;
-
-   asm(xchgb %b0,%1
-   : +q (old), +m (xl-lock) : : memory);
-
-   return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-   struct xen_spinlock *prev;
-
-   prev = __this_cpu_read(lock_spinners);
-   __this_cpu_write(lock_spinners, xl);
-
-   wmb();  /* set lock of interest before count */
-
-   asm(LOCK_PREFIX  incw %0
-   : +m (xl-spinners) : : memory);
-
-   return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct 
xen_spinlock *prev)
-{
-   asm(LOCK_PREFIX  decw %0
-   : +m (xl-spinners) : : memory);
-   wmb

[PATCH 1/8] x86/spinlocks: replace pv spinlocks with pv ticketlocks

2011-09-02 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow patch (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_spin_lock which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_spin_unlock, which on releasing a contended lock
   (there are more cpus with tail tickets), it looks to see if the next
   cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.
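
The stubs are simply empty inlines along these lines (sketched here; with
them the compiler can discard the slowpath hooks entirely):

#ifndef CONFIG_PARAVIRT_SPINLOCKS

static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock,
                                                   __ticket_t ticket)
{
}

static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock,
                                               __ticket_t ticket)
{
}

#endif  /* CONFIG_PARAVIRT_SPINLOCKS */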

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h   |   30 ++--
 arch/x86/include/asm/paravirt_types.h |   10 ++---
 arch/x86/include/asm/spinlock.h   |   59 ++---
 arch/x86/include/asm/spinlock_types.h |4 --
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +---
 arch/x86/xen/spinlock.c   |7 +++-
 6 files changed, 63 insertions(+), 62 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index a7d2db9..76cae7a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -750,36 +750,14 @@ static inline void __set_fixmap(unsigned /* enum 
fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP)  defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+   PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-   PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
- unsigned long flags)
-{
-   PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-   return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-   PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+   PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h 
b/arch/x86/include/asm/paravirt_types.h
index 8e8b9a4..005e24d 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -327,13 +327,11 @@ struct pv_mmu_ops {
 };
 
 struct arch_spinlock;
+#include asm/spinlock_types.h
+
 struct pv_lock_ops {
-   int (*spin_is_locked)(struct arch_spinlock *lock);
-   int (*spin_is_contended)(struct arch_spinlock *lock);
-   void (*spin_lock)(struct arch_spinlock *lock);
-   void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long 
flags);
-   int (*spin_trylock)(struct arch_spinlock *lock);
-   void (*spin_unlock)(struct arch_spinlock *lock);
+   void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+   void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index f5695ee..bafed3b 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -37,6 +37,32 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD (1 << 11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket

[PATCH 7/8] x86/ticketlock: add slowpath logic

2011-09-02 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Maintain a flag in the LSB of the ticket lock tail which indicates
whether anyone is in the lock slowpath and may need kicking when
the current holder unlocks.  The flags are set when the first locker
enters the slowpath, and cleared when unlocking to an empty queue (ie,
no contention).

In the specific implementation of lock_spinning(), make sure to set
the slowpath flags on the lock just before blocking.  We must do
this before the last-chance pickup test to prevent a deadlock
with the unlocker:

    Unlocker                    Locker
                                test for lock pickup
                                    -> fail
    unlock
    test slowpath
        -> false
                                set slowpath flags
                                block

Whereas this works in any ordering:

    Unlocker                    Locker
                                set slowpath flags
                                test for lock pickup
                                    -> fail
                                block
    unlock
    test slowpath
        -> true, kick

Note: this code relies on gcc making sure that unlikely() code is out of
line of the fastpath, which only happens when OPTIMIZE_SIZE=n.  If it
doesn't, the generated code isn't too bad, but it's definitely suboptimal.

(Thanks to Srivatsa Vaddagiri for providing a bugfix to the original
version of this change, which has been folded in.)

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
Signed-off-by: Srivatsa Vaddagiri va...@linux.vnet.ibm.com
---
 arch/x86/include/asm/paravirt.h   |2 +-
 arch/x86/include/asm/spinlock.h   |   72 ++---
 arch/x86/include/asm/spinlock_types.h |2 +
 arch/x86/kernel/paravirt-spinlocks.c  |1 +
 arch/x86/xen/spinlock.c   |4 ++
 5 files changed, 65 insertions(+), 16 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 50281c7..13b3d8b 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -755,7 +755,7 @@ static __always_inline void __ticket_lock_spinning(struct 
arch_spinlock *lock, _
PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 7a1c0c4..64422f1 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -40,29 +40,46 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD (1 << 11)
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
 
-static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
+/*
+ * Return true if someone is in the slowpath on this lock.  This
+ * should only be used by the current lock-holder.
+ */
+static inline bool __ticket_in_slowpath(struct arch_spinlock *lock)
 {
+   return !!(lock->tickets.tail & TICKET_SLOWPATH_FLAG);
 }
 
-static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
+static inline void __ticket_enter_slowpath(struct arch_spinlock *lock)
 {
+   if (sizeof(lock->tickets.tail) == sizeof(u8))
+   asm (LOCK_PREFIX "orb %1, %0"
+: "+m" (lock->tickets.tail)
+: "i" (TICKET_SLOWPATH_FLAG) : "memory");
+   else
+   asm (LOCK_PREFIX "orw %1, %0"
+: "+m" (lock->tickets.tail)
+: "i" (TICKET_SLOWPATH_FLAG) : "memory");
 }
 
-#endif /* CONFIG_PARAVIRT_SPINLOCKS */
+#else  /* !CONFIG_PARAVIRT_SPINLOCKS */
+static inline bool __ticket_in_slowpath(struct arch_spinlock *lock)
+{
+   return false;
+}
 
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
+{
+}
 
-/* 
- * If a spinlock has someone waiting on it, then kick the appropriate
- * waiting cpu.
- */
-static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t next)
+static inline void __ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t 
ticket)
 {
-   if (unlikely(lock-tickets.tail != next))
-   ticket_unlock_kick(lock, next);
 }
 
+#endif /* CONFIG_PARAVIRT_SPINLOCKS */
+
+
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
@@ -85,15 +102,17 @@ static __always_inline void arch_spin_lock(struct 
arch_spinlock *lock)
register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
inc = xadd(lock-tickets, inc);
+   if (likely(inc.head == inc.tail))
+   goto out

[PATCH 8/8] xen/pvticketlock: allow interrupts to be enabled while blocking

2011-09-02 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

If interrupts were enabled when taking the spinlock, we can leave them
enabled while blocking to get the lock.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |   42 +++---
 1 files changed, 35 insertions(+), 7 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index c939723..d2335f88 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -106,11 +106,28 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
__ticket_t want)
 
start = spin_time_start();
 
-   /* Make sure interrupts are disabled to ensure that these
-  per-cpu values are not overwritten. */
+   /*
+* Make sure an interrupt handler can't upset things in a
+* partially setup state.
+*/
local_irq_save(flags);
 
+   /*
+* We don't really care if we're overwriting some other
+* (lock,want) pair, as that would mean that we're currently
+* in an interrupt context, and the outer context had
+* interrupts enabled.  That has already kicked the VCPU out
+* of xen_poll_irq(), so it will just return spuriously and
+* retry with newly setup (lock,want).
+*
+* The ordering protocol on this is that the lock pointer
+* may only be set non-NULL if the want ticket is correct.
+* If we're updating want, we must first clear lock.
+*/
+   w-lock = NULL;
+   smp_wmb();
w-want = want;
+   smp_wmb();
w-lock = lock;
 
/* This uses set_bit, which atomic and therefore a barrier */
@@ -124,21 +141,30 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
__ticket_t want)
/* Only check lock once pending cleared */
barrier();
 
-   /* Mark entry to slowpath before doing the pickup test to make
-  sure we don't deadlock with an unlocker. */
+   /*
+* Mark entry to slowpath before doing the pickup test to make
+* sure we don't deadlock with an unlocker.
+*/
__ticket_enter_slowpath(lock);
 
-   /* check again make sure it didn't become free while
-  we weren't looking  */
+   /*
+* check again make sure it didn't become free while
+* we weren't looking 
+*/
if (ACCESS_ONCE(lock-tickets.head) == want) {
ADD_STATS(taken_slow_pickup, 1);
goto out;
}
 
+   /* Allow interrupts while blocked */
+   local_irq_restore(flags);
+
/* Block until irq becomes pending (or perhaps a spurious wakeup) */
xen_poll_irq(irq);
ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
+   local_irq_save(flags);
+
kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
@@ -160,7 +186,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, 
__ticket_t next)
for_each_cpu(cpu, waiting_cpus) {
const struct xen_lock_waiting *w = per_cpu(lock_waiting, cpu);
 
-   if (w->lock == lock && w->want == next) {
+   /* Make sure we read lock before want */
+   if (ACCESS_ONCE(w->lock) == lock &&
+   ACCESS_ONCE(w->want) == next) {
ADD_STATS(released_slow_kicked, 1);
xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
break;
-- 
1.7.6



[PATCH 5/8] x86/pvticketlock: use callee-save for lock_spinning

2011-09-02 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Although the lock_spinning calls in the spinlock code are on the
uncommon path, their presence can cause the compiler to generate many
more register save/restores in the function pre/postamble, which is in
the fast path.  To avoid this, convert it to using the pvops callee-save
calling convention, which defers all the save/restores until the actual
function is called, keeping the fastpath clean.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h   |2 +-
 arch/x86/include/asm/paravirt_types.h |2 +-
 arch/x86/kernel/paravirt-spinlocks.c  |2 +-
 arch/x86/xen/spinlock.c   |3 ++-
 4 files changed, 5 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 76cae7a..50281c7 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -752,7 +752,7 @@ static inline void __set_fixmap(unsigned /* enum 
fixed_addresses */ idx,
 
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
-   PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
+   PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
 static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
diff --git a/arch/x86/include/asm/paravirt_types.h 
b/arch/x86/include/asm/paravirt_types.h
index 005e24d..5e0c138 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -330,7 +330,7 @@ struct arch_spinlock;
 #include asm/spinlock_types.h
 
 struct pv_lock_ops {
-   void (*lock_spinning)(struct arch_spinlock *lock, __ticket_t ticket);
+   struct paravirt_callee_save lock_spinning;
void (*unlock_kick)(struct arch_spinlock *lock, __ticket_t ticket);
 };
 
diff --git a/arch/x86/kernel/paravirt-spinlocks.c 
b/arch/x86/kernel/paravirt-spinlocks.c
index c2e010e..4251c1d 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -9,7 +9,7 @@
 
 struct pv_lock_ops pv_lock_ops = {
 #ifdef CONFIG_SMP
-   .lock_spinning = paravirt_nop,
+   .lock_spinning = __PV_IS_CALLEE_SAVE(paravirt_nop),
.unlock_kick = paravirt_nop,
 #endif
 };
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index f6133c5..7a04950 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -145,6 +145,7 @@ out:
 
spin_time_accum_blocked(start);
 }
+PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
 static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
@@ -197,7 +198,7 @@ void xen_uninit_lock_cpu(int cpu)
 
 void __init xen_init_spinlocks(void)
 {
-   pv_lock_ops.lock_spinning = xen_lock_spinning;
+   pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
pv_lock_ops.unlock_kick = xen_unlock_kick;
 }
 
-- 
1.7.6



[PATCH 0/8] [PATCH RFC V2] Paravirtualized ticketlocks

2011-09-02 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

[ Changes since last posting:
  - fold all the cleanup/bugfix patches into their base patches
  - change spin_lock to make sure fastpath has no cruft in it
  - make sure it doesn't attempt to inline unlock
]

NOTE: this series is based on tip.git tip/x86/spinlocks

This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism.

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct next
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into slowpath state.

- When releasing a lock, if it is in slowpath state, the call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag.

The slowpath state is stored in the LSB of the lock tail ticket.  This
has the effect of reducing the max number of CPUs by half (so, a small
ticket can deal with 128 CPUs, and a large ticket with 32768).

This series provides a Xen implementation, but it should be
straightforward to add a KVM implementation as well.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.

The fast path (taking an uncontended lock which isn't in slowpath
state) is optimal, identical to the non-paravirtualized case.

The inner part of ticket lock code becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	if (likely(inc.head == inc.tail))
		goto out;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (ACCESS_ONCE(lock->tickets.head) == inc.tail)
				goto out;
			cpu_relax();
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}
out:	barrier();

which results in:
	push   %rbp
	mov    %rsp,%rbp

	mov    $0x200,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq

	### SLOWPATH START
1:	and    $-2,%edx
	movzbl %dl,%esi

2:	mov    $0x800,%eax
	jmp    4f

3:	pause
	sub    $0x1,%eax
	je     5f

4:	movzbl (%rdi),%ecx
	cmp    %cl,%dl
	jne    3b

	pop    %rbp
	retq

5:	callq  *__ticket_lock_spinning
	jmp    2b
	### SLOWPATH END

with CONFIG_PARAVIRT_SPINLOCKS=n, the code has changed slightly, where
the fastpath case is straight through (taking the lock without
contention), and the spin loop is out of line:

	push   %rbp
	mov    %rsp,%rbp

	mov    $0x100,%eax
	lock xadd %ax,(%rdi)
	movzbl %ah,%edx
	cmp    %al,%dl
	jne    1f

	pop    %rbp
	retq

	### SLOWPATH START
1:	pause
	movzbl (%rdi),%eax
	cmp    %dl,%al
	jne    1b

	pop    %rbp
	retq
	### SLOWPATH END

The unlock code is very straightforward:

	__ticket_unlock_release(lock);
	if (unlikely(__ticket_in_slowpath(lock)))
		__ticket_unlock_slowpath(lock);

which generates:

	push   %rbp
	mov    %rsp,%rbp

	addb   $0x2,(%rdi)
	testb  $0x1,0x1(%rdi)
	jne    1f

	pop    %rbp
	retq

	### SLOWPATH START
1:	movzwl (%rdi),%edx
	movzbl %dh,%ecx
	mov    %edx,%eax
	and    $-2,%ecx			# clear TICKET_SLOWPATH_FLAG
	mov    %cl,%dh
	cmp    %dl,%cl			# test to see if lock is uncontended
	je     3f

2:	movzbl %dl,%esi
	callq  *__ticket_unlock_kick	# kick anyone waiting
	pop    %rbp
	retq

3:  lock cmpxchg %dx,(%rdi) # use

[PATCH 03/13] xen/pvticketlock: Xen implementation for PV ticket locks

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Replace the old Xen implementation of PV spinlocks with an implementation
of xen_lock_spinning and xen_unlock_kick.

xen_lock_spinning simply registers the cpu in its entry in lock_waiting,
adds itself to the waiting_cpus set, and blocks on an event channel
until the channel becomes pending.

xen_unlock_kick searches the cpus in waiting_cpus looking for the one
which next wants this lock with the next ticket, if any.  If found,
it kicks it by making its event channel pending, which wakes it up.
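
The shape of this protocol can be sketched as a user-space analogue
(illustration only: POSIX semaphores stand in for the per-cpu Xen event
channel, and all names below are invented):

	/* Each "cpu" publishes which (lock, ticket) it is waiting for,
	 * then blocks on its private semaphore; the unlocker scans the
	 * waiters and posts the semaphore of whoever holds the next
	 * ticket.  A posted-but-not-yet-consumed kick is not lost. */
	#include <semaphore.h>
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	#define NCPUS 4

	struct waiting {
		void *lock;		/* which lock this cpu blocks on */
		unsigned want;		/* which ticket it is waiting for */
		sem_t kick;		/* stand-in for the event channel */
	};

	static struct waiting lock_waiting[NCPUS];
	static atomic_bool waiting_cpus[NCPUS];	/* stand-in for the cpumask */

	static void lock_spinning(int cpu, void *lock, unsigned want)
	{
		struct waiting *w = &lock_waiting[cpu];

		w->want = want;
		w->lock = lock;
		atomic_store(&waiting_cpus[cpu], true);

		sem_wait(&w->kick);	/* block until kicked */

		atomic_store(&waiting_cpus[cpu], false);
		w->lock = NULL;
	}

	static void unlock_kick(void *lock, unsigned next)
	{
		for (int cpu = 0; cpu < NCPUS; cpu++) {
			struct waiting *w = &lock_waiting[cpu];

			if (atomic_load(&waiting_cpus[cpu]) &&
			    w->lock == lock && w->want == next) {
				sem_post(&w->kick);	/* wake that waiter */
				break;
			}
		}
	}

	int main(void)
	{
		static int some_lock;

		for (int i = 0; i < NCPUS; i++)
			sem_init(&lock_waiting[i].kick, 0, 0);

		/* single-threaded demo: pre-register cpu 0, kick it, then
		 * "block"; the pending kick makes sem_wait return at once */
		lock_waiting[0].want = 2;
		lock_waiting[0].lock = &some_lock;
		atomic_store(&waiting_cpus[0], true);
		unlock_kick(&some_lock, 2);
		lock_spinning(0, &some_lock, 2);
		return 0;
	}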

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |  281 ++-
 1 files changed, 36 insertions(+), 245 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 23af06a..c1bd84c 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -19,32 +19,21 @@
 #ifdef CONFIG_XEN_DEBUG_FS
 static struct xen_spinlock_stats
 {
-   u64 taken;
u32 taken_slow;
-   u32 taken_slow_nested;
u32 taken_slow_pickup;
u32 taken_slow_spurious;
-   u32 taken_slow_irqenable;
 
-   u64 released;
u32 released_slow;
u32 released_slow_kicked;
 
 #define HISTO_BUCKETS  30
-   u32 histo_spin_total[HISTO_BUCKETS+1];
-   u32 histo_spin_spinning[HISTO_BUCKETS+1];
u32 histo_spin_blocked[HISTO_BUCKETS+1];
 
-   u64 time_total;
-   u64 time_spinning;
u64 time_blocked;
 } spinlock_stats;
 
 static u8 zero_stats;
 
-static unsigned lock_timeout = 1  10;
-#define TIMEOUT lock_timeout
-
 static inline void check_zero(void)
 {
if (unlikely(zero_stats)) {
@@ -73,22 +62,6 @@ static void __spin_time_accum(u64 delta, u32 *array)
array[HISTO_BUCKETS]++;
 }
 
-static inline void spin_time_accum_spinning(u64 start)
-{
-   u32 delta = xen_clocksource_read() - start;
-
-   __spin_time_accum(delta, spinlock_stats.histo_spin_spinning);
-   spinlock_stats.time_spinning += delta;
-}
-
-static inline void spin_time_accum_total(u64 start)
-{
-   u32 delta = xen_clocksource_read() - start;
-
-   __spin_time_accum(delta, spinlock_stats.histo_spin_total);
-   spinlock_stats.time_total += delta;
-}
-
 static inline void spin_time_accum_blocked(u64 start)
 {
u32 delta = xen_clocksource_read() - start;
@@ -105,214 +78,76 @@ static inline u64 spin_time_start(void)
return 0;
 }
 
-static inline void spin_time_accum_total(u64 start)
-{
-}
-static inline void spin_time_accum_spinning(u64 start)
-{
-}
 static inline void spin_time_accum_blocked(u64 start)
 {
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
-struct xen_spinlock {
-   unsigned char lock; /* 0 - free; 1 - locked */
-   unsigned short spinners;/* count of waiting cpus */
+struct xen_lock_waiting {
+   struct arch_spinlock *lock;
+   __ticket_t want;
 };
 
 static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
+static cpumask_t waiting_cpus;
 
-#if 0
-static int xen_spin_is_locked(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-   return xl-lock != 0;
-}
-
-static int xen_spin_is_contended(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-
-   /* Not strictly true; this is only the count of contended
-  lock-takers entering the slow path. */
-   return xl-spinners != 0;
-}
-
-static int xen_spin_trylock(struct arch_spinlock *lock)
-{
-   struct xen_spinlock *xl = (struct xen_spinlock *)lock;
-   u8 old = 1;
-
-   asm(xchgb %b0,%1
-   : +q (old), +m (xl-lock) : : memory);
-
-   return old == 0;
-}
-
-static DEFINE_PER_CPU(struct xen_spinlock *, lock_spinners);
-
-/*
- * Mark a cpu as interested in a lock.  Returns the CPU's previous
- * lock of interest, in case we got preempted by an interrupt.
- */
-static inline struct xen_spinlock *spinning_lock(struct xen_spinlock *xl)
-{
-   struct xen_spinlock *prev;
-
-   prev = __this_cpu_read(lock_spinners);
-   __this_cpu_write(lock_spinners, xl);
-
-   wmb();  /* set lock of interest before count */
-
-   asm(LOCK_PREFIX  incw %0
-   : +m (xl-spinners) : : memory);
-
-   return prev;
-}
-
-/*
- * Mark a cpu as no longer interested in a lock.  Restores previous
- * lock of interest (NULL for none).
- */
-static inline void unspinning_lock(struct xen_spinlock *xl, struct 
xen_spinlock *prev)
-{
-   asm(LOCK_PREFIX  decw %0
-   : +m (xl-spinners) : : memory);
-   wmb();  /* decrement count before restoring lock */
-   __this_cpu_write(lock_spinners, prev);
-}
-
-static noinline int xen_spin_lock_slow(struct arch_spinlock *lock, bool 
irq_enable)
+static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
 {
-   struct xen_spinlock

[PATCH 05/13] x86/ticketlocks: when paravirtualizing ticket locks, increment by 2

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Increment ticket head/tail by 2 rather than 1 to leave the LSB free
to store an "is in slowpath state" bit.  This halves the number
of possible CPUs for a given ticket size, but this shouldn't matter
in practice - kernels built for 32k+ CPU systems are probably
specially built for the hardware rather than a generic distro
kernel.
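
As a standalone illustration (not kernel code) of what reserving the
LSB costs and buys:

	#include <stdio.h>
	#include <stdint.h>

	typedef uint8_t __ticket_t;		/* the NR_CPUS <= 128 case */

	#define TICKET_LOCK_INC		((__ticket_t)2)
	#define TICKET_SLOWPATH_FLAG	((__ticket_t)1)

	int main(void)
	{
		__ticket_t tail = 0;

		for (int cpu = 0; cpu < 4; cpu++) {
			__ticket_t ticket = tail;
			__ticket_t flagged = ticket | TICKET_SLOWPATH_FLAG;
			__ticket_t masked = flagged & (__ticket_t)~TICKET_SLOWPATH_FLAG;

			/* the low bit never carries ticket information */
			printf("ticket %u  with-flag %u  masked %u\n",
			       (unsigned)ticket, (unsigned)flagged,
			       (unsigned)masked);

			tail = (__ticket_t)(tail + TICKET_LOCK_INC);
		}

		/* a u8 wraps at 256, so steps of 2 leave 128 usable tickets */
		printf("max cpus with u8 tickets: %d\n", 256 / TICKET_LOCK_INC);
		return 0;
	}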

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h   |   16 
 arch/x86/include/asm/spinlock_types.h |   10 +-
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index c1d9617..6028b01 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -83,7 +83,7 @@ static __always_inline void __ticket_unlock_kick(struct 
arch_spinlock *lock, __t
  */
 static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
-   register struct __raw_tickets inc = { .tail = 1 };
+   register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
inc = xadd(lock-tickets, inc);
 
@@ -109,7 +109,7 @@ static __always_inline int 
arch_spin_trylock(arch_spinlock_t *lock)
if (old.tickets.head != old.tickets.tail)
return 0;
 
-   new.head_tail = old.head_tail + (1  TICKET_SHIFT);
+   new.head_tail = old.head_tail + (TICKET_LOCK_INC  TICKET_SHIFT);
 
/* cmpxchg is a full barrier, so nothing can move before it */
return cmpxchg(lock-head_tail, old.head_tail, new.head_tail) == 
old.head_tail;
@@ -118,24 +118,24 @@ static __always_inline int 
arch_spin_trylock(arch_spinlock_t *lock)
 #if (NR_CPUS  256)
 static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
-   asm volatile(UNLOCK_LOCK_PREFIX incb %0
+   asm volatile(UNLOCK_LOCK_PREFIX addb %1, %0
 : +m (lock-head_tail)
-:
+: i (TICKET_LOCK_INC)
 : memory, cc);
 }
 #else
 static __always_inline void __ticket_unlock_release(arch_spinlock_t *lock)
 {
-   asm volatile(UNLOCK_LOCK_PREFIX incw %0
+   asm volatile(UNLOCK_LOCK_PREFIX addw %1, %0
 : +m (lock-head_tail)
-:
+: i (TICKET_LOCK_INC)
 : memory, cc);
 }
 #endif
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-   __ticket_t next = lock-tickets.head + 1;
+   __ticket_t next = lock-tickets.head + TICKET_LOCK_INC;
 
__ticket_unlock_release(lock);
__ticket_unlock_kick(lock, next);
@@ -152,7 +152,7 @@ static inline int arch_spin_is_contended(arch_spinlock_t 
*lock)
 {
struct __raw_tickets tmp = ACCESS_ONCE(lock-tickets);
 
-   return ((tmp.tail - tmp.head)  TICKET_MASK)  1;
+   return ((tmp.tail - tmp.head)  TICKET_MASK)  TICKET_LOCK_INC;
 }
 #define arch_spin_is_contended arch_spin_is_contended
 
diff --git a/arch/x86/include/asm/spinlock_types.h 
b/arch/x86/include/asm/spinlock_types.h
index 72e154e..0553c0b 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -7,7 +7,13 @@
 
 #include linux/types.h
 
-#if (CONFIG_NR_CPUS  256)
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+#define __TICKET_LOCK_INC  2
+#else
+#define __TICKET_LOCK_INC  1
+#endif
+
+#if (CONFIG_NR_CPUS  (256 / __TICKET_LOCK_INC))
 typedef u8  __ticket_t;
 typedef u16 __ticketpair_t;
 #else
@@ -15,6 +21,8 @@ typedef u16 __ticket_t;
 typedef u32 __ticketpair_t;
 #endif
 
+#define TICKET_LOCK_INC((__ticket_t)__TICKET_LOCK_INC)
+
 #define TICKET_SHIFT   (sizeof(__ticket_t) * 8)
 #define TICKET_MASK((__ticket_t)((1  TICKET_SHIFT) - 1))
 
-- 
1.7.6



[PATCH 10/13] xen/pvticket: allow interrupts to be enabled while blocking

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |   28 +---
 1 files changed, 25 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 2ed5d05..7b89439 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -106,11 +106,26 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
unsigned want)
 
start = spin_time_start();
 
-   /* Make sure interrupts are disabled to ensure that these
-  per-cpu values are not overwritten. */
+   /* Make sure an interrupt handler can't upset things in a
+  partially setup state. */
local_irq_save(flags);
 
+   /*
+* We don't really care if we're overwriting some other
+* (lock,want) pair, as that would mean that we're currently
+* in an interrupt context, and the outer context had
+* interrupts enabled.  That has already kicked the VCPU out
+* of xen_poll_irq(), so it will just return spuriously and
+* retry with newly setup (lock,want).
+*
+* The ordering protocol on this is that the lock pointer
+* may only be set non-NULL if the want ticket is correct.
+* If we're updating want, we must first clear lock.
+*/
+   w-lock = NULL;
+   smp_wmb();
w-want = want;
+   smp_wmb();
w-lock = lock;
 
/* This uses set_bit, which atomic and therefore a barrier */
@@ -135,10 +150,15 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
unsigned want)
goto out;
}
 
+   /* Allow interrupts while blocked */
+   local_irq_restore(flags);
+
/* Block until irq becomes pending (or perhaps a spurious wakeup) */
xen_poll_irq(irq);
ADD_STATS(taken_slow_spurious, !xen_test_irq_pending(irq));
 
+   local_irq_save(flags);
+
kstat_incr_irqs_this_cpu(irq, irq_to_desc(irq));
 
 out:
@@ -160,7 +180,9 @@ static void xen_unlock_kick(struct arch_spinlock *lock, 
unsigned next)
for_each_cpu(cpu, waiting_cpus) {
const struct xen_lock_waiting *w = per_cpu(lock_waiting, cpu);
 
-   if (w-lock == lock  w-want == next) {
+   /* Make sure we read lock before want */
+   if (ACCESS_ONCE(w-lock) == lock 
+   ACCESS_ONCE(w-want) == next) {
ADD_STATS(released_slow_kicked, 1);
xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
break;
-- 
1.7.6
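
The (lock, want) publication protocol described in the comment added
above can also be written as a standalone C11-atomics sketch
(illustration only, with invented names; release fences model smp_wmb()
and the acquire load on the kicker side models "read lock before want"):

	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct waiting {
		_Atomic(void *) lock;
		_Atomic unsigned want;
	};

	/* waiter side: never let a stale want be visible while lock is
	 * non-NULL -- clear lock, publish want, then publish lock */
	static void publish(struct waiting *w, void *lock, unsigned want)
	{
		atomic_store_explicit(&w->lock, NULL, memory_order_relaxed);
		atomic_thread_fence(memory_order_release);	/* ~ smp_wmb() */
		atomic_store_explicit(&w->want, want, memory_order_relaxed);
		atomic_thread_fence(memory_order_release);	/* ~ smp_wmb() */
		atomic_store_explicit(&w->lock, lock, memory_order_relaxed);
	}

	/* kicker side: read lock before want, so a matching non-NULL
	 * lock guarantees want belongs to the same registration */
	static bool wants_ticket(struct waiting *w, void *lock, unsigned next)
	{
		void *l = atomic_load_explicit(&w->lock, memory_order_acquire);

		return l == lock &&
		       atomic_load_explicit(&w->want, memory_order_relaxed) == next;
	}

	int main(void)
	{
		static struct waiting w;
		static int lockdummy;

		publish(&w, &lockdummy, 7);
		return wants_ticket(&w, &lockdummy, 7) ? 0 : 1;
	}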



[PATCH 13/13] x86/pvticketlock: use __ticket_t for pvop args

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Use __ticket_t for the ticket argument to the pvops, to prevent
unnecessary zero-extension in the calling code.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h   |6 --
 arch/x86/include/asm/spinlock_types.h |4 
 arch/x86/xen/spinlock.c   |4 ++--
 3 files changed, 6 insertions(+), 8 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index a6f2651..932a682 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -741,12 +741,14 @@ static inline void __set_fixmap(unsigned /* enum 
fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP)  defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
unsigned ticket)
+#include asm/spinlock_types.h
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, 
unsigned ticket)
+static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t ticket)
 {
PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
diff --git a/arch/x86/include/asm/spinlock_types.h 
b/arch/x86/include/asm/spinlock_types.h
index 7b383e2..62ea99e 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -1,10 +1,6 @@
 #ifndef _ASM_X86_SPINLOCK_TYPES_H
 #define _ASM_X86_SPINLOCK_TYPES_H
 
-#ifndef __LINUX_SPINLOCK_TYPES_H
-# error please don't include this file directly
-#endif
-
 #include linux/types.h
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 7b89439..91b0940 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -92,7 +92,7 @@ static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
 static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
 static cpumask_t waiting_cpus;
 
-static void xen_lock_spinning(struct arch_spinlock *lock, unsigned want)
+static void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
int irq = __this_cpu_read(lock_kicker_irq);
struct xen_lock_waiting *w = __get_cpu_var(lock_waiting);
@@ -171,7 +171,7 @@ out:
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
 
-static void xen_unlock_kick(struct arch_spinlock *lock, unsigned next)
+static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
 {
int cpu;
 
-- 
1.7.6



[PATCH 06/13] x86/ticketlock: add slowpath logic

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Maintain a flag in both LSBs of the ticket lock which indicates whether
anyone is in the lock slowpath and may need kicking when the current
holder unlocks.  The flags are set when the first locker enters
the slowpath, and cleared when unlocking to an empty queue.

In the specific implementation of lock_spinning(), make sure to set
the slowpath flags on the lock just before blocking.  We must do
this before the last-chance pickup test to prevent a deadlock
with the unlocker:

    Unlocker                     Locker
                                 test for lock pickup
                                     -> fail
    test slowpath + unlock
        -> false
                                 set slowpath flags
                                 block

Whereas this works in any ordering:

    Unlocker                     Locker
                                 set slowpath flags
                                 test for lock pickup
                                     -> fail
                                 block
    test slowpath + unlock
        -> true, kick
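
As a standalone sketch of the locker-side ordering (invented helper
names, not the patch itself):

	#include <stdatomic.h>

	typedef unsigned char ticket_t;
	#define SLOWPATH_FLAG ((ticket_t)1)

	struct ticketlock {
		_Atomic ticket_t head;
		_Atomic ticket_t tail;
	};

	/* stubs for the parts that don't matter to the ordering */
	static void publish_waiter(struct ticketlock *l, ticket_t want)
	{ (void)l; (void)want; }
	static void block_until_kicked(void) { }
	static void unpublish_waiter(void) { }

	static void lock_spinning_sketch(struct ticketlock *l, ticket_t want)
	{
		/* 1. record (lock, want) so the unlocker can find this cpu */
		publish_waiter(l, want);

		/* 2. mark the lock: someone is in the slowpath */
		atomic_fetch_or(&l->tail, SLOWPATH_FLAG);

		/* 3. last-chance pickup test -- only safe after step 2:
		 *    from here on, an unlocker must see the flag and kick */
		if (atomic_load(&l->head) == want)
			goto out;

		/* 4. block; in the real code a kick that already arrived
		 *    leaves the event channel pending, so it is not lost */
		block_until_kicked();
	out:
		unpublish_waiter();
	}

	int main(void)
	{
		struct ticketlock l = { 0, 0 };

		lock_spinning_sketch(&l, 0);	/* picks the lock up at step 3 */
		return 0;
	}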

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h   |   41 +---
 arch/x86/include/asm/spinlock_types.h |2 +
 arch/x86/kernel/paravirt-spinlocks.c  |   37 +
 arch/x86/xen/spinlock.c   |4 +++
 4 files changed, 80 insertions(+), 4 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 6028b01..2135a48 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -41,7 +41,38 @@
 /* How long a lock should spin before we consider blocking */
 #define SPIN_THRESHOLD (1  11)
 
-#ifndef CONFIG_PARAVIRT_SPINLOCKS
+/* Only defined when CONFIG_PARAVIRT_SPINLOCKS defined, but may as
+ * well leave the prototype always visible.  */
+extern void __ticket_unlock_release_slowpath(struct arch_spinlock *lock);
+
+#ifdef CONFIG_PARAVIRT_SPINLOCKS
+
+/*
+ * Return true if someone is in the slowpath on this lock.  This
+ * should only be used by the current lock-holder.
+ */
+static inline bool __ticket_in_slowpath(struct arch_spinlock *lock)
+{
+   return !!(lock-tickets.tail  TICKET_SLOWPATH_FLAG);
+}
+
+static inline void __ticket_enter_slowpath(struct arch_spinlock *lock)
+{
+   if (sizeof(lock-tickets.tail) == sizeof(u8))
+   asm (LOCK_PREFIX orb %1, %0
+: +m (lock-tickets.tail)
+: i (TICKET_SLOWPATH_FLAG) : memory);
+   else
+   asm (LOCK_PREFIX orw %1, %0
+: +m (lock-tickets.tail)
+: i (TICKET_SLOWPATH_FLAG) : memory);
+}
+
+#else  /* !CONFIG_PARAVIRT_SPINLOCKS */
+static inline bool __ticket_in_slowpath(struct arch_spinlock *lock)
+{
+   return false;
+}
 
 static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
unsigned ticket)
 {
@@ -86,6 +117,7 @@ static __always_inline void arch_spin_lock(struct 
arch_spinlock *lock)
register struct __raw_tickets inc = { .tail = TICKET_LOCK_INC };
 
inc = xadd(lock-tickets, inc);
+   inc.tail = ~TICKET_SLOWPATH_FLAG;
 
for (;;) {
unsigned count = SPIN_THRESHOLD;
@@ -135,10 +167,11 @@ static __always_inline void 
__ticket_unlock_release(arch_spinlock_t *lock)
 
 static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
-   __ticket_t next = lock-tickets.head + TICKET_LOCK_INC;
 
-   __ticket_unlock_release(lock);
-   __ticket_unlock_kick(lock, next);
+   if (unlikely(__ticket_in_slowpath(lock)))
+   __ticket_unlock_release_slowpath(lock);
+   else
+   __ticket_unlock_release(lock);
 }
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
diff --git a/arch/x86/include/asm/spinlock_types.h 
b/arch/x86/include/asm/spinlock_types.h
index 0553c0b..7b383e2 100644
--- a/arch/x86/include/asm/spinlock_types.h
+++ b/arch/x86/include/asm/spinlock_types.h
@@ -9,8 +9,10 @@
 
 #ifdef CONFIG_PARAVIRT_SPINLOCKS
 #define __TICKET_LOCK_INC  2
+#define TICKET_SLOWPATH_FLAG   ((__ticket_t)1)
 #else
 #define __TICKET_LOCK_INC  1
+#define TICKET_SLOWPATH_FLAG   ((__ticket_t)0)
 #endif
 
 #if (CONFIG_NR_CPUS  (256 / __TICKET_LOCK_INC))
diff --git a/arch/x86/kernel/paravirt-spinlocks.c 
b/arch/x86/kernel/paravirt-spinlocks.c
index 4251c1d..21b6986 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -15,3 +15,40 @@ struct pv_lock_ops pv_lock_ops = {
 };
 EXPORT_SYMBOL(pv_lock_ops);
 
+
+/*
+ * If we're unlocking and we're leaving the lock uncontended (there's
+ * nobody else waiting for the lock), then we can clear the slowpath
+ * bits.  However, we need to be careful about this because someone
+ * may just be entering as we leave, and enter the slowpath.
+ */
+void __ticket_unlock_release_slowpath(struct
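
The rest of this function is cut off here in the archive.  Based on the
comment just before it and on the fragment that PATCH 09/13 later
modifies, its shape is roughly the following (a reconstruction for
readability, not the verbatim patch):

	void __ticket_unlock_release_slowpath(struct arch_spinlock *lock)
	{
		arch_spinlock_t old, new;

		old = ACCESS_ONCE(*lock);
		new = old;
		new.tickets.head += TICKET_LOCK_INC;		/* release... */
		new.tickets.tail &= ~TICKET_SLOWPATH_FLAG;	/* ...and clear the flag */

		/* Try to switch to "released, uncontended, flag clear" in one
		 * cmpxchg; if there are waiters, or someone snuck in since we
		 * sampled the lock, fall back to a plain release plus a kick. */
		if (new.tickets.head != new.tickets.tail ||
		    cmpxchg(&lock->head_tail,
			    old.head_tail, new.head_tail) != old.head_tail) {
			/* still people waiting */
			__ticket_unlock_release(lock);
			__ticket_unlock_kick(lock, new.tickets.head);
		}
	}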

[PATCH 08/13] xen/pvticketlock: disable interrupts while blocking

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

We need to make sure interrupts are disabled while we're relying on the
contents of the per-cpu lock_waiting values, otherwise an interrupt
handler could come in, try to take some other lock, block, and overwrite
our values.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/xen/spinlock.c |8 
 1 files changed, 8 insertions(+), 0 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 4c46144..2ed5d05 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -98,6 +98,7 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
unsigned want)
struct xen_lock_waiting *w = __get_cpu_var(lock_waiting);
int cpu = smp_processor_id();
u64 start;
+   unsigned long flags;
 
/* If kicker interrupts not initialized yet, just spin */
if (irq == -1)
@@ -105,6 +106,10 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
unsigned want)
 
start = spin_time_start();
 
+   /* Make sure interrupts are disabled to ensure that these
+  per-cpu values are not overwritten. */
+   local_irq_save(flags);
+
w-want = want;
w-lock = lock;
 
@@ -139,6 +144,9 @@ static void xen_lock_spinning(struct arch_spinlock *lock, 
unsigned want)
 out:
cpumask_clear_cpu(cpu, waiting_cpus);
w-lock = NULL;
+
+   local_irq_restore(flags);
+
spin_time_accum_blocked(start);
 }
 PV_CALLEE_SAVE_REGS_THUNK(xen_lock_spinning);
-- 
1.7.6



[PATCH 07/13] x86/ticketlocks: tidy up __ticket_unlock_kick()

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

__ticket_unlock_kick() is now only called from known slowpaths, so there's
no need for it to do any checking of its own.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h |2 +-
 arch/x86/include/asm/spinlock.h |   14 --
 2 files changed, 1 insertions(+), 15 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index 622f3d6..b89699a 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -746,7 +746,7 @@ static inline void __ticket_lock_spinning(struct 
arch_spinlock *lock, unsigned t
PVOP_VCALLEE2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline void ticket_unlock_kick(struct arch_spinlock *lock, unsigned 
ticket)
+static inline void __ticket_unlock_kick(struct arch_spinlock *lock, unsigned 
ticket)
 {
PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 2135a48..365d787 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -78,23 +78,9 @@ static __always_inline void __ticket_lock_spinning(struct 
arch_spinlock *lock, u
 {
 }
 
-static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
unsigned ticket)
-{
-}
-
 #endif /* CONFIG_PARAVIRT_SPINLOCKS */
 
 
-/* 
- * If a spinlock has someone waiting on it, then kick the appropriate
- * waiting cpu.
- */
-static __always_inline void __ticket_unlock_kick(struct arch_spinlock *lock, 
__ticket_t next)
-{
-   if (unlikely(lock-tickets.tail != next))
-   ticket_unlock_kick(lock, next);
-}
-
 /*
  * Ticket locks are conceptually two parts, one indicating the current head of
  * the queue, and the other indicating the current tail. The lock is acquired
-- 
1.7.6



[PATCH 00/13] [PATCH RFC] Paravirtualized ticketlocks

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

This series replaces the existing paravirtualized spinlock mechanism
with a paravirtualized ticketlock mechanism.

Ticket locks have an inherent problem in a virtualized case, because
the vCPUs are scheduled rather than running concurrently (ignoring
gang scheduled vCPUs).  This can result in catastrophic performance
collapses when the vCPU scheduler doesn't schedule the correct next
vCPU, and ends up scheduling a vCPU which burns its entire timeslice
spinning.  (Note that this is not the same problem as lock-holder
preemption, which this series also addresses; that's also a problem,
but not catastrophic).

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

Currently we deal with this by having PV spinlocks, which adds a layer
of indirection in front of all the spinlock functions, and defining a
completely new implementation for Xen (and for other pvops users, but
there are none at present).

PV ticketlocks keep the existing ticketlock implementation
(fastpath) as-is, but add a couple of pvops for the slow paths:

- If a CPU has been waiting for a spinlock for SPIN_THRESHOLD
  iterations, then call out to the __ticket_lock_spinning() pvop,
  which allows a backend to block the vCPU rather than spinning.  This
  pvop can set the lock into slowpath state.

- When releasing a lock, if it is in slowpath state, the call
  __ticket_unlock_kick() to kick the next vCPU in line awake.  If the
  lock is no longer in contention, it also clears the slowpath flag.

The slowpath state is stored in the LSB of the lock tail ticket.  This
has the effect of reducing the max number of CPUs by half (so, a small
ticket can deal with 128 CPUs, and a large ticket with 32768).

This series provides a Xen implementation, but it should be
straightforward to add a KVM implementation as well.

Overall, it results in a large reduction in code, it makes the native
and virtualized cases closer, and it removes a layer of indirection
around all the spinlock functions.  The downside is that it does add a
few instructions into the fastpath in the native case.

Most of the heavy lifting code is in the slowpaths, but it does have
an effect on the fastpath code.  The inner part of ticket lock code
becomes:
	inc = xadd(&lock->tickets, inc);
	inc.tail &= ~TICKET_SLOWPATH_FLAG;

	for (;;) {
		unsigned count = SPIN_THRESHOLD;

		do {
			if (inc.head == inc.tail)
				goto out;
			cpu_relax();
			inc.head = ACCESS_ONCE(lock->tickets.head);
		} while (--count);
		__ticket_lock_spinning(lock, inc.tail);
	}

which results in:

	pushq	%rbp
	movq	%rsp,%rbp

	movl	$512, %ecx
	lock; xaddw %cx, (%rdi)		# claim ticket

	movzbl	%ch, %edx
	movl	$2048, %eax		# set spin count
	andl	$-2, %edx		# mask off TICKET_SLOWPATH_FLAG
	movzbl	%dl, %esi

1:	cmpb	%dl, %cl		# compare head and tail
	je	2f			# got it!

	### BEGIN SLOWPATH
	rep; nop			# pause
	decl	%eax			# dec count
	movb	(%rdi), %cl		# re-fetch head
	jne	1b			# try again

	call	*pv_lock_ops		# call __ticket_lock_spinning
	movl	$2048, %eax		# reload spin count
	jmp	1b
	### END SLOWPATH

2:	popq	%rbp
	ret

with CONFIG_PARAVIRT_SPINLOCKS=n, the same code generates asm identical
to the current ticketlock code:

	pushq	%rbp
	movq	%rsp, %rbp

	movl	$256, %eax
	lock; xaddw %ax, (%rdi)

	movzbl	%ah, %edx

1:	cmpb	%dl, %al		# compare head and tail
	je	2f			# got it!

	### BEGIN SLOWPATH
	rep; nop			# pause
	movb	(%rdi), %al		# reload head
	jmp	1b			# loop
	### END SLOWPATH

2:	popq	%rbp
	ret

so the pv ticketlocks add 3 extra instructions to the fastpath, one of
which really doesn't need to be there (setting up the arg for the
slowpath function):
	movl	$2048, %eax		# set spin count
	andl	$-2, %edx		# mask off SLOW_PATH_FLAG
	movzbl	%dl, %esi		# set up __ticket_lock_spinning arg

The unlock code is very straightforward:

	__ticket_unlock_release(lock);
	if (unlikely(__ticket_in_slowpath(lock)))
		__ticket_unlock_slowpath(lock);

which generates:

	addb	$2, (%rdi)
	testb	$1, 1(%rdi)
	je	1f
	call	__ticket_unlock_slowpath
1:

which, while simple, is more complex than the simple incb (%rdi).
(I'm not sure whether it's worth inlining this or not.)

Thoughts? Comments? Suggestions?

Thanks,
J

Jeremy Fitzhardinge (12):
  x86

[PATCH 09/13] x86/pvticketlocks: we only need to kick if there's waiters

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

If we're releasing the lock into an uncontended state, there's nobody
waiting on it, so there's no need to kick anyone.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/kernel/paravirt-spinlocks.c |3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/arch/x86/kernel/paravirt-spinlocks.c 
b/arch/x86/kernel/paravirt-spinlocks.c
index 21b6986..71b8557 100644
--- a/arch/x86/kernel/paravirt-spinlocks.c
+++ b/arch/x86/kernel/paravirt-spinlocks.c
@@ -47,8 +47,7 @@ void __ticket_unlock_release_slowpath(struct arch_spinlock 
*lock)
old.head_tail, new.head_tail) != old.head_tail) {
/* still people waiting */
__ticket_unlock_release(lock);
+   __ticket_unlock_kick(lock, new.tickets.head);
}
-
-   __ticket_unlock_kick(lock, new.tickets.head);
 }
 EXPORT_SYMBOL(__ticket_unlock_release_slowpath);
-- 
1.7.6



[PATCH 02/13] x86/ticketlock: collapse a layer of functions

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Now that the paravirtualization layer doesn't exist at the spinlock
level any more, we can collapse the __ticket_ functions into the arch_
functions.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/spinlock.h |   35 +--
 1 files changed, 5 insertions(+), 30 deletions(-)

diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index f5d9236..c1d9617 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -81,7 +81,7 @@ static __always_inline void __ticket_unlock_kick(struct 
arch_spinlock *lock, __t
  * save some instructions and make the code more elegant. There really isn't
  * much between them in performance though, especially as locks are out of 
line.
  */
-static __always_inline void __ticket_spin_lock(struct arch_spinlock *lock)
+static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
 {
register struct __raw_tickets inc = { .tail = 1 };
 
@@ -101,7 +101,7 @@ static __always_inline void __ticket_spin_lock(struct 
arch_spinlock *lock)
 out:   barrier();  /* make sure nothing creeps before the lock is 
taken */
 }
 
-static __always_inline int __ticket_spin_trylock(arch_spinlock_t *lock)
+static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
 {
arch_spinlock_t old, new;
 
@@ -133,7 +133,7 @@ static __always_inline void 
__ticket_unlock_release(arch_spinlock_t *lock)
 }
 #endif
 
-static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
+static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
 {
__ticket_t next = lock-tickets.head + 1;
 
@@ -141,46 +141,21 @@ static __always_inline void 
__ticket_spin_unlock(arch_spinlock_t *lock)
__ticket_unlock_kick(lock, next);
 }
 
-static inline int __ticket_spin_is_locked(arch_spinlock_t *lock)
+static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
struct __raw_tickets tmp = ACCESS_ONCE(lock-tickets);
 
return !!(tmp.tail ^ tmp.head);
 }
 
-static inline int __ticket_spin_is_contended(arch_spinlock_t *lock)
+static inline int arch_spin_is_contended(arch_spinlock_t *lock)
 {
struct __raw_tickets tmp = ACCESS_ONCE(lock-tickets);
 
return ((tmp.tail - tmp.head)  TICKET_MASK)  1;
 }
-
-static inline int arch_spin_is_locked(arch_spinlock_t *lock)
-{
-   return __ticket_spin_is_locked(lock);
-}
-
-static inline int arch_spin_is_contended(arch_spinlock_t *lock)
-{
-   return __ticket_spin_is_contended(lock);
-}
 #define arch_spin_is_contended arch_spin_is_contended
 
-static __always_inline void arch_spin_lock(arch_spinlock_t *lock)
-{
-   __ticket_spin_lock(lock);
-}
-
-static __always_inline int arch_spin_trylock(arch_spinlock_t *lock)
-{
-   return __ticket_spin_trylock(lock);
-}
-
-static __always_inline void arch_spin_unlock(arch_spinlock_t *lock)
-{
-   __ticket_spin_unlock(lock);
-}
-
 static __always_inline void arch_spin_lock_flags(arch_spinlock_t *lock,
  unsigned long flags)
 {
-- 
1.7.6



[PATCH 01/13] x86/spinlocks: replace pv spinlocks with pv ticketlocks

2011-09-01 Thread Jeremy Fitzhardinge
From: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com

Rather than outright replacing the entire spinlock implementation in
order to paravirtualize it, keep the ticket lock implementation but add
a couple of pvops hooks on the slow paths (long spin on lock, unlocking
a contended lock).

Ticket locks have a number of nice properties, but they also have some
surprising behaviours in virtual environments.  They enforce a strict
FIFO ordering on cpus trying to take a lock; however, if the hypervisor
scheduler does not schedule the cpus in the correct order, the system can
waste a huge amount of time spinning until the next cpu can take the lock.

(See Thomas Friebel's talk "Prevent Guests from Spinning Around"
http://www.xen.org/files/xensummitboston08/LHP.pdf for more details.)

To address this, we add two hooks:
 - __ticket_spin_lock, which is called after the cpu has been
   spinning on the lock for a significant number of iterations but has
   failed to take the lock (presumably because the cpu holding the lock
   has been descheduled).  The lock_spinning pvop is expected to block
   the cpu until it has been kicked by the current lock holder.
 - __ticket_spin_unlock, which, on releasing a contended lock
   (there are more cpus with tail tickets), looks to see if the next
   cpu is blocked and wakes it if so.

When compiled with CONFIG_PARAVIRT_SPINLOCKS disabled, a set of stub
functions causes all the extra code to go away.

Signed-off-by: Jeremy Fitzhardinge jeremy.fitzhardi...@citrix.com
---
 arch/x86/include/asm/paravirt.h   |   30 ++--
 arch/x86/include/asm/paravirt_types.h |8 +---
 arch/x86/include/asm/spinlock.h   |   59 ++---
 arch/x86/kernel/paravirt-spinlocks.c  |   15 +---
 arch/x86/xen/spinlock.c   |7 +++-
 5 files changed, 61 insertions(+), 58 deletions(-)

diff --git a/arch/x86/include/asm/paravirt.h b/arch/x86/include/asm/paravirt.h
index ebbc4d8..d88a813 100644
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -741,36 +741,14 @@ static inline void __set_fixmap(unsigned /* enum 
fixed_addresses */ idx,
 
 #if defined(CONFIG_SMP)  defined(CONFIG_PARAVIRT_SPINLOCKS)
 
-static inline int arch_spin_is_locked(struct arch_spinlock *lock)
+static inline void __ticket_lock_spinning(struct arch_spinlock *lock, unsigned 
ticket)
 {
-   return PVOP_CALL1(int, pv_lock_ops.spin_is_locked, lock);
+   PVOP_VCALL2(pv_lock_ops.lock_spinning, lock, ticket);
 }
 
-static inline int arch_spin_is_contended(struct arch_spinlock *lock)
+static inline void ticket_unlock_kick(struct arch_spinlock *lock, unsigned 
ticket)
 {
-   return PVOP_CALL1(int, pv_lock_ops.spin_is_contended, lock);
-}
-#define arch_spin_is_contended arch_spin_is_contended
-
-static __always_inline void arch_spin_lock(struct arch_spinlock *lock)
-{
-   PVOP_VCALL1(pv_lock_ops.spin_lock, lock);
-}
-
-static __always_inline void arch_spin_lock_flags(struct arch_spinlock *lock,
- unsigned long flags)
-{
-   PVOP_VCALL2(pv_lock_ops.spin_lock_flags, lock, flags);
-}
-
-static __always_inline int arch_spin_trylock(struct arch_spinlock *lock)
-{
-   return PVOP_CALL1(int, pv_lock_ops.spin_trylock, lock);
-}
-
-static __always_inline void arch_spin_unlock(struct arch_spinlock *lock)
-{
-   PVOP_VCALL1(pv_lock_ops.spin_unlock, lock);
+   PVOP_VCALL2(pv_lock_ops.unlock_kick, lock, ticket);
 }
 
 #endif
diff --git a/arch/x86/include/asm/paravirt_types.h 
b/arch/x86/include/asm/paravirt_types.h
index 8288509..e9101c3 100644
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -321,12 +321,8 @@ struct pv_mmu_ops {
 
 struct arch_spinlock;
 struct pv_lock_ops {
-   int (*spin_is_locked)(struct arch_spinlock *lock);
-   int (*spin_is_contended)(struct arch_spinlock *lock);
-   void (*spin_lock)(struct arch_spinlock *lock);
-   void (*spin_lock_flags)(struct arch_spinlock *lock, unsigned long 
flags);
-   int (*spin_trylock)(struct arch_spinlock *lock);
-   void (*spin_unlock)(struct arch_spinlock *lock);
+   void (*lock_spinning)(struct arch_spinlock *lock, unsigned ticket);
+   void (*unlock_kick)(struct arch_spinlock *lock, unsigned ticket);
 };
 
 /* This contains all the paravirt structures: we get a convenient
diff --git a/arch/x86/include/asm/spinlock.h b/arch/x86/include/asm/spinlock.h
index 3549e6c..f5d9236 100644
--- a/arch/x86/include/asm/spinlock.h
+++ b/arch/x86/include/asm/spinlock.h
@@ -38,6 +38,32 @@
 # define UNLOCK_LOCK_PREFIX
 #endif
 
+/* How long a lock should spin before we consider blocking */
+#define SPIN_THRESHOLD (1  11)
+
+#ifndef CONFIG_PARAVIRT_SPINLOCKS
+
+static __always_inline void __ticket_lock_spinning(struct arch_spinlock *lock, 
unsigned ticket)
+{
+}
+
+static __always_inline void ticket_unlock_kick(struct arch_spinlock *lock, 
unsigned ticket)
+{
+}
+
+#endif

  1   2   >