Re: [PATCH 0/9] qspinlock stuff -v15

2015-03-30 Thread Waiman Long

On 03/30/2015 12:29 PM, Peter Zijlstra wrote:

On Mon, Mar 30, 2015 at 12:25:12PM -0400, Waiman Long wrote:

I did it differently in my PV portion of the qspinlock patch. Instead of
just waking up the CPU, the new lock holder will check if the new queue head
has been halted. If so, it will set the slowpath flag for the halted queue
head in the lock so as to wake it up at unlock time. This should eliminate
your concern of doing twice as many VMEXITs in an overcommitted scenario.

We can still do that on top of all this right? As you might have
realized I'm a fan of gradual complexity :-)


Of course. I am just saying that the concern can be addressed with some 
additional code changes.


-Longman


Re: [PATCH 0/9] qspinlock stuff -v15

2015-03-30 Thread Waiman Long

On 03/27/2015 10:07 AM, Konrad Rzeszutek Wilk wrote:

On Thu, Mar 26, 2015 at 09:21:53PM +0100, Peter Zijlstra wrote:

On Wed, Mar 25, 2015 at 03:47:39PM -0400, Konrad Rzeszutek Wilk wrote:

Ah nice. That could be spun out as a separate patch to optimize the existing
ticket locks I presume.

Yes I suppose we can do something similar for the ticket and patch in
the right increment. We'd need to restructure the code a bit, but
it's not fundamentally impossible.

We could equally apply the head hashing to the current ticket
implementation and avoid the current bitmap iteration.


Now with the old pv ticketlock code a vCPU would only go to sleep once and
be woken up when it was its turn. With this new code it is woken up twice
(and twice it goes to sleep). With an overcommit scenario this would imply
that we will have at least twice as many VMEXITs as with the previous code.

An astute observation, I had not considered that.

Thank you.

I presume when you did benchmarking this did not even register? Though
I wonder if it would if you ran the benchmark for a week or so.

You presume I benchmarked :-) I managed to boot something virt and run
hackbench in it. I wouldn't know a representative virt setup if I ran
into it.

The thing is, we want this qspinlock for real hardware because it's
faster and I really want to avoid having to carry two spinlock
implementations -- although I suppose that if we really really have to
we could.

In some way you already have that - for virtualized environments where you
don't have a PV mechanism you just use the byte spinlock - which is good.

And switching to a PV ticketlock implementation after boot... ugh. I feel your
pain.

What if you used a PV bytelock implementation? The code you posted already
'sprays' all the vCPUs to wake up. And that is exactly what you need for PV
bytelocks - well, you only need to wake up the vCPUs that have gone to sleep
waiting on a specific 'struct spinlock' and just stash those in a per-cpu
area. The old Xen spinlock code (before 3.11?) had this.

Just an idea, though.


The current code should have just woken up one sleeping vCPU. We don't
want to wake up all of them and have all but one go back to sleep. I
think the PV bytelock you suggest is workable. It should also simplify
the implementation. It is just a matter of how much we value the
fairness attribute of the PV ticket or queue spinlock implementation
that we have.
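
For illustration, the per-cpu 'stash' idea could look roughly like the sketch
below. This is not code from any of the posted patches; struct byte_lock,
pv_halt() and pv_kick_cpu() are hypothetical stand-ins for the real lock type
and the hypervisor block/wakeup primitives, and the pattern mirrors the old
Xen ticketlock code mentioned above.

/*
 * Illustrative sketch only (NOT from the posted patches).  A PV byte-lock
 * slowpath stashes the lock a vCPU is blocking on in a per-cpu area, so
 * that unlock only kicks the vCPUs sleeping on *this* lock instead of
 * spraying every sleeping vCPU.
 */
struct byte_lock {
        u8 locked;
};

struct pv_byte_waiting {
        struct byte_lock *lock;         /* lock this vCPU is sleeping on */
};

static DEFINE_PER_CPU(struct pv_byte_waiting, byte_waiting);
static cpumask_t byte_waiting_cpus;

static void pv_bytelock_wait(struct byte_lock *lock)
{
        int cpu = smp_processor_id();

        this_cpu_write(byte_waiting.lock, lock);
        cpumask_set_cpu(cpu, &byte_waiting_cpus);

        /*
         * Re-check before blocking; this assumes the kick is "sticky"
         * (a kick delivered between the check and pv_halt() is not lost),
         * as with Xen event channels.
         */
        if (READ_ONCE(lock->locked))
                pv_halt();                      /* block until kicked */

        cpumask_clear_cpu(cpu, &byte_waiting_cpus);
        this_cpu_write(byte_waiting.lock, NULL);
}

static void pv_bytelock_kick(struct byte_lock *lock)
{
        int cpu;

        /* Wake only the vCPUs recorded as sleeping on this particular lock. */
        for_each_cpu(cpu, &byte_waiting_cpus) {
                const struct pv_byte_waiting *w = &per_cpu(byte_waiting, cpu);

                if (w->lock == lock)
                        pv_kick_cpu(cpu);
        }
}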


-Longman


Re: [PATCH 0/9] qspinlock stuff -v15

2015-03-30 Thread Peter Zijlstra
On Mon, Mar 30, 2015 at 12:25:12PM -0400, Waiman Long wrote:
 I did it differently in my PV portion of the qspinlock patch. Instead of
 just waking up the CPU, the new lock holder will check if the new queue head
 has been halted. If so, it will set the slowpath flag for the halted queue
 head in the lock so as to wake it up at unlock time. This should eliminate
 your concern of doing twice as many VMEXITs in an overcommitted scenario.

We can still do that on top of all this right? As you might have
realized I'm a fan of gradual complexity :-)


Re: [PATCH 0/9] qspinlock stuff -v15

2015-03-30 Thread Waiman Long

On 03/25/2015 03:47 PM, Konrad Rzeszutek Wilk wrote:

On Mon, Mar 16, 2015 at 02:16:13PM +0100, Peter Zijlstra wrote:

Hi Waiman,

As promised; here is the paravirt stuff I did during the trip to BOS last week.

All the !paravirt patches are more or less the same as before (the only real
change is the copyright lines in the first patch).

The paravirt stuff is 'simple' and KVM only -- the Xen code was a little more
convoluted and I've no real way to test that, but it should be straightforward to
make work.

I ran this using the virtme tool (thanks Andy) on my laptop with a 4x
overcommit on vcpus (16 vcpus as compared to the 4 my laptop actually has) and
it both booted and survived a hackbench run (perf bench sched messaging -g 20
-l 5000).

So while the paravirt code isn't the most optimal code ever conceived, it does
work.

Also, the paravirt patching includes replacing the call with "movb $0, %arg1"
for the native case, which should greatly reduce the cost of having
CONFIG_PARAVIRT_SPINLOCKS enabled on actual hardware.

Ah nice. That could be spun out as a separate patch to optimize the existing
ticket locks I presume.


The goal is to replace the ticket spinlock with the queue spinlock. We may
not want to support 2 different spinlock implementations in the kernel.




Now with the old pv ticketlock code a vCPU would only go to sleep once and
be woken up when it was its turn. With this new code it is woken up twice
(and twice it goes to sleep). With an overcommit scenario this would imply
that we will have at least twice as many VMEXITs as with the previous code.


I did it differently in my PV portion of the qspinlock patch. Instead of 
just waking up the CPU, the new lock holder will check if the new queue 
head has been halted. If so, it will set the slowpath flag for the 
halted queue head in the lock so as to wake it up at unlock time. This 
should eliminate your concern of doing twice as many VMEXITs in an
overcommitted scenario.
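
As a rough illustration of the scheme described above (a hedged sketch, not
the actual patch): _Q_SLOW_VAL, pv_hash(), pv_unhash(), pv_kick() and
vcpu_is_halted() are stand-in names here, and lock->locked is assumed to be
the least-significant "locked" byte of the lock word.

/*
 * Illustrative sketch only.  Rather than kicking the new queue head
 * immediately (which wakes it once to become head and again to take the
 * lock), the new lock holder checks whether that vCPU has halted and, if
 * so, marks the lock "slow" so the unlocker kicks it exactly once, at
 * release time.
 */
static void pv_lock_acquired(struct qspinlock *lock, struct pv_node *next_head)
{
        if (next_head && vcpu_is_halted(next_head->cpu)) {
                /* Remember which vCPU to wake, and flag the lock as slow. */
                pv_hash(lock, next_head);
                WRITE_ONCE(lock->locked, _Q_SLOW_VAL);
        }
}

static void pv_queue_spin_unlock(struct qspinlock *lock)
{
        u8 val = xchg(&lock->locked, 0);        /* release the lock */

        if (unlikely(val == _Q_SLOW_VAL)) {
                struct pv_node *node = pv_unhash(lock);

                pv_kick(node->cpu);             /* wake the halted queue head */
        }
}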


BTW, I did some qspinlock vs. ticket spinlock benchmarks using the AIM7
high_systime workload on a 4-socket IvyBridge-EX system (60 cores, 120 
threads) with some interesting results.


In terms of the performance benefit of this patch, I ran the
high_systime workload (which does a lot of fork() and exit())
at various load levels (500, 1000, 1500 and 2000 users) on a
4-socket IvyBridge-EX bare-metal system (60 cores, 120 threads)
with the intel_pstate driver and the performance scaling governor. The JPM
(jobs/minute) and execution time results were as follows:

Kernel            JPM         Execution Time
------            ---         --------------
At 500 users:
 3.19             118857.14   26.25s
 3.19-qspinlock   134889.75   23.13s
 % change         +13.5%      -11.9%

At 1000 users:
 3.19             204255.32   30.55s
 3.19-qspinlock   239631.34   26.04s
 % change         +17.3%      -14.8%

At 1500 users:
 3.19             177272.73   52.80s
 3.19-qspinlock   326132.40   28.70s
 % change         +84.0%      -45.6%

At 2000 users:
 3.19             196690.31   63.45s
 3.19-qspinlock   341730.56   36.52s
 % change         +73.7%      -42.4%

It turns out that this workload was causing quite a lot of spinlock
contention in the vanilla 3.19 kernel. The performance advantage of
this patch increases with heavier loads.

With the powersave governor, the JPM data were as follows:

Users  3.19       3.19-qspinlock  % Change
-----  ---------  --------------  --------
 500  112635.38  132596.69   +17.7%
1000  171240.40  240369.80   +40.4%
1500  130507.53  324436.74  +148.6%
2000  175972.93  341637.01   +94.1%

With the qspinlock patch, there wasn't too much difference in
performance between the 2 scaling governors. Without this patch,
the powersave governor was much slower than the performance governor.

By disabling the intel_pstate driver and using acpi_cpufreq instead,
the benchmark performance (JPM) at the 1000-user level for the performance
and ondemand governors was:

  Governor      3.19        3.19-qspinlock   % Change
  --------      ----        --------------   --------
  performance   124949.94   219950.65        +76.0%
  ondemand      4838.90     206690.96        +4171%

The performance was just horrible when there was significant spinlock
contention with the ondemand governor. There was also significant
run-to-run variation.  A second run of the same benchmark gave a result
of 22115 JPMs. With the qspinlock patch, however, the performance was
much more stable under different cpufreq drivers and governors. That
is not the case with the default ticket spinlock implementation.

The %CPU times spent on spinlock contention (from perf) with the
performance governor and the intel_pstate driver were:

  Kernel Function3.19 kernel3.19-qspinlock kernel
  ------

Re: [PATCH 0/9] qspinlock stuff -v15

2015-03-27 Thread Raghavendra K T

On 03/16/2015 06:46 PM, Peter Zijlstra wrote:

Hi Waiman,

As promised; here is the paravirt stuff I did during the trip to BOS last week.

All the !paravirt patches are more or less the same as before (the only real
change is the copyright lines in the first patch).

The paravirt stuff is 'simple' and KVM only -- the Xen code was a little more
convoluted and I've no real way to test that, but it should be straightforward to
make work.

I ran this using the virtme tool (thanks Andy) on my laptop with a 4x
overcommit on vcpus (16 vcpus as compared to the 4 my laptop actually has) and
it both booted and survived a hackbench run (perf bench sched messaging -g 20
-l 5000).

So while the paravirt code isn't the most optimal code ever conceived, it does
work.

Also, the paravirt patching includes replacing the call with "movb $0, %arg1"
for the native case, which should greatly reduce the cost of having
CONFIG_PARAVIRT_SPINLOCKS enabled on actual hardware.

I feel that if someone were to do a Xen patch we can go ahead and merge this
stuff (finally!).

These patches do not implement the paravirt spinlock debug stats currently
implemented (separately) by KVM and Xen, but that should not be too hard to do
on top and in the 'generic' code -- no reason to duplicate all that.

Of course; once this lands people can look at improving the paravirt nonsense.



Last time I had reported some hangs in the kvm case, and I can confirm that
the current set of patches works fine.

Feel free to add
Tested-by: Raghavendra K T raghavendra...@linux.vnet.ibm.com #kvm pv

As far as performance is concerned (with my 16-core + HT machine having
16-vcpu guests [both with and without the lfsr hash patchset]), I do not
have any significant observations to report, though I understand that we
could see much more benefit with a large number of vcpus because of a
possible reduction in cache bouncing.








Re: [PATCH 0/9] qspinlock stuff -v15

2015-03-27 Thread Konrad Rzeszutek Wilk
On Thu, Mar 26, 2015 at 09:21:53PM +0100, Peter Zijlstra wrote:
 On Wed, Mar 25, 2015 at 03:47:39PM -0400, Konrad Rzeszutek Wilk wrote:
  Ah nice. That could be spun out as a separate patch to optimize the existing
  ticket locks I presume.
 
 Yes I suppose we can do something similar for the ticket and patch in
 the right increment. We'd need to restructure the code a bit, but
 it's not fundamentally impossible.
 
 We could equally apply the head hashing to the current ticket
 implementation and avoid the current bitmap iteration.
 
  Now with the old pv ticketlock code a vCPU would only go to sleep once and
  be woken up when it was its turn. With this new code it is woken up twice
  (and twice it goes to sleep). With an overcommit scenario this would imply
  that we will have at least twice as many VMEXITs as with the previous code.
 
 An astute observation, I had not considered that.

Thank you.
 
  I presume when you did benchmarking this did not even register? Though
  I wonder if it would if you ran the benchmark for a week or so.
 
 You presume I benchmarked :-) I managed to boot something virt and run
 hackbench in it. I wouldn't know a representative virt setup if I ran
 into it.
 
 The thing is, we want this qspinlock for real hardware because it's
 faster and I really want to avoid having to carry two spinlock
 implementations -- although I suppose that if we really really have to
 we could.

In some way you already have that - for virtualized environments where you
don't have a PV mechanism you just use the byte spinlock - which is good.

And switching to a PV ticketlock implementation after boot... ugh. I feel your
pain.

What if you used a PV bytelock implementation? The code you posted already
'sprays' all the vCPUs to wake up. And that is exactly what you need for PV
bytelocks - well, you only need to wake up the vCPUs that have gone to sleep
waiting on a specific 'struct spinlock' and just stash those in a per-cpu
area. The old Xen spinlock code (before 3.11?) had this.

Just an idea, though.


Re: [PATCH 0/9] qspinlock stuff -v15

2015-03-26 Thread Peter Zijlstra
On Wed, Mar 25, 2015 at 03:47:39PM -0400, Konrad Rzeszutek Wilk wrote:
 Ah nice. That could be spun out as a separate patch to optimize the existing
 ticket locks I presume.

Yes I suppose we can do something similar for the ticket and patch in
the right increment. We'd need to restructure the code a bit, but
it's not fundamentally impossible.

We could equally apply the head hashing to the current ticket
implementation and avoid the current bitmap iteration.
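
For illustration, "head hashing" applied to the ticket lock could be sketched
as below. This is not code from the series; all names are made up, and the
insertion/removal paths and the races around them are omitted.

/*
 * Illustrative sketch only.  A small open-addressed hash keyed by the lock
 * address: the vCPU about to halt records (lock, cpu, want) in the table,
 * and the unlock path looks the lock up directly instead of iterating a
 * cpumask of all waiting vCPUs.
 */
struct pv_hash_entry {
        struct arch_spinlock    *lock;
        int                     cpu;
        __ticket_t              want;   /* ticket the vCPU is waiting for */
};

#define PV_HASH_SIZE    256             /* power of two, assumed */
static struct pv_hash_entry pv_hash_table[PV_HASH_SIZE];

static struct pv_hash_entry *pv_hash_find(struct arch_spinlock *lock)
{
        unsigned long h = hash_ptr(lock, ilog2(PV_HASH_SIZE));
        unsigned long i;

        /* Linear probing; a free slot has lock == NULL. */
        for (i = h; i < h + PV_HASH_SIZE; i++) {
                struct pv_hash_entry *he = &pv_hash_table[i % PV_HASH_SIZE];

                if (READ_ONCE(he->lock) == lock)
                        return he;
        }
        return NULL;
}

static void pv_ticket_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
{
        struct pv_hash_entry *he = pv_hash_find(lock);

        /* Kick only the vCPU whose ticket just came up, if it is asleep. */
        if (he && he->want == next)
                pv_kick(he->cpu);
}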

 Now with the old pv ticketlock code a vCPU would only go to sleep once and
 be woken up when it was its turn. With this new code it is woken up twice
 (and twice it goes to sleep). With an overcommit scenario this would imply
 that we will have at least twice as many VMEXITs as with the previous code.

An astute observation, I had not considered that.

 I presume when you did benchmarking this did not even register? Though
 I wonder if it would if you ran the benchmark for a week or so.

You presume I benchmarked :-) I managed to boot something virt and run
hackbench in it. I wouldn't know a representative virt setup if I ran
into it.

The thing is, we want this qspinlock for real hardware because it's
faster and I really want to avoid having to carry two spinlock
implementations -- although I suppose that if we really really have to
we could.


Re: [PATCH 0/9] qspinlock stuff -v15

2015-03-25 Thread Konrad Rzeszutek Wilk
On Mon, Mar 16, 2015 at 02:16:13PM +0100, Peter Zijlstra wrote:
 Hi Waiman,
 
 As promised; here is the paravirt stuff I did during the trip to BOS last 
 week.
 
 All the !paravirt patches are more or less the same as before (the only real
 change is the copyright lines in the first patch).
 
 The paravirt stuff is 'simple' and KVM only -- the Xen code was a little more
 convoluted and I've no real way to test that, but it should be straightforward to
 make work.
 
 I ran this using the virtme tool (thanks Andy) on my laptop with a 4x
 overcommit on vcpus (16 vcpus as compared to the 4 my laptop actually has) and
 it both booted and survived a hackbench run (perf bench sched messaging -g 20
 -l 5000).
 
 So while the paravirt code isn't the most optimal code ever conceived, it does
 work.
 
 Also, the paravirt patching includes replacing the call with "movb $0, %arg1"
 for the native case, which should greatly reduce the cost of having
 CONFIG_PARAVIRT_SPINLOCKS enabled on actual hardware.

Ah nice. That could be spun out as a separate patch to optimize the existing
ticket locks I presume.

Now with the old pv ticketlock code a vCPU would only go to sleep once and
be woken up when it was its turn. With this new code it is woken up twice
(and twice it goes to sleep). With an overcommit scenario this would imply
that we will have at least twice as many VMEXITs as with the previous code.
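
To spell the observation out, a rough pseudocode comparison (hedged sketch
only; every helper named here is hypothetical, not actual kernel code):

/*
 * With the PV ticketlock a contended vCPU blocks once; with the posted PV
 * qspinlock a queued vCPU can block twice -- once waiting to become queue
 * head, once waiting for the lock itself -- and each halt/kick pair costs
 * VMEXITs on both the sleeping and the kicking vCPU.
 */
static void pv_ticket_slowpath(void)           /* old scheme */
{
        record_the_ticket_i_want();
        halt();         /* sleep #1: kicked when my ticket comes up */
        /* lock is now mine */
}

static void pv_qspinlock_slowpath(void)        /* new scheme, as posted */
{
        enqueue_my_mcs_node();
        halt();         /* sleep #1: kicked when I become queue head */
        halt();         /* sleep #2: kicked by the unlocker when the lock frees */
        /* lock is now mine */
}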

I presume when you did benchmarking this did not even register? Though
I wonder if it would if you ran the benchmark for a week or so.

 
 I feel that if someone were to do a Xen patch we can go ahead and merge this
 stuff (finally!).
 
 These patches do not implement the paravirt spinlock debug stats currently
 implemented (separately) by KVM and Xen, but that should not be too hard to do
 on top and in the 'generic' code -- no reason to duplicate all that.
 
 Of course; once this lands people can look at improving the paravirt nonsense.
 


Re: [Xen-devel] [PATCH 0/9] qspinlock stuff -v15

2015-03-19 Thread David Vrabel
On 16/03/15 13:16, Peter Zijlstra wrote:
 
 I feel that if someone were to do a Xen patch we can go ahead and merge this
 stuff (finally!).

This seems to work for me, but I've not got time to give it more thorough
testing.

You can fold this into your series.

There doesn't seem to be a way to disable QUEUE_SPINLOCKS when supported by
the arch; is this intentional?  If so, the existing ticketlock code could go.

David

8<-----------------------------------------------------------------
x86/xen: paravirt support for qspinlocks

Provide the wait and kick ops necessary for paravirt-aware queue
spinlocks.

Signed-off-by: David Vrabel david.vra...@citrix.com
---
 arch/x86/xen/spinlock.c |   40 +++++++++++++++++++++++++++++++++++++---
 1 file changed, 37 insertions(+), 3 deletions(-)

diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 956374c..b019b2a 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -95,17 +95,43 @@ static inline void spin_time_accum_blocked(u64 start)
 }
 #endif  /* CONFIG_XEN_DEBUG_FS */
 
+static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
+static DEFINE_PER_CPU(char *, irq_name);
+static bool xen_pvspin = true;
+
+#ifdef CONFIG_QUEUE_SPINLOCK
+
+#include <asm/qspinlock.h>
+
+PV_CALLEE_SAVE_REGS_THUNK(__pv_queue_spin_unlock);
+
+static void xen_qlock_wait(u8 *ptr, u8 val)
+{
+   int irq = __this_cpu_read(lock_kicker_irq);
+
+   xen_clear_irq_pending(irq);
+
+   barrier();
+
+   if (READ_ONCE(*ptr) == val)
+   xen_poll_irq(irq);
+}
+
+static void xen_qlock_kick(int cpu)
+{
+   xen_send_IPI_one(cpu, XEN_SPIN_UNLOCK_VECTOR);
+}
+
+#else
+
 struct xen_lock_waiting {
struct arch_spinlock *lock;
__ticket_t want;
 };
 
-static DEFINE_PER_CPU(int, lock_kicker_irq) = -1;
-static DEFINE_PER_CPU(char *, irq_name);
 static DEFINE_PER_CPU(struct xen_lock_waiting, lock_waiting);
 static cpumask_t waiting_cpus;
 
-static bool xen_pvspin = true;
 __visible void xen_lock_spinning(struct arch_spinlock *lock, __ticket_t want)
 {
int irq = __this_cpu_read(lock_kicker_irq);
@@ -217,6 +243,7 @@ static void xen_unlock_kick(struct arch_spinlock *lock, __ticket_t next)
}
}
 }
+#endif /* !QUEUE_SPINLOCK */
 
 static irqreturn_t dummy_handler(int irq, void *dev_id)
 {
@@ -280,8 +307,15 @@ void __init xen_init_spinlocks(void)
return;
}
 printk(KERN_DEBUG "xen: PV spinlocks enabled\n");
+#ifdef CONFIG_QUEUE_SPINLOCK
+   pv_lock_ops.queue_spin_lock_slowpath = __pv_queue_spin_lock_slowpath;
+   pv_lock_ops.queue_spin_unlock = PV_CALLEE_SAVE(__pv_queue_spin_unlock);
+   pv_lock_ops.wait = xen_qlock_wait;
+   pv_lock_ops.kick = xen_qlock_kick;
+#else
pv_lock_ops.lock_spinning = PV_CALLEE_SAVE(xen_lock_spinning);
pv_lock_ops.unlock_kick = xen_unlock_kick;
+#endif
 }
 
 /*
-- 
1.7.10.4
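
For context, a minimal sketch of the contract the two ops above are plugged
into (illustrative only; pv_wait()/pv_kick() stand for the pv_lock_ops.wait
and .kick hooks installed by xen_init_spinlocks() above, and the caller
functions are made up). The key point is that xen_qlock_wait() clears the
per-cpu event channel before re-checking *ptr, so a kick that arrives between
the check and xen_poll_irq() is not lost.

/*
 * Illustrative sketch, not the generic slowpath code: pv_wait(ptr, val)
 * blocks the vCPU only while *ptr still equals val, and pv_kick(cpu)
 * wakes it (a spurious kick of a running vCPU is harmless).
 */
static void example_waiter(u8 *lock_byte)
{
        while (READ_ONCE(*lock_byte) != 0) {
                /* ... usually spin for a while first ... */
                pv_wait(lock_byte, 1);  /* 1 == the "locked" value; may block */
        }
}

static void example_unlocker(u8 *lock_byte, int waiter_cpu)
{
        WRITE_ONCE(*lock_byte, 0);      /* release the lock */
        pv_kick(waiter_cpu);            /* wake the vCPU blocked in pv_wait() */
}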



Re: [Xen-devel] [PATCH 0/9] qspinlock stuff -v15

2015-03-19 Thread Peter Zijlstra
On Thu, Mar 19, 2015 at 06:01:34PM +0000, David Vrabel wrote:
 This seems to work for me, but I've not got time to give it more thorough
 testing.
 
 You can fold this into your series.

Thanks!

 There doesn't seem to be a way to disable QUEUE_SPINLOCKS when supported by
 the arch; is this intentional?  If so, the existing ticketlock code could go.

Yeah, it's left as a rudiment such that if we find issues with the
qspinlock code we can 'revert' with a trivial patch. If no issues show
up we can rip out all the old code in a subsequent release.


Re: [PATCH 0/9] qspinlock stuff -v15

2015-03-18 Thread Waiman Long

On 03/16/2015 09:16 AM, Peter Zijlstra wrote:

Hi Waiman,

As promised; here is the paravirt stuff I did during the trip to BOS last week.

All the !paravirt patches are more or less the same as before (the only real
change is the copyright lines in the first patch).

The paravirt stuff is 'simple' and KVM only -- the Xen code was a little more
convoluted and I've no real way to test that, but it should be straightforward to
make work.

I ran this using the virtme tool (thanks Andy) on my laptop with a 4x
overcommit on vcpus (16 vcpus as compared to the 4 my laptop actually has) and
it both booted and survived a hackbench run (perf bench sched messaging -g 20
-l 5000).

So while the paravirt code isn't the most optimal code ever conceived, it does
work.

Also, the paravirt patching includes replacing the call with "movb $0, %arg1"
for the native case, which should greatly reduce the cost of having
CONFIG_PARAVIRT_SPINLOCKS enabled on actual hardware.

I feel that if someone were to do a Xen patch we can go ahead and merge this
stuff (finally!).

These patches do not implement the paravirt spinlock debug stats currently
implemented (separately) by KVM and Xen, but that should not be too hard to do
on top and in the 'generic' code -- no reason to duplicate all that.

Of course; once this lands people can look at improving the paravirt nonsense.



Thanks for sending this out. I have no problem with the !paravirt patch. 
I do have some comments on the paravirt one, to which I will reply individually.


Cheers,
Longman


Re: [Xen-devel] [PATCH 0/9] qspinlock stuff -v15

2015-03-16 Thread David Vrabel
On 16/03/15 13:16, Peter Zijlstra wrote:
 Hi Waiman,
 
 As promised; here is the paravirt stuff I did during the trip to BOS last 
 week.
 
 All the !paravirt patches are more or less the same as before (the only real
 change is the copyright lines in the first patch).
 
 The paravirt stuff is 'simple' and KVM only -- the Xen code was a little more
 convoluted and I've no real way to test that, but it should be straightforward to
 make work.
 
 I ran this using the virtme tool (thanks Andy) on my laptop with a 4x
 overcommit on vcpus (16 vcpus as compared to the 4 my laptop actually has) and
 it both booted and survived a hackbench run (perf bench sched messaging -g 20
 -l 5000).
 
 So while the paravirt code isn't the most optimal code ever conceived, it does
 work.
 
 Also, the paravirt patching includes replacing the call with "movb $0, %arg1"
 for the native case, which should greatly reduce the cost of having
 CONFIG_PARAVIRT_SPINLOCKS enabled on actual hardware.
 
 I feel that if someone were to do a Xen patch we can go ahead and merge this
 stuff (finally!).

I can look at this.  It looks pretty straight-forward.

 These patches do not implement the paravirt spinlock debug stats currently
 implemented (separately) by KVM and Xen, but that should not be too hard to do
 on top and in the 'generic' code -- no reason to duplicate all that.

I think this is fine.

David


[PATCH 0/9] qspinlock stuff -v15

2015-03-16 Thread Peter Zijlstra
Hi Waiman,

As promised; here is the paravirt stuff I did during the trip to BOS last week.

All the !paravirt patches are more or less the same as before (the only real
change is the copyright lines in the first patch).

The paravirt stuff is 'simple' and KVM only -- the Xen code was a little more
convoluted and I've no real way to test that, but it should be straightforward to
make work.

I ran this using the virtme tool (thanks Andy) on my laptop with a 4x
overcommit on vcpus (16 vcpus as compared to the 4 my laptop actually has) and
it both booted and survived a hackbench run (perf bench sched messaging -g 20
-l 5000).

So while the paravirt code isn't the most optimal code ever conceived, it does
work.

Also, the paravirt patching includes replacing the call with "movb $0, %arg1"
for the native case, which should greatly reduce the cost of having
CONFIG_PARAVIRT_SPINLOCKS enabled on actual hardware.
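
For readers unfamiliar with the trick, a hedged sketch of what this means in
practice: on x86-64 the first argument is passed in %rdi, so the native-case
patch can replace the indirect unlock call with a single byte store.
native_queue_spin_unlock() below is illustrative, not copied from the series.

/*
 * Native unlock of a qspinlock is just clearing the locked byte (the
 * least-significant byte of the lock word on x86); a plain store is a
 * release on x86.  Under CONFIG_PARAVIRT_SPINLOCKS the call site is an
 * indirect call through pv_lock_ops which, at patch time on bare metal,
 * gets rewritten to the equivalent single instruction.
 */
static inline void native_queue_spin_unlock(struct qspinlock *lock)
{
        WRITE_ONCE(*(u8 *)lock, 0);     /* clear the "locked" byte */
}

/*
 * Paravirt case before patching (conceptually):
 *      call *pv_lock_ops.queue_spin_unlock
 * After patching for the native case:
 *      movb $0, (%rdi)                 (arg1 == %rdi on x86-64)
 */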

I feel that if someone were to do a Xen patch we can go ahead and merge this
stuff (finally!).

These patches do not implement the paravirt spinlock debug stats currently
implemented (separately) by KVM and Xen, but that should not be too hard to do
on top and in the 'generic' code -- no reason to duplicate all that.

Of course; once this lands people can look at improving the paravirt nonsense.
