RE: VM performance issue in KVM guests.

2010-04-18 Thread Zhang, Xiantao
Srivatsa Vaddagiri wrote:
 On Thu, Apr 15, 2010 at 03:33:18PM +0200, Peter Zijlstra wrote:
 On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
 
 Certainly that has even greater potential for Linux guests.  Note
 that we spin on mutexes now, so we need to prevent preemption while
 the lock owner is running.
 
 either that, or disable spinning on (para) virt kernels. Para virt
 kernels could possibly extend the thing by also checking to see if
 the owner's vcpu is running.
 
 I suspect we will need a combination of both approaches, given that
 we will not always be able to avoid preempting guests in their critical
 sections (too-long critical sections or real-time tasks wanting
 to preempt). Another idea is to gang-schedule VCPUs of the same guest
 as much as possible?
Gang-scheduling may be the ideal solution to this issue, but it requires a lot 
of changes to the host's scheduler, and it may be hard to get upstream.  So can 
we figure out an easier way (maybe not the best) to address this?
Xiantao


Re: VM performance issue in KVM guests.

2010-04-17 Thread Avi Kivity

On 04/16/2010 05:27 AM, Zhang, Xiantao wrote:




When vcpus are pinned to pcpus, there is a 50% chance that a guest's
vcpus will be co-scheduled and spinlocks will perform well.

When vcpus are not pinned, but affine wakeups are disabled, there is a
33% chance that vcpus will be co-scheduled.

When vcpus are not pinned and affine wakeups are enabled there is a 0%
chance that vcpus will be co-scheduled.

Keeping both vcpus on the same core actually makes sense since they
can communicate through the local cache faster than across cores.
What we need is to make sure that they don't spin.

Windows 2008 can report spinlock spinning through a hypercall.  Can
you hook to that interface and see if it happens regularly?
Alternatively use a PLE capable host and trace the kvm_vcpu_on_spin()
function.
 

We only tried Windows 2003 for the experiments, and have no data related to 
Windows 2008, but maybe we can have a try later.  Anyway, the key point is that 
we have to enhance the scheduler to let it know which threads are vcpu threads, 
to avoid the performance loss in this case.
   


I have two worries about this approach:

1.  Affine wakeups were introduced for a reason; if we disable them 
(even just for vcpus), we lose something.  Maybe we can tune the 
mechanism not to fail, instead of disabling it.


2.  Affine wakeups are a scheduler-internal detail.  How do we explain 
what it does?  The scheduler may not have affine wakeups in a few years, 
yet we'll have an ABI to disable them.


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: VM performance issue in KVM guests.

2010-04-17 Thread Avi Kivity

On 04/15/2010 04:33 PM, Peter Zijlstra wrote:

On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
   

Certainly that has even greater potential for Linux guests.  Note that
we spin on mutexes now, so we need to prevent preemption while the lock
owner is running.
 

either that, or disable spinning on (para) virt kernels.


What would you do instead?

Note we can't disable spinning on Windows or pre 2.6.36 kernels.


Para virt
kernels could possibly extend the thing by also checking to see if the
owner's vcpu is running.
   


Certainly that's worth doing.

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: VM performance issue in KVM guests.

2010-04-16 Thread Peter Zijlstra
On Thu, 2010-04-15 at 09:43 -0700, Srivatsa Vaddagiri wrote:
 On Thu, Apr 15, 2010 at 03:33:18PM +0200, Peter Zijlstra wrote:
  On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
   
   Certainly that has even greater potential for Linux guests.  Note that 
   we spin on mutexes now, so we need to prevent preemption while the lock 
   owner is running. 
  
  either that, or disable spinning on (para) virt kernels. Para virt
  kernels could possibly extend the thing by also checking to see if the
  owner's vcpu is running.
 
 I suspect we will need a combination of both approaches, given that we will
 not always be able to avoid preempting guests in their critical sections
 (too-long critical sections, or real-time tasks wanting to preempt). Another
 idea is to gang-schedule VCPUs of the same guest as much as possible?

Except gang scheduling is a scalability nightmare waiting to happen. I
much prefer this hint thing.


Re: VM performance issue in KVM guests.

2010-04-15 Thread Avi Kivity

On 04/15/2010 07:58 AM, Srivatsa Vaddagiri wrote:
On Sun, Apr 11, 2010 at 11:40 PM, Avi Kivity a...@redhat.com wrote:


The current handling of PLE is very suboptimal.  With proper
directed yield we should be much better there.



Hi Avi,
  By directed yield, do you mean transferring the timeslice of 
one thread (which is contending for a lock) to another thread (which 
is holding the lock)?


It's a priority transfer (in CFS terms, vruntime) (we don't know who 
holds the lock, so we pick a co-vcpu at random).


If, at that point in time, the lock-holder thread/VCPU is not actually 
running, i.e. it is at the back of the runqueue, would it help much? In 
that case it will take time for the lock holder to run again, and the 
default timeslice it would have got might have been sufficient for it to 
release the lock anyway?


The idea is to increase the chances of the target vcpu running, and to 
decrease the chances of the spinner running (hopefully they change places).
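
To illustrate the bookkeeping behind such a vruntime transfer, here is a toy 
user-space sketch; the structures and names are made up for illustration, and 
this is not the actual KVM/CFS code:

/*
 * Toy model of "directed yield as a priority (vruntime) transfer".
 * Illustration only; the real kvm_vcpu_on_spin()/CFS code is far more
 * involved, and all names here are invented.
 */
#include <stdio.h>
#include <stdlib.h>

struct toy_vcpu {
    int id;
    unsigned long vruntime;   /* lower value == scheduled sooner */
};

/* The spinner gives up some of its progress to a random sibling vcpu. */
static void directed_yield(struct toy_vcpu *vcpus, int nr, int spinner,
                           unsigned long donation)
{
    int target;

    /* We do not know which vcpu holds the lock, so pick any other one. */
    do {
        target = rand() % nr;
    } while (target == spinner);

    /* Penalize the spinner, boost the target: hopefully they swap places. */
    vcpus[spinner].vruntime += donation;
    if (vcpus[target].vruntime >= donation)
        vcpus[target].vruntime -= donation;
    else
        vcpus[target].vruntime = 0;

    printf("vcpu%d yielded to vcpu%d\n", spinner, target);
}

int main(void)
{
    struct toy_vcpu vcpus[4] = {
        { 0, 1000 }, { 1, 1200 }, { 2, 1100 }, { 3, 1300 }
    };

    directed_yield(vcpus, 4, 0, 300);
    for (int i = 0; i < 4; i++)
        printf("vcpu%d vruntime=%lu\n", i, vcpus[i].vruntime);
    return 0;
}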




I am also working on a prototype for some other technique here - to 
avoid preempting guest threads/VCPUs in the middle of their 
(spin-lock) critical section. This requires the guest to hint the host 
when it is in such a section. [1] has shown a 33% improvement to an 
apache benchmark based on this idea.
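
For reference, a minimal sketch of the general shape of such a guest-to-host 
hint (guest side only); the shared-state layout and all names below are 
assumptions for illustration, not the interface used in [1]:

/*
 * Hypothetical guest-side hint: "I am inside a spinlock critical section,
 * please do not preempt me right now."  How shared_state is registered
 * with the host, and the host-side policy, are left out; everything here
 * is an assumption for illustration.
 */
#include <stdatomic.h>

struct vcpu_shared_state {
    /* Non-zero while this vcpu is inside a spinlock-protected section. */
    atomic_int preempt_defer_count;
};

/* One such structure per vcpu, in memory the hypervisor can read. */
static struct vcpu_shared_state this_vcpu_state;

static inline void critical_section_enter(void)
{
    atomic_fetch_add_explicit(&this_vcpu_state.preempt_defer_count, 1,
                              memory_order_acquire);
}

static inline void critical_section_exit(void)
{
    atomic_fetch_sub_explicit(&this_vcpu_state.preempt_defer_count, 1,
                              memory_order_release);
}

/* Usage around an ordinary test-and-set spinlock. */
void guest_lock_example(atomic_flag *lock)
{
    /* Hint is raised before acquiring, so it also covers the acquisition. */
    critical_section_enter();
    while (atomic_flag_test_and_set(lock))
        ;                              /* spin */
    /* ... critical section ... */
    atomic_flag_clear(lock);
    critical_section_exit();           /* hint cleared */
}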




Certainly that has even greater potential for Linux guests.  Note that 
we spin on mutexes now, so we need to prevent preemption while the lock 
owner is running.



--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: VM performance issue in KVM guests.

2010-04-15 Thread Peter Zijlstra
On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
 
 Certainly that has even greater potential for Linux guests.  Note that 
 we spin on mutexes now, so we need to prevent preemption while the lock 
 owner is running. 

either that, or disable spinning on (para) virt kernels. Para virt
kernels could possibly extend the thing by also checking to see if the
owner's vcpu is running.
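
A rough sketch of what "spin only while the owner's vcpu is running" could 
look like; vcpu_is_running() stands in for a hypothetical paravirt query 
backed by host-maintained state and is stubbed out here, so this is an 
illustration rather than an existing interface:

#include <sched.h>
#include <stdatomic.h>
#include <stdbool.h>

struct pv_mutex {
    atomic_int owner_vcpu;            /* -1 when unlocked */
};

/*
 * Stub: in a real paravirt setup the host would maintain this state in
 * memory shared with the guest.  Returning true means "the owner's vcpu
 * currently has a physical cpu".
 */
static bool vcpu_is_running(int vcpu)
{
    (void)vcpu;
    return true;
}

void pv_mutex_lock(struct pv_mutex *m, int self)
{
    for (;;) {
        int expected = -1;
        if (atomic_compare_exchange_strong(&m->owner_vcpu, &expected, self))
            return;                   /* acquired */

        /*
         * Spin only while the current owner is actually on a physical
         * cpu.  If it has been preempted, spinning just delays the moment
         * it can release the lock, so give up the cpu instead.
         */
        if (vcpu_is_running(expected))
            continue;

        sched_yield();
    }
}

void pv_mutex_unlock(struct pv_mutex *m)
{
    atomic_store(&m->owner_vcpu, -1);
}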



Re: VM performance issue in KVM guests.

2010-04-15 Thread Srivatsa Vaddagiri
On Thu, Apr 15, 2010 at 03:33:18PM +0200, Peter Zijlstra wrote:
 On Thu, 2010-04-15 at 11:18 +0300, Avi Kivity wrote:
  
  Certainly that has even greater potential for Linux guests.  Note that 
  we spin on mutexes now, so we need to prevent preemption while the lock 
  owner is running. 
 
 either that, or disable spinning on (para) virt kernels. Para virt
 kernels could possibly extend the thing by also checking to see if the
 owner's vcpu is running.

I suspect we will need a combination of both approaches, given that we will not
always be able to avoid preempting guests in their critical sections (too-long
critical sections, or real-time tasks wanting to preempt). Another idea is to
gang-schedule VCPUs of the same guest as much as possible?

- vatsa


RE: VM performance issue in KVM guests.

2010-04-15 Thread Zhang, Xiantao
Avi Kivity wrote:
 On 04/14/2010 06:24 AM, Zhang, Xiantao wrote:
 
 Spin loops need to be addressed first, they are known to kill
 performance in overcommit situations.
 
 
 Even in the overcommit case, if the vcpu threads of one qemu are not
 scheduled or pulled to the same logical processor, the performance
 drop is tolerable, as in Xen's case today. But KVM has to
 suffer additional performance loss, since the host's scheduler
 actively pulls these vcpu threads together.
 
 
 
 Can you quantify this loss?  Give examples of what happens?
 
 For example, one machine is configured with 2 pCPUs and there are
 two Windows guests running on the machine, each guest is
 configured with 2 vcpus, and one webbench server runs in it.
 If we use the host's default scheduler, webbench's performance is very
 bad, but if we pin each guest's vCPU0 to pCPU0 and vCPU1 to pCPU1, we can
 see a 5-10X performance improvement with the same CPU utilization.
 In addition, we also see that kvm's scalability is impacted on
 large systems: in some performance experiments, kvm's performance begins
 to drop when vCPUs are overcommitted and pCPUs are saturated, but once
 the wake_up_affine feature is switched off in the scheduler, kvm's
 performance keeps rising in this case.
 
 
 Ok.  This is probably due to spinlock contention.

Yes, exactly. 

 When vcpus are pinned to pcpus, there is a 50% chance that a guest's
 vcpus will be co-scheduled and spinlocks will perform well.
 
 When vcpus are not pinned, but affine wakeups are disabled, there is a
 33% chance that vcpus will be co-scheduled.
 
 When vcpus are not pinned and affine wakeups are enabled there is a 0%
 chance that vcpus will be co-scheduled.
 
 Keeping both vcpus on the same core actually makes sense since they
 can communicate through the local cache faster than across cores. 
 What we need is to make sure that they don't spin.
 
 Windows 2008 can report spinlock spinning through a hypercall.  Can
 you hook to that interface and see if it happens regularly? 
 Alternatively use a PLE capable host and trace the kvm_vcpu_on_spin()
 function. 
We only tried Windows 2003 for the experiments, and have no data related to 
Windows 2008, but maybe we can have a try later.  Anyway, the key point is that 
we have to enhance the scheduler to let it know which threads are vcpu threads, 
to avoid the performance loss in this case.
Xiantao


Re: VM performance issue in KVM guests.

2010-04-14 Thread Avi Kivity

On 04/14/2010 06:24 AM, Zhang, Xiantao wrote:



Spin loops need to be addressed first, they are known to kill
performance in overcommit situations.

 

Even in the overcommit case, if the vcpu threads of one qemu are not
scheduled or pulled to the same logical processor, the performance
drop is tolerable, as in Xen's case today. But KVM has to
suffer additional performance loss, since the host's scheduler
actively pulls these vcpu threads together.


   

Can you quantify this loss?  Give examples of what happens?
 

For example, one machine is configured with 2 pCPUs and there are two Windows 
guests running on the machine, each guest is configured with 2 vcpus, and one 
webbench server runs in it.
If we use the host's default scheduler, webbench's performance is very bad, but 
if we pin each guest's vCPU0 to pCPU0 and vCPU1 to pCPU1, we can see a 5-10X 
performance improvement with the same CPU utilization.
In addition, we also see that kvm's scalability is impacted on large systems: 
in some performance experiments, kvm's performance begins to drop when vCPUs 
are overcommitted and pCPUs are saturated, but once the wake_up_affine feature 
is switched off in the scheduler, kvm's performance keeps rising in this case.
   


Ok.  This is probably due to spinlock contention.

When vcpus are pinned to pcpus, there is a 50% chance that a guest's 
vcpus will be co-scheduled and spinlocks will perform well.


When vcpus are not pinned, but affine wakeups are disabled, there is a 
33% chance that vcpus will be co-scheduled.


When vcpus are not pinned and affine wakeups are enabled there is a 0% 
chance that vcpus will be co-scheduled.
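
One way to read these estimates: with two 2-vcpu guests saturating 2 pcpus, 
ask how often the sibling vcpu is running on the other pcpu at the moment one 
vcpu of the guest is running. The toy Monte Carlo below reproduces 1/2, 1/3 
and 0 under that model; it is only an illustration of the arithmetic, not of 
the real scheduler:

/*
 * Toy Monte Carlo behind the 50% / 33% / 0% co-scheduling estimates.
 * Model: 2 pcpus, two guests (A and B) with 2 vcpus each, all runnable.
 * Question: given that A's vcpu0 is running, how often is A's vcpu1
 * running on the other pcpu at the same instant?
 */
#include <stdio.h>
#include <stdlib.h>

#define A0 0
#define A1 1
#define B0 2
#define B1 3

/* Pinned: pcpu0 runs one of {A0,B0}, pcpu1 runs one of {A1,B1}. */
static double sim_pinned(int samples)
{
    int hit = 0, seen = 0;
    for (int i = 0; i < samples; i++) {
        int run0 = (rand() & 1) ? A0 : B0;
        int run1 = (rand() & 1) ? A1 : B1;
        if (run0 == A0) { seen++; if (run1 == A1) hit++; }
    }
    return (double)hit / seen;
}

/* Unpinned, no affine wakeups: 4 threads spread 2+2 at random. */
static double sim_unpinned(int samples)
{
    int hit = 0, seen = 0;
    for (int i = 0; i < samples; i++) {
        int t[4] = { A0, A1, B0, B1 };
        /* Fisher-Yates shuffle; slots 0-1 are pcpu0, slots 2-3 pcpu1. */
        for (int j = 3; j > 0; j--) {
            int k = rand() % (j + 1), tmp = t[j];
            t[j] = t[k]; t[k] = tmp;
        }
        /* Slot 0 runs on pcpu0, slot 2 runs on pcpu1. */
        int run0 = t[0], run1 = t[2];
        if (run0 == A0 || run1 == A0) {
            seen++;
            if (run0 == A1 || run1 == A1) hit++;
        }
    }
    return (double)hit / seen;
}

int main(void)
{
    srand(1);
    printf("pinned:              ~%.2f\n", sim_pinned(1000000));
    printf("unpinned, no affine: ~%.2f\n", sim_unpinned(1000000));
    printf("affine wakeups (both vcpus pulled to one pcpu): 0 by construction\n");
    return 0;
}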


Keeping both vcpus on the same core actually makes sense since they can 
communicate through the local cache faster than across cores.  What we 
need is to make sure that they don't spin.


Windows 2008 can report spinlock spinning through a hypercall.  Can you 
hook to that interface and see if it happens regularly?  Alternatively 
use a PLE capable host and trace the kvm_vcpu_on_spin() function.


--
error compiling committee.c: too many arguments to function



Re: VM performance issue in KVM guests.

2010-04-13 Thread Avi Kivity

On 04/13/2010 03:50 AM, Zhang, Xiantao wrote:

Avi Kivity wrote:
   

On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
 
   

What was the performance hit?  What was your I/O setup (image
format, using aio?)

 

The issue only happens when vcpus are over-committed (e.g.
vcpu/pcpu > 2) and physical cpus are saturated. For example, when we run
webbench in a Windows guest in this case, its performance drops by 80%.
In our experiment, we are using an image file through virtio, and I
think aio should be used by default as well.

   

Is this on a machine that does pause-loop exits?  The current handling
of PLE is very suboptimal.  With proper directed yield we should be
much better there.

Without PLE, we need paravirtualized spinlocks, no way around it.
 

PLE can mitigate the issue to some extent, and a pv solution 
should be helpful as well.  But for Windows guests running on machines without 
PLE, we still need to enhance the host side to resolve the issue.
   


Well, was this on a machine with PLE or without PLE?


Spin loops need to be addressed first, they are known to kill
performance in overcommit situations.
 

Even in the overcommit case, if the vcpu threads of one qemu are not scheduled 
or pulled to the same logical processor, the performance drop is tolerable, as 
in Xen's case today. But KVM has to suffer additional performance loss, since 
the host's scheduler actively pulls these vcpu threads together.

   


Can you quantify this loss?  Give examples of what happens?


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



RE: VM performance issue in KVM guests.

2010-04-13 Thread Zhang, Xiantao
Avi Kivity wrote:
 On 04/13/2010 03:50 AM, Zhang, Xiantao wrote:
 Avi Kivity wrote:
 
 On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
 
 
 What was the performance hit?  What was your I/O setup (image
 format, using aio?) 
 
 
 The issue only happens when vcpus are over-committed (e.g.
 vcpu/pcpu > 2) and physical cpus are saturated. For example, when
 we run webbench in a Windows guest in this case, its performance drops by
 80%. In our experiment, we are using an image file through virtio,
 and I think aio should be used by default as well.
 
 
 Is this on a machine that does pause-loop exits?  The current
 handling of PLE is very suboptimal.  With proper directed yield we
 should be much better there. 
 
 Without PLE, we need paravirtualized spinlocks, no way around it.
 
 PLE can mitigate the issue to some extent, and a pv
 solution should be helpful as well. But for Windows guests running on
 machines without PLE, we still need to enhance the host side to resolve
 the issue.
 
 
 Well, was this on a machine with PLE or without PLE?

I mean the machine has no PLE feature support. Even with PLE feature 
support, there is still some performance loss due to PLE's own cost.
 
 Spin loops need to be addressed first, they are known to kill
 performance in overcommit situations.
 
 Even in the overcommit case, if the vcpu threads of one qemu are not
 scheduled or pulled to the same logical processor, the performance
 drop is tolerable, as in Xen's case today. But KVM has to
 suffer additional performance loss, since the host's scheduler
 actively pulls these vcpu threads together.
 
 
 Can you quantify this loss?  Give examples of what happens?

For example, one machine is configured with 2 pCPUs and there are two Windows 
guests running on the machine, each guest is configured with 2 vcpus, and one 
webbench server runs in it.
If we use the host's default scheduler, webbench's performance is very bad, but 
if we pin each guest's vCPU0 to pCPU0 and vCPU1 to pCPU1, we can see a 5-10X 
performance improvement with the same CPU utilization.
In addition, we also see that kvm's scalability is impacted on large systems: 
in some performance experiments, kvm's performance begins to drop when vCPUs 
are overcommitted and pCPUs are saturated, but once the wake_up_affine feature 
is switched off in the scheduler, kvm's performance keeps rising in this case.
Xiantao



Re: VM performance issue in KVM guests.

2010-04-12 Thread Avi Kivity

On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:



What was the performance hit?  What was your I/O setup (image format,
using aio?)
 

The issue only happens when vcpus are over-committed (e.g. vcpu/pcpu > 2) and 
physical cpus are saturated. For example, when we run webbench in a Windows 
guest in this case, its performance drops by 80%.  In our experiment, we are 
using an image file through virtio, and I think aio should be used by default 
as well.
   


Is this on a machine that does pause-loop exits?  The current handling of 
PLE is very suboptimal.  With proper directed yield we should be much 
better there.


Without PLE, we need paravirtualized spinlocks, no way around it.
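
For background, the guest-side pattern that pause-loop exiting keys on is a 
tight spin loop executing the x86 PAUSE instruction, as in the simplified 
user-space lock below (assuming x86 and a GCC/Clang-style compiler; this is a 
model of the pattern, not KVM or guest kernel code):

/*
 * Minimal illustration of the loop that pause-loop exiting (PLE) detects:
 * a tight spin around a lock, executing PAUSE on every iteration.
 */
#include <stdatomic.h>

typedef struct {
    atomic_flag locked;
} spinlock_t;

static inline void cpu_relax(void)
{
    __builtin_ia32_pause();     /* the PAUSE instruction PLE counts */
}

static void spin_lock(spinlock_t *l)
{
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire))
        cpu_relax();            /* if the holder's vcpu is preempted, a
                                   guest vcpu can sit here for its whole
                                   timeslice -- this is the problem */
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

int main(void)
{
    spinlock_t l = { ATOMIC_FLAG_INIT };
    spin_lock(&l);
    spin_unlock(&l);
    return 0;
}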


After analysis of the Linux scheduler, we found it is indeed caused
by known features of the Linux scheduler, such as AFFINE_WAKEUPS,
SYNC_WAKEUPS etc. With these features on, the Linux scheduler often tries
to schedule the vcpu threads of one guest onto the same logical
processor when vcpus are over-committed and logical processors are
saturated. Once the vcpu threads of one VM are scheduled to the same
LP, system performance drops dramatically with some workloads (like
webbench running in a Windows guest).

   

Were the affine wakeups due to the kernel (emulated guest IPIs) or
qemu?
 

We have two basic guesses about the reason: one is wakeup affinity between 
vcpu threads due to IPIs, and the other is wakeup affinity between I/O threads 
and vcpu threads.
   


It would be good to find out.


Most likely it also hits non-virtualized loads as well.  If the
scheduler pulls two long-running threads to the same cpu, performance
will take a hit.
 

The hit only happens when physical cpus are saturated. Scheduling multiple 
non-virtualized threads of one process onto the same processor can benefit 
performance due to cache sharing or other affinities, but scheduling two vcpu 
threads onto the same processor hurts performance a lot because of mutual 
spin-lock waiting in the guest.
   


Spin loops need to be addressed first, they are known to kill 
performance in overcommit situations.


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



RE: VM performance issue in KVM guests.

2010-04-12 Thread Zhang, Xiantao
Avi Kivity wrote:
 On 04/12/2010 05:04 AM, Zhang, Xiantao wrote:
 
 What was the performance hit?  What was your I/O setup (image
 format, using aio?) 
 
 The issue only happens when vcpus are over-committed (e.g.
 vcpu/pcpu > 2) and physical cpus are saturated. For example, when we run
 webbench in a Windows guest in this case, its performance drops by 80%.
 In our experiment, we are using an image file through virtio, and I
 think aio should be used by default as well.
 
 
 Is this on a machine that does pause-loop exits?  The current handling
 of PLE is very suboptimal.  With proper directed yield we should be
 much better there.
 
 Without PLE, we need paravirtualized spinlocks, no way around it.

PLE can mitigate the issue to some extent, and a pv solution 
should be helpful as well.  But for Windows guests running on machines without 
PLE, we still need to enhance the host side to resolve the issue.

 After analysis of the Linux scheduler, we found it is indeed caused
 by known features of the Linux scheduler, such as AFFINE_WAKEUPS,
 SYNC_WAKEUPS etc. With these features on, the Linux scheduler often
 tries to schedule the vcpu threads of one guest onto the same
 logical processor when vcpus are over-committed and logical
 processors are saturated. Once the vcpu threads of one VM are
 scheduled to the same LP, system performance drops dramatically
 with some workloads (like webbench running in a Windows guest).
 
 The hit only happens when physical cpus are saturated. Scheduling
 multiple non-virtualized threads of one process onto the same
 processor can benefit performance due to cache sharing or other
 affinities, but scheduling two vcpu threads onto the same processor
 hurts performance a lot because of mutual spin-lock waiting in the
 guest.
 
 Spin loops need to be addressed first, they are known to kill
 performance in overcommit situations.

Even in the overcommit case, if the vcpu threads of one qemu are not scheduled 
or pulled to the same logical processor, the performance drop is tolerable, as 
in Xen's case today. But KVM has to suffer additional performance loss, since 
the host's scheduler actively pulls these vcpu threads together. 
Xiantao 



RE: VM performance issue in KVM guests.

2010-04-11 Thread Zhang, Xiantao
Avi Kivity wrote:
 (copying lkml and some scheduler folk)
 
 On 04/10/2010 11:16 AM, Zhang, Xiantao wrote:
 Hi, all
 We are working on scalability for KVM guests, and found
 one big issue in the Linux scheduler that may impact a guest's
 performance and scalability a lot for some special workloads running
 in a VM.  In the current Linux scheduler, there are some features to
 enhance application performance which are defined in the file
 kvm.git/kernel/sched_features.h. Certainly, they are mostly
 beneficial optimizations to improve system performance, but
 unluckily, some of them may hurt VM performance and scalability in
 the KVM case.  We know that if two or more vcpus of one guest are
 scheduled to the same logical processor, the same CPU utilization may
 generate less valid output, due to mutual locking in the VM's OS, than
 when they are scheduled to different logical processors.  And we also
 know that a VM's vcpus are emulated or executed through the threads of
 Qemu for KVM.  If the vcpu threads of qemu are often pulled to the same
 logical processor by some features of the Linux scheduler, KVM
 guests' performance may be hurt a lot.  In our performance testing,
 the results also show this performance bottleneck due to this issue.
 
 What was the performance hit?  What was your I/O setup (image format,
 using aio?)

The issue only happens when vcpus are over-committed (e.g. vcpu/pcpu > 2) and 
physical cpus are saturated. For example, when we run webbench in a Windows 
guest in this case, its performance drops by 80%.  In our experiment, we are 
using an image file through virtio, and I think aio should be used by default 
as well. 


 After analysis of the Linux scheduler, we found it is indeed caused
 by known features of the Linux scheduler, such as AFFINE_WAKEUPS,
 SYNC_WAKEUPS etc. With these features on, the Linux scheduler often tries
 to schedule the vcpu threads of one guest onto the same logical
 processor when vcpus are over-committed and logical processors are
 saturated. Once the vcpu threads of one VM are scheduled to the same
 LP, system performance drops dramatically with some workloads (like
 webbench running in a Windows guest).
 
 
 Were the affine wakeups due to the kernel (emulated guest IPIs) or
 qemu? 

We have two basic guesses about the reason: one is wakeup affinity between 
vcpu threads due to IPIs, and the other is wakeup affinity between I/O threads 
and vcpu threads. 

 To verify this finding, we also worked out a simple patch,
 attached in the mail, to dynamically switch off the two scheduler
 features mentioned above when the scheduler knows the tasks being
 scheduled are vcpu threads, and we found the whole system's performance
 and scalability are improved a lot.  Certainly, this patch is not
 good for upstream, but it can enlighten us to think about how to optimize
 the Linux scheduler, and we also want to initiate a discussion about
 how to make Linux's scheduler more friendly to virtualization.
 Besides, this issue may not be KVM-specific; instead it
 should be a common issue for host-based VMs, and we also expect that
 we can have an elegant solution to thoroughly resolve the
 performance and scalability gap compared with hypervisor-based VMs.
 
 
 Most likely it also hits non-virtualized loads as well.  If the
 scheduler pulls two long-running threads to the same cpu, performance
 will take a hit.

The hit only happens when physical cpus are saturated. Scheduling multiple 
non-virtualized threads of one process onto the same processor can benefit 
performance due to cache sharing or other affinities, but scheduling two vcpu 
threads onto the same processor hurts performance a lot because of mutual 
spin-lock waiting in the guest. 
Xiantao


Re: VM performance issue in KVM guests.

2010-04-10 Thread Avi Kivity

(copying lkml and some scheduler folk)

On 04/10/2010 11:16 AM, Zhang, Xiantao wrote:

Hi, all
   We are working on scalability for KVM guests, and found one big issue in 
the Linux scheduler that may impact a guest's performance and scalability a 
lot for some special workloads running in a VM.  In the current Linux 
scheduler, there are some features to enhance application performance which 
are defined in the file kvm.git/kernel/sched_features.h. Certainly, they are 
mostly beneficial optimizations to improve system performance, but unluckily, 
some of them may hurt VM performance and scalability in the KVM case.
   We know that if two or more vcpus of one guest are scheduled to the same 
logical processor, the same CPU utilization may generate less valid output, 
due to mutual locking in the VM's OS, than when they are scheduled to 
different logical processors.  And we also know that a VM's vcpus are emulated 
or executed through the threads of Qemu for KVM.  If the vcpu threads of qemu 
are often pulled to the same logical processor by some features of the Linux 
scheduler, KVM guests' performance may be hurt a lot.  In our performance 
testing, the results also show this performance bottleneck due to this issue.


What was the performance hit?  What was your I/O setup (image format, 
using aio?)



After analysis of the Linux scheduler, we found it is indeed caused by known 
features of the Linux scheduler, such as AFFINE_WAKEUPS, SYNC_WAKEUPS etc. With 
these features on, the Linux scheduler often tries to schedule the vcpu threads 
of one guest onto the same logical processor when vcpus are over-committed and 
logical processors are saturated. Once the vcpu threads of one VM are scheduled 
to the same LP, system performance drops dramatically with some workloads (like 
webbench running in a Windows guest).
   


Were the affine wakeups due to the kernel (emulated guest IPIs) or qemu?


To verify this finding, we also worked out a simple patch, attached in the 
mail, to dynamically switch off the two scheduler features mentioned above when 
the scheduler knows the tasks being scheduled are vcpu threads, and we found 
the whole system's performance and scalability are improved a lot.  Certainly, 
this patch is not good for upstream, but it can enlighten us to think about how 
to optimize the Linux scheduler, and we also want to initiate a discussion 
about how to make Linux's scheduler more friendly to virtualization.  Besides, 
this issue may not be KVM-specific; instead it should be a common issue for 
host-based VMs, and we also expect that we can have an elegant solution to 
thoroughly resolve the performance and scalability gap compared with 
hypervisor-based VMs.
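
To illustrate the shape of the idea (the actual patch is only attached to the 
original mail and is not reproduced here), a toy user-space model of "skip the 
affine-wakeup pull when the woken task is a vcpu thread" follows; the task 
structure, the is_vcpu flag and select_wake_cpu() are all made up, and the 
real change would likely live around wake_affine()/select_task_rq_fair() and 
sched_features.h:

/*
 * Toy model of the proposed behaviour: affine wakeups normally pull the
 * woken task toward the waker's cpu, but if the woken task is a vcpu
 * thread we keep it where it was, so that sibling vcpus are not stacked
 * on one logical processor.  Purely illustrative; not the actual patch.
 */
#include <stdbool.h>
#include <stdio.h>

struct toy_task {
    const char *name;
    int prev_cpu;        /* where the task last ran */
    bool is_vcpu;        /* hypothetical "this is a KVM vcpu thread" flag */
};

static int select_wake_cpu(const struct toy_task *p, int waker_cpu,
                           bool affine_wakeups_enabled)
{
    /*
     * Affine wakeup: place the woken task on the waker's cpu to share
     * cache.  Good for e.g. a pipe producer/consumer pair, bad when the
     * "pair" is two vcpus of the same guest that then spin on each other.
     */
    if (affine_wakeups_enabled && !p->is_vcpu)
        return waker_cpu;

    return p->prev_cpu;
}

int main(void)
{
    struct toy_task io_thread = { "io",    1, false };
    struct toy_task vcpu1     = { "vcpu1", 1, true  };

    /* vcpu0 running on cpu 0 wakes both tasks (e.g. I/O completion + IPI). */
    printf("io    -> cpu %d\n", select_wake_cpu(&io_thread, 0, true));
    printf("vcpu1 -> cpu %d\n", select_wake_cpu(&vcpu1,     0, true));
    return 0;
}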
   


Most likely it also hits non-virtualized loads as well.  If the 
scheduler pulls two long-running threads to the same cpu, performance 
will take a hit.



--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
