On Saturday 06 November 2010 08:27:15 Marcelo Tosatti wrote:
> On Thu, Nov 04, 2010 at 02:15:16PM +0800, Sheng Yang wrote:
> > Here is the latest series of MSI-X mask supporting patches.
> > 
> > The biggest change from the last version is that, in order to reduce
> > complexity, I moved all mask bit operations to the kernel, including
> > those for disabled entries. This addresses two concerns:
> > 1. KVM and QEmu each owned a part of the mask bit operation.
> > 2. QEmu needed to access the real hardware to get the mask bit
> > information.
> > 
> > So now QEmu has to use a kernel API to get the mask bit information.
> > Though this is slower than accessing the real hardware's MMIO
> > directly, direct access is unacceptable because the host OS owns the
> > device. The host OS can access the device without notifying the guest
> > (and doesn't need to do so). Userspace shouldn't bypass the host OS
> > layer to operate on the real hardware directly; doing so would
> > confuse the guest.
> > 
> > I also discarded the userspace mask operation, after showing the
> > performance numbers.
> > 
> > This version also removes the capability enabling mechanism. Because
> > we use struct kvm_assigned_msix_entry with a new IOCTL, there is no
> > compatibility issue.
> > 
> > Please review. I will run more tests with the current patches; so
> > far so good.
> 
> It would be good to know where the performance issue is with the
> entire implementation of the mask bit in QEMU:
> 
> http://www.mail-archive.com/kvm@vger.kernel.org/msg42652.html
> 
> All you mentioned in the past was "there was high CPU utilization
> when running in QEMU", and you decided to start implementing it in
> the kernel. But AFAICS you did not really look into where the problem
> was...

We've analysed the same issue in Xen and believe it was caused by
exiting to QEmu every time the guest accesses the mask bit (and some
guest kernels do this very frequently). So we provided the patches to
Xen, and they worked very well.
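
For reference, the mask bit the guest toggles lives in the Vector
Control word of each 16-byte MSI-X table entry, so every mask/unmask is
an MMIO write that has to be trapped somewhere. A minimal sketch of that
layout, following the standard PCI MSI-X offsets; the helper function is
illustrative, not code from this patch set:

#include <stdint.h>

/* Per the PCI spec, each MSI-X table entry is 16 bytes:
 * message address (low/high), message data, vector control.
 * Bit 0 of vector control is the per-entry mask bit. */
#define PCI_MSIX_ENTRY_SIZE          16
#define PCI_MSIX_ENTRY_VECTOR_CTRL   0xC
#define PCI_MSIX_ENTRY_CTRL_MASKBIT  0x1

/* Mask or unmask one entry of a mapped MSI-X table (illustrative). */
static void msix_set_mask(volatile uint32_t *table, unsigned int entry,
                          int masked)
{
    volatile uint32_t *ctrl = table +
        (entry * PCI_MSIX_ENTRY_SIZE + PCI_MSIX_ENTRY_VECTOR_CTRL) / 4;

    if (masked)
        *ctrl |= PCI_MSIX_ENTRY_CTRL_MASKBIT;
    else
        *ctrl &= ~PCI_MSIX_ENTRY_CTRL_MASKBIT;
}

In the guest this write lands in emulated MMIO; handling it in the
kernel avoids the exit to userspace described above.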

The cost of exiting to userspace in KVM is much smaller than on the
Xen side, but it is still much slower than handling it in the kernel.
I've shown the performance data for the kernel and userspace
approaches here:

http://ns.spinics.net/lists/kvm/msg43712.html

We can easily get a 20% performance gain with the in-kernel approach.
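
With everything handled in the kernel, userspace only pays an ioctl
round trip when it actually needs to observe an entry's state. A rough
sketch of what such a query could look like, using struct
kvm_assigned_msix_entry mentioned above; the "get" ioctl name and
number below are placeholders for illustration, not necessarily what
the final patches define:

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Placeholder: a "get" counterpart to KVM_ASSIGN_SET_MSIX_ENTRY.
 * The real ioctl name/number comes from the patch series. */
#define KVM_ASSIGN_GET_MSIX_ENTRY \
        _IOWR(KVMIO, 0x7f, struct kvm_assigned_msix_entry)

static int get_msix_entry(int vm_fd, uint32_t dev_id, uint16_t entry_nr,
                          struct kvm_assigned_msix_entry *out)
{
    memset(out, 0, sizeof(*out));
    out->assigned_dev_id = dev_id;
    out->entry = entry_nr;
    /* Kernel fills in the current state for this table entry. */
    return ioctl(vm_fd, KVM_ASSIGN_GET_MSIX_ENTRY, out);
}

Slower than peeking at the device's MMIO directly, but it keeps the
host OS as the sole owner of the real hardware.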

--
regards
Yang, Sheng