On 10/04/2010 11:12 AM, Michael S. Tsirkin wrote:
On Mon, Oct 04, 2010 at 09:01:14AM -0500, Anthony Liguori wrote:
On 10/04/2010 03:04 AM, Avi Kivity wrote:
On 10/04/2010 03:18 AM, Anthony Liguori wrote:
On 10/03/2010 09:28 AM, Michael S. Tsirkin wrote:
This is using eventfd as well.
Sorry, I meant irqfd.
I've tried using irqfd in userspace.  It hurts performance quite a bit
compared to doing an ioctl, so I would suspect this too.
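
For anyone following along, here is a minimal sketch of the two injection
paths being compared: an irqfd that is registered once and then signalled
with a plain write(), versus a direct KVM_IRQ_LINE ioctl per interrupt.
The vmfd, gsi and error handling are assumed; this is an illustration of
the mechanism, not code from QEMU or vhost.

/*
 * Sketch only: the two injection paths.  Assumes a VM fd (vmfd) with an
 * in-kernel irqchip and a GSI number (gsi); error handling is omitted.
 */
#include <stdint.h>
#include <unistd.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

/* Path 1: register an eventfd as an irqfd once... */
static int setup_irqfd(int vmfd, uint32_t gsi)
{
    int efd = eventfd(0, 0);
    struct kvm_irqfd irqfd = { .fd = efd, .gsi = gsi };

    ioctl(vmfd, KVM_IRQFD, &irqfd);
    return efd;
}

/* ...after which each injection is an 8-byte write to the eventfd. */
static void inject_via_irqfd(int efd)
{
    uint64_t one = 1;

    write(efd, &one, sizeof(one));
}

/* Path 2: inject directly with the KVM_IRQ_LINE ioctl (pulse for edge). */
static void inject_via_ioctl(int vmfd, uint32_t gsi)
{
    struct kvm_irq_level irq;

    irq.irq = gsi;
    irq.level = 1;
    ioctl(vmfd, KVM_IRQ_LINE, &irq);
    irq.level = 0;
    ioctl(vmfd, KVM_IRQ_LINE, &irq);
}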

A last_used_idx or similar mechanism should help performance
quite a bit on top of ioeventfd too.
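
One way to read the last_used_idx idea, sketched below: the guest publishes
the used index it has processed up to, and the host only signals when the
new used index crosses that point, so interrupts for work the guest is
already about to pick up are suppressed.  The names here are made up for
illustration; the wrap-safe comparison is the same one virtio's
vring_need_event() uses.

/*
 * Illustration only: host side decides whether to signal after pushing
 * a batch of used buffers.  event_idx is the used index the guest says
 * it has processed up to; old_used/new_used are used->idx before and
 * after the batch.
 */
#include <stdbool.h>
#include <stdint.h>
#include <unistd.h>

static bool need_signal(uint16_t event_idx, uint16_t new_used, uint16_t old_used)
{
    return (uint16_t)(new_used - event_idx - 1) < (uint16_t)(new_used - old_used);
}

static void maybe_signal_guest(int irq_efd, uint16_t event_idx,
                               uint16_t old_used, uint16_t new_used)
{
    if (need_signal(event_idx, new_used, old_used)) {
        uint64_t one = 1;

        write(irq_efd, &one, sizeof(one));  /* kick the irqfd */
    }
}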

Any idea why?  While irqfd does quite a bit of extra locking, it
shouldn't be that bad.
Not really.  It was somewhat counterintuitive.

A worthwhile experiment might be to do some layering violations and
have vhost do an irq injection via an ioctl and see what the
performance delta is.  I suspect it could give vhost a nice boost.
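
A rough way to eyeball that delta from userspace, assuming the
inject_via_irqfd()/inject_via_ioctl() helpers from the sketch above: time a
burst of injections through each path.  This only measures the host-side
submission cost, not vhost's in-kernel path or guest handling, so it just
gives a feel for the per-injection overhead.

/*
 * Rough microbenchmark sketch; the two inject helpers are the ones from
 * the earlier sketch, declared here for reference.
 */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

void inject_via_irqfd(int efd);
void inject_via_ioctl(int vmfd, uint32_t gsi);

static uint64_t now_ns(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

static void compare_injection_paths(int vmfd, int efd, uint32_t gsi, int iters)
{
    uint64_t t0 = now_ns();
    for (int i = 0; i < iters; i++)
        inject_via_irqfd(efd);
    uint64_t t1 = now_ns();
    printf("irqfd: %llu ns/injection\n",
           (unsigned long long)((t1 - t0) / iters));

    t0 = now_ns();
    for (int i = 0; i < iters; i++)
        inject_via_ioctl(vmfd, gsi);
    t1 = now_ns();
    printf("ioctl: %llu ns/injection\n",
           (unsigned long long)((t1 - t0) / iters));
}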
I think you don't even need to try that hard.
Just comment out this line:
//   proxy->pci_dev.msix_mask_notifier = virtio_pci_mask_notifier;
This is what switches to irqfd when the MSI vector is unmasked.
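
To spell out what that notifier switches (a concept sketch only, not QEMU's
actual virtio_pci_mask_notifier; vmfd, efd and gsi are assumed to exist):
unmasking the MSI-X vector attaches an irqfd for the vector's route, and
masking it deassigns the irqfd so injection goes back through userspace.

/*
 * Concept sketch, not QEMU's mask notifier implementation: attach the
 * irqfd when the vector is unmasked, deassign it when it is masked.
 */
#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

static void vector_mask_changed(int vmfd, int efd, uint32_t gsi, int masked)
{
    struct kvm_irqfd irqfd = {
        .fd    = efd,
        .gsi   = gsi,
        .flags = masked ? KVM_IRQFD_FLAG_DEASSIGN : 0,
    };

    /* If the notifier is never registered (the line suggested for
     * commenting out above), this switch never runs and interrupts keep
     * being injected from userspace. */
    ioctl(vmfd, KVM_IRQFD, &irqfd);
}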

That drops to userspace though for all irqs, no?

Or did you mean that commenting that line out improves performance, demonstrating the overhead of irqfd?

Regards,

Anthony Liguori

