Ben-Ami Yassour wrote:
On Wed, 2008-07-16 at 17:36 +0300, Avi Kivity wrote:
Ben-Ami Yassour wrote:
In the last few tests that we made with PCI-passthrough and VT-d using
iperf, we were able to get the same throughput as on the native OS with a
1G NIC
Excellent!

 (with higher CPU utilization).
How much higher?

Here are some numbers for running iperf -l 1M:

e1000 NIC (behind a PCI bridge)
                       Bandwidth (Mbit/sec)    CPU utilization
Native OS                   771                      18%
Native OS with VT-d         760                      18%
KVM VT-d                    390                      95%
KVM VT-d with direct mmio   770                      84%
KVM emulated                 57                     100%

What about virtio?  Also, which emulated NIC is this?

That CPU utilization is extremely high, and somewhat illogical if native with VT-d has almost no CPU impact. Have you run oprofile yet, or have any insight into where the CPU is being burnt?

What does kvm_stat look like? I wonder if there are a large number of PIO exits. What does the interrupt count look like on native vs. KVM with VT-d?
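As a point of reference on how one might check for PIO exits: kvm_stat is essentially a reader of the counters KVM exports under debugfs. The sketch below polls a few of those counters directly; the mount point and counter names are assumptions based on a typical kvm debugfs layout, not taken from the patches in this thread.

    /* Minimal sketch, assuming KVM's debugfs statistics are mounted at
     * /sys/kernel/debug/kvm and expose per-event exit counters. */
    #include <stdio.h>

    int main(void)
    {
        const char *counters[] = { "exits", "io_exits", "mmio_exits", "irq_exits" };
        for (size_t i = 0; i < sizeof(counters) / sizeof(counters[0]); i++) {
            char path[128];
            long long value = 0;
            snprintf(path, sizeof(path), "/sys/kernel/debug/kvm/%s", counters[i]);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;               /* counter not present on this kernel */
            if (fscanf(f, "%lld", &value) == 1)
                printf("%-12s %lld\n", counters[i], value);
            fclose(f);
        }
        return 0;
    }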

Regards,

Anthony Liguori

Comment: it's not clear to me why native Linux cannot get closer to 1G for
this NIC (I verified that it's not an external network issue). But clearly we
shouldn't expect to get more than the host does with a KVM guest (especially
when the guest and host run the same OS, as in this case...).

e1000e NIC (onboard)
                       Bandwidth (Mbit/sec)    CPU utilization
Native OS                   915                      18%
Native OS with VT-d         915                      18%
KVM VT-d with direct mmio   914                      98%

Clearly we need to try and improve the CPU utilization, but I think that this is good enough for the first phase.

The following patches are the PCI-passthrough patches that Amit sent
(re-based on the last kvm tree), followed by a few improvements and the
VT-d extension.
I am also sending the userspace patches: the patch that Amit sent for
PCI passthrough and the direct-mmio extension for userspace (note that
without the direct mmio extension we get less than half the throughput).
Is mmio passthrough the reason for the performance improvement? If not, what was the problem?

Direct mmio was definitely a major improvement; without it we got half the
throughput, as you can see above.
In addition, patch 4/8 improves the interrupt handling and removes
unnecessary locks, and I assume that it also fixed performance issues
(I did not investigate exactly in what way).
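For readers not following the patches, the general idea behind "direct mmio" is to mmap() the device's BAR from sysfs and register that mapping as a KVM memory slot, so guest MMIO accesses reach the device without exiting to userspace on every access. The sketch below illustrates that idea only; it is not the actual patch, and the sysfs path, slot number, and guest address are made-up placeholders.

    /* Hedged sketch of mapping a PCI BAR straight into guest physical
     * memory; assumes an existing VM fd obtained via KVM_CREATE_VM. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    static int map_bar_into_guest(int vm_fd, uint64_t guest_phys, size_t bar_size)
    {
        /* resource0 is the first MMIO BAR exported by the PCI core in sysfs
         * (device address below is illustrative). */
        int fd = open("/sys/bus/pci/devices/0000:01:00.0/resource0", O_RDWR);
        if (fd < 0)
            return -1;

        void *bar = mmap(NULL, bar_size, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        close(fd);
        if (bar == MAP_FAILED)
            return -1;

        /* Back a range of guest physical memory with the BAR mapping, so
         * guest loads/stores go straight to the device instead of trapping. */
        struct kvm_userspace_memory_region region = {
            .slot            = 8,           /* assumed free slot number */
            .guest_phys_addr = guest_phys,  /* where the guest sees the BAR */
            .memory_size     = bar_size,
            .userspace_addr  = (uint64_t)(unsigned long)bar,
        };
        return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }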

Regards,
Ben



