Re: PCI passthrough with VT-d - native performance

2008-07-16 Thread Ben-Ami Yassour
On Wed, 2008-07-16 at 17:36 +0300, Avi Kivity wrote:
 Ben-Ami Yassour wrote:
  In the last few tests that we made with PCI passthrough and VT-d using
  iperf, we were able to get the same throughput as on the native OS with
  a 1G NIC
 
 Excellent!
 
   (with higher CPU utilization).

 
 How much higher?

Here are some numbers for running iperf -l 1M:

e1000 NIC (behind a PCI bridge)
                           Bandwidth (Mbit/sec)   CPU utilization
Native OS                  771                     18%
Native OS with VT-d        760                     18%
KVM VT-d                   390                     95%
KVM VT-d with direct mmio  770                     84%
KVM emulated                57                    100%

Comment: it's not clear to me why native Linux cannot get closer to 1G for
this NIC (I verified that it is not an external network issue). But clearly
we shouldn't hope to get more than the host does with a KVM guest
(especially since the guest and host are the same OS, as in this case...).

e1000e NIC (onboard)
                           Bandwidth (Mbit/sec)   CPU utilization
Native OS                  915                     18%
Native OS with VT-d        915                     18%
KVM VT-d with direct mmio  914                     98%

Clearly we need to try to improve the CPU utilization, but I think this is
good enough for the first phase.
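As a rough illustration of how the CPU-utilization column can be measured,
here is a sketch that samples /proc/stat around the iperf run. The server
address is a placeholder, and the numbers above were not necessarily
collected this way:

/* cpu_busy.c: illustration only. Measure host CPU busy fraction across an
 * iperf run by sampling /proc/stat before and after.  The iperf server
 * address below is a placeholder. */
#include <stdio.h>
#include <stdlib.h>

static void sample(unsigned long long *busy, unsigned long long *total)
{
    unsigned long long v[8] = {0};
    FILE *f = fopen("/proc/stat", "r");

    /* First line: cpu user nice system idle iowait irq softirq steal */
    if (!f || fscanf(f, "cpu %llu %llu %llu %llu %llu %llu %llu %llu",
                     &v[0], &v[1], &v[2], &v[3], &v[4], &v[5], &v[6], &v[7]) < 4)
        exit(1);
    fclose(f);

    *total = v[0] + v[1] + v[2] + v[3] + v[4] + v[5] + v[6] + v[7];
    *busy  = *total - v[3] - v[4];   /* everything except idle and iowait */
}

int main(void)
{
    unsigned long long b0, t0, b1, t1;

    sample(&b0, &t0);
    if (system("iperf -c 192.168.1.1 -l 1M -t 60") != 0)  /* placeholder server */
        fprintf(stderr, "iperf run failed\n");
    sample(&b1, &t1);

    printf("CPU utilization: %.0f%%\n", 100.0 * (b1 - b0) / (t1 - t0));
    return 0;
}

Run it on the host for the native rows, or inside the guest for the KVM
rows, while the iperf server is up on the other machine.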

 
  The following patches are the PCI-passthrough patches that Amit sent
  (re-based on the last kvm tree), followed by a few improvements and the
  VT-d extension.
  I am also sending the userspace patches: the patch that Amit sent for
  PCI passthrough and the direct-mmio extension for userspace (note that
  without the direct mmio extension we get less than half the throughput).

 
 Is mmio passthrough the reason for the performance improvement?  If not, 
 what was the problem?
 
Direct mmio was definitely a major improvement; without it we got half the
throughput, as you can see above.
In addition, patch 4/8 improves the interrupt handling and removes
unnecessary locks, and I assume that it also fixed performance issues
(I did not investigate exactly in what way).
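For context, the direct mmio approach maps the assigned device's BAR into
userspace (for example through its sysfs resource file) and registers that
mapping as an ordinary KVM memory slot, so guest MMIO accesses reach the
hardware without an exit to the emulator. A rough sketch of such a
registration follows; it only illustrates the concept and is not the actual
patch, and the sysfs path, slot number and guest address are placeholders:

/* Sketch only: map an assigned device's BAR0 and register it with KVM as a
 * memory slot, so guest accesses to that range go straight to the hardware
 * instead of exiting to the emulator.  The sysfs path, slot number and
 * guest address are placeholders. */
#include <fcntl.h>
#include <string.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/kvm.h>

static int map_bar_direct(int vm_fd, __u64 guest_bar_gpa, __u64 bar_size)
{
    struct kvm_userspace_memory_region region;
    void *va;
    int fd;

    /* BAR0 of the assigned device, as exported by the PCI core in sysfs. */
    fd = open("/sys/bus/pci/devices/0000:00:19.0/resource0", O_RDWR | O_SYNC);
    if (fd < 0)
        return -1;

    va = mmap(NULL, bar_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (va == MAP_FAILED)
        return -1;

    memset(&region, 0, sizeof(region));
    region.slot            = 10;              /* any unused slot */
    region.guest_phys_addr = guest_bar_gpa;   /* where the guest sees the BAR */
    region.memory_size     = bar_size;
    region.userspace_addr  = (unsigned long)va;

    /* Guest MMIO in this range is now satisfied by the direct mapping,
     * which is where the large throughput gain comes from. */
    return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
}

The kernel side still needs to be able to install mappings for such
non-RAM (pfn-mapped) pages, which is presumably what the kernel-side
direct-mmio changes take care of.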

Regards,
Ben




Re: PCI passthrough with VT-d - native performance

2008-07-16 Thread Avi Kivity

Ben-Ami Yassour wrote:

 (with higher CPU utilization).
  
  

How much higher?



Here are some numbers for running iperf -l 1M:

e1000 NIC (behind a PCI bridge)
                           Bandwidth (Mbit/sec)   CPU utilization
Native OS                  771                     18%
Native OS with VT-d        760                     18%
KVM VT-d                   390                     95%
KVM VT-d with direct mmio  770                     84%
KVM emulated                57                    100%


Comment: it's not clear to me why native Linux cannot get closer to 1G for
this NIC (I verified that it is not an external network issue). But clearly
we shouldn't hope to get more than the host does with a KVM guest
(especially since the guest and host are the same OS, as in this case...).


e1000e NIC (onboard)
                           Bandwidth (Mbit/sec)   CPU utilization
Native OS                  915                     18%
Native OS with VT-d        915                     18%
KVM VT-d with direct mmio  914                     98%

Clearly we need to try to improve the CPU utilization, but I think this is
good enough for the first phase.


  


Agreed; part of the higher utilization is of course not the fault of the
device assignment code; rather, it is ordinary virtualization overhead.
We'll have to tune this.


--
error compiling committee.c: too many arguments to function



Re: PCI passthrough with VT-d - native performance

2008-07-16 Thread Anthony Liguori

Ben-Ami Yassour wrote:

On Wed, 2008-07-16 at 17:36 +0300, Avi Kivity wrote:
  

Ben-Ami Yassour wrote:


In the last few tests that we made with PCI passthrough and VT-d using
iperf, we were able to get the same throughput as on the native OS with
a 1G NIC
  

Excellent!



 (with higher CPU utilization).
  
  

How much higher?



Here are some numbers for running iperf -l 1M:

e1000 NIC (behind a PCI bridge)
                           Bandwidth (Mbit/sec)   CPU utilization
Native OS                  771                     18%
Native OS with VT-d        760                     18%
KVM VT-d                   390                     95%
KVM VT-d with direct mmio  770                     84%
KVM emulated                57                    100%
  


What about virtio?  Also, which emulated NIC is this?

That CPU utilization is extremely high and somewhat illogical if native
w/VT-d has almost no CPU impact.  Have you run oprofile yet, or do you have
any insight into where CPU time is being burnt?


What does kvm_stat look like?  I wonder if there are a large number of 
PIO exits.  What does the interrupt count look like on native vs. KVM 
with VT-d?


Regards,

Anthony Liguori


Comment: it's not clear to me why native Linux cannot get closer to 1G for
this NIC (I verified that it is not an external network issue). But clearly
we shouldn't hope to get more than the host does with a KVM guest
(especially since the guest and host are the same OS, as in this case...).


e1000e NIC (onboard)
                           Bandwidth (Mbit/sec)   CPU utilization
Native OS                  915                     18%
Native OS with VT-d        915                     18%
KVM VT-d with direct mmio  914                     98%

Clearly we need to try to improve the CPU utilization, but I think this is
good enough for the first phase.


  

The following patches are the PCI-passthrough patches that Amit sent
(re-based on the last kvm tree), followed by a few improvements and the
VT-d extension.
I am also sending the userspace patches: the patch that Amit sent for
PCI passthrough and the direct-mmio extension for userspace (note that
without the direct mmio extension we get less than half the throughput).
  
  
Is mmio passthrough the reason for the performance improvement?  If not, 
what was the problem?




Direct mmio was definitely a major improvement; without it we got half the
throughput, as you can see above.
In addition, patch 4/8 improves the interrupt handling and removes
unnecessary locks, and I assume that it also fixed performance issues
(I did not investigate exactly in what way).

Regards,
Ben


  




Re: PCI passthrough with VT-d - native performance

2008-07-16 Thread Avi Kivity

Ben-Ami Yassour wrote:
  
That CPU utilization is extremely high and somewhat illogical if native
w/VT-d has almost no CPU impact.  Have you run oprofile yet, or do you have
any insight into where CPU time is being burnt?


What does kvm_stat look like?  I wonder if there are a large number of 
PIO exits.  What does the interrupt count look like on native vs. KVM 
with VT-d?


Regards,

Anthony Liguori




These are all good points and questions. I agree that we need to take a
deeper look into the performance issues, but I think we need to merge with
the main KVM tree first.
  


It would be good to get the host interrupt rate, to confirm that the 
host isn't flooded with interrupts.  A deeper analysis can wait.
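One quick way to get that number is to watch how fast the assigned NIC's
row in /proc/interrupts grows while iperf is running, for example with a
small probe like the one below (the device name "eth1" is a placeholder
for the assigned device's entry):

/* irq_rate.c: illustration only. Report how fast one device's row in
 * /proc/interrupts grows over one second. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

static unsigned long device_irqs(const char *name)
{
    char line[1024];
    unsigned long total = 0;
    FILE *f = fopen("/proc/interrupts", "r");

    if (!f)
        return 0;
    while (fgets(line, sizeof(line), f)) {
        char *p, *end;
        unsigned long n;

        if (!strstr(line, name))
            continue;
        p = strchr(line, ':');      /* skip the "NN:" IRQ-number prefix */
        if (!p)
            continue;
        p++;
        /* Sum the per-CPU counters; stop at the chip/handler names. */
        for (;;) {
            n = strtoul(p, &end, 10);
            if (end == p)
                break;
            total += n;
            p = end;
        }
        break;
    }
    fclose(f);
    return total;
}

int main(void)
{
    unsigned long before = device_irqs("eth1");

    sleep(1);
    printf("%lu interrupts/sec\n", device_irqs("eth1") - before);
    return 0;
}

Comparing that rate on native against the rate the host sees while the
guest drives the same load should show whether interrupt flooding is part
of the problem.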



--
error compiling committee.c: too many arguments to function



RE: PCI passthrough with VT-d - native performance

2008-07-16 Thread Han, Weidong
Anthony Liguori wrote:
 Ben-Ami Yassour wrote:
 On Wed, 2008-07-16 at 17:36 +0300, Avi Kivity wrote:
 
 Ben-Ami Yassour wrote:
 
 In the last few tests that we made with PCI passthrough and VT-d using
 iperf, we were able to get the same throughput as on the native OS
 with a 1G NIC
 
 Excellent!
 
 
  (with higher CPU utilization).
 
 
 How much higher?
 
 
 Here are some numbers for running iperf -l 1M:
 
 e1000 NIC (behind a PCI bridge)
                            Bandwidth (Mbit/sec)   CPU utilization
 Native OS                  771                     18%
 Native OS with VT-d        760                     18%
 KVM VT-d                   390                     95%
 KVM VT-d with direct mmio  770                     84%
 KVM emulated                57                    100%
 
 
 What about virtio?  Also, which emulated NIC is this?
 
 That CPU utilization is extremely high and somewhat illogical if
 native w/VT-d has almost no CPU impact.  Have you run oprofile yet, or
 do you have any insight into where CPU time is being burnt?
 
 What does kvm_stat look like?  I wonder if there are a large number of
 PIO exits.  What does the interrupt count look like on native vs. KVM
 with VT-d?
 

The e1000 NIC doesn't use PIO; its registers are accessed through MMIO.

Randy (Weidong)