Re: KVM inside Oracle VM

2012-03-20 Thread Al Patel
So do you really mean to run QEMU inside Oracle VM? For KVM, the
module needs to run in the host kernel. If you are already inside a
guest kernel, KVM does not give you hardware acceleration.
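
A quick way to check from inside the guest whether the virtual CPU
exposes the hardware virtualization extensions KVM needs (a generic
sketch, not specific to Oracle VM; run these in the guest):

  egrep -c '(vmx|svm)' /proc/cpuinfo   # 0 means no VT-x/AMD-V is visible to the guest
  ls -l /dev/kvm                       # appears only once the kvm modules load successfully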



On Tue, Mar 20, 2012 at 4:34 AM, Paolo Bonzini pbonz...@redhat.com wrote:
> On 19/03/2012 21:06, Sever Apostu wrote:
>>>> Any chance anyone has any feedback about KVM installed inside a Xen guest?
>>> It's really a Xen question more than a KVM question.
>>
>> Thank you for the reply, Paolo!
>>
>> Most likely both will suffer and I am prepared to live with that, but
>> is it conceptually possible?
>
> It depends on whether Xen supports it.  KVM, just like any other part of
> a Xen virtual machine, is just a user of Xen in your scenario.  That's
> why I said it's a Xen question.
>
> AFAIK Xen added support for nested virtual machines only very recently,
> I'm not even sure it's in any released version.
>
> Paolo


vhost-net configuration/performance question

2012-03-12 Thread Al Patel
Hi Folks,


We have some fundamental questions on vhost-net performance.

We are testing KVM network I/O performance by connecting two
server-class machines back to back (Cisco UCS boxes with 32 GB of
memory and 24 cores).

We tested with and without vhost-net.

QEMU version 0.15
latest libvirt version
Fedora 16

Virtio drivers are used in both cases.
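
To confirm the guest really is on virtio-net, we run a quick sanity
check inside the VM (eth0 here stands for whatever the guest names
its interface):

  ethtool -i eth0 | grep driver   # should report: driver: virtio_net
  lsmod | grep virtio_net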

We used netperf UDP_STREAM for the test, from a client on the UCS
box to a server in the VM.
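
For reference, a minimal sketch of the invocation we use (the address
192.168.1.2 is a placeholder for the VM's IP):

  # on the receiver (in the VM): start the netperf server
  netserver

  # on the sender (bare-metal UCS box): 64-byte UDP stream for 60 seconds
  netperf -H 192.168.1.2 -t UDP_STREAM -l 60 -- -m 64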

For 64-byte packets, we are seeing a throughput of 122 Mbps
(megabits per second) with vhost and 146 Mbps without vhost.

For 256-byte packets, vhost-net gives a throughput of 482 Mbps and
non-vhost gives 404 Mbps.

The servers have 1 GbE interfaces connected back to back.

We have the vhost_net module loaded for the vhost test case.
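
In case it helps reproduce the setup, this is roughly how we load
and verify the module (a sketch):

  modprobe vhost_net
  lsmod | grep vhost     # should list vhost_net
  ls -l /dev/vhost-net   # character device handed to QEMU via vhostfd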

The interface configuration we have in the domain XML is:

<interface type='network'>
  <source network='mvnet'/>
  <model type='virtio'/>
  <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='on'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>


The QEMU command-line parameters for the network interface are:
-netdev tap,fd=23,id=hostnet0,vhost=on,vhostfd=24 \
-device virtio-net-pci,tx=bh,ioeventfd=on,event_idx=on,netdev=hostnet0,id=net0,mac=52:54:00:ba:4f:3d,bus=pci.0,addr=0x3
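
As a sanity check that vhost is actually in use (assuming a single
VM on the host), the vhost kernel threads should be visible while
the guest is running:

  ps -ef | grep '\[vhost'   # shows kernel threads named vhost-<qemu-pid>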



Question: why are we seeing such low throughput for 64-byte packets?
Is there a sample test scenario/machine description for the 8x
improvement observed and documented at
http://www.linux-kvm.org/page/VhostNet ?
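
For what it is worth, a back-of-the-envelope check (our own
arithmetic, not from the wiki): at these message sizes the test is
bound by packet rate rather than bandwidth:

  # packets/s implied by 122 Mbit/s of 64-byte payloads (protocol headers ignored)
  echo $(( 122000000 / (64 * 8) ))   # ~238000 pps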



We would appreciate any pointers in the right direction.

thx
-a


Question on qemu-kvm 1.0

2012-03-05 Thread Al Patel
Hi,

We have been using qemu/kvm 0.12.5 (unchanged, with the stock 2.6.32 kernel).

I just upgraded to qemu/kvm 1.0 and see a noticeable difference in packet I/O.

I want to understand the enhancements in 1.0 that lead to better performance.

Can you give me some pointers?

Off the bat, I see new event handling code, and from observation the
qemu-kvm process is taking far less CPU.
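
For comparison we measure the process roughly like this (a sketch;
pidstat comes from the sysstat package, and we assume a single
qemu-kvm process on the host):

  pidstat -u -p $(pidof qemu-kvm) 1   # per-second CPU usage of the qemu-kvm process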

Thanks
/a


Re: Question on qemu-kvm 1.0

2012-03-05 Thread Al Patel
Side note: I am not using vhost-net yet. From what I read in the
blogs, vhost-net gives much better performance, so I am setting up
another system with vhost-net support to measure this.

I would appreciate pointers on the previous question.

/a

On Mon, Mar 5, 2012 at 11:17 AM, Al Patel alps@gmail.com wrote:
> Hi,
>
> We have been using qemu/kvm 0.12.5 (unchanged, with the stock 2.6.32 kernel).
>
> I just upgraded to qemu/kvm 1.0 and see a noticeable difference in packet I/O.
>
> I want to understand the enhancements in 1.0 that lead to better performance.
>
> Can you give me some pointers?
>
> Off the bat, I see new event handling code, and from observation the
> qemu-kvm process is taking far less CPU.
>
> Thanks
> /a


Question on vhost-net features in the latest kernel vs. 2.6.32

2012-02-22 Thread Al Patel
Hi,

We are currently using the 2.6.32 kernel, and it looks like the
vhost-net feature first appeared in 2.6.34.

Using QEMU-based I/O seems to be too taxing on our system (network
I/O is the primary application).

We are taking the vhost/net.c code from the 2.6.34 kernel, but
compared to the 3.2 kernel, the newer code has quite a few changes.


Is there a high-level overview of the major changes to vhost between
the 2.6.34 and 3.2 kernels?
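
In the absence of such a document, one way we could survey the delta
ourselves (assuming a local clone of the mainline kernel tree):

  git log --oneline v2.6.34..v3.2 -- drivers/vhost/
  git diff --stat v2.6.34 v3.2 -- drivers/vhost/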

If we were to use the 3.2-based code, what other features would we
need to import? Are there any kernel dependencies?

Thanks
-a