Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm KVM

2014-01-15 Thread Barak Wasserstrom
Ying-Shiuan Pan,
Thanks again - a few questions.
1. Can you refer to my question about tap offload features? In the guest I
can see that eth0 has all offload features disabled, and they cannot be
enabled. I suspect this is related to the tap configuration in the host.
2. I can see that virtio-net notifies KVM upon each received packet, even
though the guest implements NAPI. This causes many switches between user
space and the hypervisor. Isn't there support for RX packet coalescing in QEMU's
virtio-net?
3. What is your best TX, RX iperf results today on Cortex A15?

Regards,
Barak
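The iperf numbers discussed throughout this thread can be reproduced with a plain TCP test. This is a sketch, not taken from the thread itself; the address 192.168.101.1 is assumed from the `ip=` kernel argument quoted later and should be replaced with the host end of the actual bridge:

```shell
# On the host (the iperf server side):
iperf -s

# In the guest: measure TX throughput toward the host end of the bridge.
# Swap the client/server roles to measure the RX direction.
iperf -c 192.168.101.1 -t 30
```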


On Wed, Jan 15, 2014 at 4:42 AM, Ying-Shiuan Pan
yingshiuan@gmail.com wrote:



 
 Best Regards,
 潘穎軒Ying-Shiuan Pan


 2014/1/14 Barak Wasserstrom wba...@gmail.com

 Ying-Shiuan Pan,
 Thanks again - please see a few questions below.

 Regards,
 Barak


 On Tue, Jan 14, 2014 at 5:37 AM, Ying-Shiuan Pan 
 yingshiuan@gmail.com wrote:

 Hi, Barak,

 Hope the following info can help you

 1.
 HOST:
  http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git
 branch: v3.10-arndale
 config: arch/arm/configs/exynos5_arndale_defconfig
 dtb: arch/arm/boot/dts/exynos5250-arndale.dtb
 rootfs: Ubuntu 13.10

 GUEST:
 Official 3.12
  config: arch/arm/configs/vexpress_defconfig  with virtio-devices enabled
 dtb: arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dtb
 rootfs: Ubuntu 12.04

 2.
 We are still developing it and will try to open-source it ASAP.
 The main purpose of that patch is to introduce ioeventfd support into kvm-arm.

 [Barak] Do you have an estimate of when you can release these
 patches?

 Actually, no. I will discuss the release plan with my boss.

  [Barak] Is this required for enabling vhost-net?

 Yes, because vhost-net relies on ioeventfd to receive kick requests from
 the front-end driver.




 3. as mentioned in 1.

 4. qemu-1.6.0

 5. We ported part of guest/host notifiers of virtio-pci to virtio-mmio

 [Barak] Any patches available for this?

 I have not seen any, but somebody else might also be developing this.

 [Barak] Is this required for enabling vhost-net?

 Yes. Without those notifiers, you will see the error messages you
 mentioned below.




 6. /usr/bin/qemu-system-arm -enable-kvm -kernel /root/nfs/zImage -m 128
 --machine vexpress-a15 -cpu cortex-a15 -drive
 file=/root/nfs/guest-1G-precise-vm1.img,id=virtio-blk,if=none,cache=none
 -device virtio-blk-device,drive=virtio-blk -append "earlyprintk=ttyAMA0
 console=ttyAMA0 root=/dev/vda rw
 ip=192.168.101.101::192.168.101.1:vm1:eth0:off"
 --no-log -dtb /root/nfs/vexpress-v2p-ca15-tc1.dtb --nographic -chardev
 socket,id=mon,path=/root/vm1.monitor,server,nowait -mon
 chardev=mon,id=monitor,mode=readline -device
 virtio-net-device,netdev=net0,mac=52:54:00:12:34:01 -netdev
 type=tap,id=net0,script=/root/nfs/net.sh,downscript=no,vhost=off

 [Barak] Could you share /root/nfs/net.sh with me?

 Sorry, I forgot that.
 ---
 #!/bin/sh
 ifconfig "$1" 0.0.0.0
 brctl addif virbr0 "$1"
 ---

 virbr0 is a bridge created manually. The setup steps for virbr0 are also
 provided:
 brctl addbr virbr0
 brctl addif virbr0 eth0
 ifconfig virbr0 [ETH0_IP]
 ifconfig eth0 0.0.0.0

 [Barak] In the guest I can see that eth0 has all offload features disabled,
 and they cannot be enabled. I suspect this is related to the tap
 configuration in the host. Do you have any ideas?
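The offload state the question above refers to can be inspected from the host side. This is a sketch; tap0 is a placeholder for whatever interface name net.sh received as $1:

```shell
# Show the current offload features of the tap device on the host
# (replace tap0 with the actual tap interface name).
ethtool -k tap0

# Attempting to enable offloads by hand usually fails for a tap device:
# tap offloads are set by QEMU via the TUNSETOFFLOAD ioctl as part of
# virtio feature negotiation, so they only come on when the guest driver
# negotiates the corresponding VIRTIO_NET_F_* features.
ethtool -K tap0 tx on sg on tso on
```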



 vhost-net can be turned on by changing the last parameter to vhost=on.

 [Barak] When enabling vhost I get errors in QEMU; do you know what might
 be the reason?
 [Barak] qemu-system-arm: binding does not support guest notifiers
 [Barak] qemu-system-arm: unable to start vhost net: 38: falling back on
 userspace virtio

 QEMU requires host/guest notifiers to set up vhost-net, but virtio-mmio
 does not support them yet.
 That's why you got those error messages.





 --
 Ying-Shiuan Pan,
 H Div., CCMA, ITRI, TW


 
 Best Regards,
 潘穎軒Ying-Shiuan Pan


 2014/1/13 Barak Wasserstrom wba...@gmail.com

 Ying-Shiuan Pan,
 Your experiments with the arndale Exynos-5250 board can help me greatly,
 and I would really appreciate it if you shared the following
 information with me:
 1. Which Linux kernel did you use for the host and for the guest?
 2. Which Linux kernel patches did you use for KVM?
 3. Which config files did you use for both the host and guest?
 4. Which QEMU did you use?
 5. Which QEMU patches did you use?
 6. What is the exact command line you used for invoking the guest, with
 and without vhost-net?

 Many thanks in advance!

 Regards,
 Barak



 On Mon, Jan 13, 2014 at 5:47 AM, Ying-Shiuan Pan 
 yingshiuan@gmail.com wrote:

 Hi, Barak,

 We've tried vhost-net in kvm-arm on the arndale Exynos-5250 board (it
 requires some patches in qemu and kvm, of course). It works (without irqfd
 support); however, the performance does not increase much. The throughput
 (iperf) of virtio-net and vhost-net is 93.5Mbps and 93.6Mbps respectively.
Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm KVM

2014-01-14 Thread Barak Wasserstrom
Ying-Shiuan Pan,
Thanks again - please see a few questions below.

Regards,
Barak


On Tue, Jan 14, 2014 at 5:37 AM, Ying-Shiuan Pan
yingshiuan@gmail.com wrote:

 Hi, Barak,

 Hope the following info can help you

 1.
 HOST:
  http://git.linaro.org/people/christoffer.dall/linux-kvm-arm.git
 branch: v3.10-arndale
 config: arch/arm/configs/exynos5_arndale_defconfig
 dtb: arch/arm/boot/dts/exynos5250-arndale.dtb
 rootfs: Ubuntu 13.10

 GUEST:
 Official 3.12
 config: arch/arm/configs/vexpress_defconfig  with virtio-devices enabled
 dtb: arch/arm/boot/dts/vexpress-v2p-ca15-tc1.dtb
 rootfs: Ubuntu 12.04

 2.
 We are still developing it and will try to open-source it ASAP.
 The main purpose of that patch is to introduce ioeventfd support into kvm-arm.

[Barak] Do you have an estimate of when you can release these
patches?
[Barak] Is this required for enabling vhost-net?



 3. as mentioned in 1.

 4. qemu-1.6.0

 5. We ported part of guest/host notifiers of virtio-pci to virtio-mmio

[Barak] Any patches available for this?
[Barak] Is this required for enabling vhost-net?



 6. /usr/bin/qemu-system-arm -enable-kvm -kernel /root/nfs/zImage -m 128
 --machine vexpress-a15 -cpu cortex-a15 -drive
 file=/root/nfs/guest-1G-precise-vm1.img,id=virtio-blk,if=none,cache=none
 -device virtio-blk-device,drive=virtio-blk -append "earlyprintk=ttyAMA0
 console=ttyAMA0 root=/dev/vda rw
 ip=192.168.101.101::192.168.101.1:vm1:eth0:off"
 --no-log -dtb /root/nfs/vexpress-v2p-ca15-tc1.dtb --nographic -chardev
 socket,id=mon,path=/root/vm1.monitor,server,nowait -mon
 chardev=mon,id=monitor,mode=readline -device
 virtio-net-device,netdev=net0,mac=52:54:00:12:34:01 -netdev
 type=tap,id=net0,script=/root/nfs/net.sh,downscript=no,vhost=off

[Barak] Could you share /root/nfs/net.sh with me?
[Barak] In the guest I can see that eth0 has all offload features disabled,
and they cannot be enabled. I suspect this is related to the tap configuration
in the host. Do you have any ideas?



 vhost-net can be turned on by changing the last parameter to vhost=on.

[Barak] When enabling vhost I get errors in QEMU; do you know what might be
the reason?
[Barak] qemu-system-arm: binding does not support guest notifiers
[Barak] qemu-system-arm: unable to start vhost net: 38: falling back on
userspace virtio




 --
 Ying-Shiuan Pan,
 H Div., CCMA, ITRI, TW


 
 Best Regards,
 潘穎軒Ying-Shiuan Pan


 2014/1/13 Barak Wasserstrom wba...@gmail.com

 Ying-Shiuan Pan,
 Your experiments with the arndale Exynos-5250 board can help me greatly, and
 I would really appreciate it if you shared the following information with me:
 1. Which Linux kernel did you use for the host and for the guest?
 2. Which Linux kernel patches did you use for KVM?
 3. Which config files did you use for both the host and guest?
 4. Which QEMU did you use?
 5. Which QEMU patches did you use?
 6. What is the exact command line you used for invoking the guest, with
 and without vhost-net?

 Many thanks in advance!

 Regards,
 Barak



 On Mon, Jan 13, 2014 at 5:47 AM, Ying-Shiuan Pan 
 yingshiuan@gmail.com wrote:

 Hi, Barak,

 We've tried vhost-net in kvm-arm on the arndale Exynos-5250 board (it
 requires some patches in qemu and kvm, of course). It works (without irqfd
 support); however, the performance does not increase much. The throughput
 (iperf) of virtio-net and vhost-net is 93.5Mbps and 93.6Mbps respectively.
 I think the results are because both virtio-net and vhost-net almost
 reached the limit of 100Mbps Ethernet.

 The good news is that we even ported vhost-net in our kvm-a9 hypervisor
 (refer:
 http://academic.odysci.com/article/1010113020064758/evaluation-of-a-server-grade-software-only-arm-hypervisor),
 and the throughput of vhost-net on that platform (with 1Gbps Ethernet)
 increased from 323Mbps to 435Mbps.

 --
 Ying-Shiuan Pan,
 H Div., CCMA, ITRI, TW


 
 Best Regards,
 潘穎軒Ying-Shiuan Pan


 2014/1/13 Peter Maydell peter.mayd...@linaro.org

 On 12 January 2014 21:49, Barak Wasserstrom wba...@gmail.com wrote:
  Thanks - I got virtio-net-device running now, but performance is
 terrible.
  When I look at the guest's ethernet interface features (ethtool -k
 eth0) I
  see all offload features are disabled.
  I'm using a virtual tap on the host (tap0 bridged to eth3).
  On the tap I also see all offload features are disabled, while on br0
 and
  eth3 I see the expected offload features.
  Can this explain the terrible performance I'm facing?
  If so, how can this be changed?
  If not, what else can cause such bad performance?
  Do you know if vhost_net can be used on an ARM Cortex-A15 host/guest,
 even
  though the guest doesn't support PCI & MSI-X?

 I have no idea, I'm afraid. I don't have enough time available to
 investigate performance issues at the moment; if you find anything
 specific you can submit patches...

 thanks
 -- PMM







Re: [Qemu-devel] [PATCH v2 0/6] Add netmap backend offloadings support

2014-01-14 Thread Barak Wasserstrom
Vincenzo,
I'm using a tap interface, and in the guest virtual device I see all
offloading features are disabled, even though they are enabled on the
physical device.
Perhaps you can help? See the related information below:

Bridge to the physical interface in the host:
---
brctl addbr br0
brctl addif br0 eth3
---

/etc/qemu-ifup:
---
#!/bin/sh
set -x

switch=br0

if [ -n "$1" ]; then
    /usr/bin/sudo /usr/sbin/tunctl -u `whoami` -t "$1"
    /usr/bin/sudo /sbin/ip link set "$1" up
    sleep 0.5s
    /usr/bin/sudo /sbin/brctl addif $switch "$1"
    exit 0
else
    echo "Error: no interface specified"
    exit 1
fi
---

Activation command:
---
qemu-system-arm -enable-kvm  -M vexpress-a15  -serial /dev/ttyS1 -append
'root=/dev/vda rw console=ttyAMA0 rootwait earlyprintk' -nographic -kernel
/guest/zImage_vexpress -dtb /guest/vexpress-v2p-ca15_a7.dtb -drive
if=none,file=/guest/arm-wheezy.img,id=foo -device
virtio-blk-device,drive=foo -device
virtio-net-device,netdev=net0,mac=DE:AD:BE:EF:F4:E5 -netdev tap,id=net0
---

Physical interface features (ethtool -k eth3):
---
Features for eth3:
rx-checksumming: on
tx-checksumming: on
tx-checksum-ipv4: on
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: on
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: on
tx-scatter-gather: on
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: on
tx-tcp-segmentation: on
tx-tcp-ecn-segmentation: on
tx-tcp6-segmentation: on
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: on
receive-hashing: on
highdma: on
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: on
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
---

Virtual device features in the guest (ethtool -k eth0):
---
Features for eth0:
rx-checksumming: off [fixed]
tx-checksumming: off
tx-checksum-ipv4: off [fixed]
tx-checksum-ip-generic: off [fixed]
tx-checksum-ipv6: off [fixed]
tx-checksum-fcoe-crc: off [fixed]
tx-checksum-sctp: off [fixed]
scatter-gather: off
tx-scatter-gather: off [fixed]
tx-scatter-gather-fraglist: off [fixed]
tcp-segmentation-offload: off
tx-tcp-segmentation: off [fixed]
tx-tcp-ecn-segmentation: off [fixed]
tx-tcp6-segmentation: off [fixed]
udp-fragmentation-offload: off [fixed]
generic-segmentation-offload: off [requested on]
generic-receive-offload: on
large-receive-offload: off [fixed]
rx-vlan-offload: off [fixed]
tx-vlan-offload: off [fixed]
ntuple-filters: off [fixed]
receive-hashing: off [fixed]
highdma: on [fixed]
rx-vlan-filter: off [fixed]
vlan-challenged: off [fixed]
tx-lockless: off [fixed]
netns-local: off [fixed]
tx-gso-robust: off [fixed]
tx-fcoe-segmentation: off [fixed]
tx-gre-segmentation: off [fixed]
tx-ipip-segmentation: off [fixed]
tx-sit-segmentation: off [fixed]
tx-udp_tnl-segmentation: off [fixed]
tx-mpls-segmentation: off [fixed]
fcoe-mtu: off [fixed]
tx-nocache-copy: off
loopback: off [fixed]
rx-fcs: off [fixed]
rx-all: off [fixed]
tx-vlan-stag-hw-insert: off [fixed]
rx-vlan-stag-hw-parse: off [fixed]
rx-vlan-stag-filter: off [fixed]
l2-fwd-offload: off [fixed]
---

Regards,
Barak



On Tue, Jan 14, 2014 at 12:59 PM, Vincenzo Maffione v.maffi...@gmail.com wrote:

 The purpose of this patch series is to add offloadings support
 (TSO/UFO/CSUM) to the netmap network backend, and make it possible
 for the paravirtual network frontends (virtio-net and vmxnet3) to
 use it.
 In order to achieve this, these patches extend the existing
 net.h interface to add abstract operations through which a network
 frontend can manipulate backend offloading features, instead of
 directly calling TAP-specific functions.

 Guest-to-guest performance before these patches for 

Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm KVM

2014-01-13 Thread Barak Wasserstrom
Ying-Shiuan Pan,
Your experiments with the arndale Exynos-5250 board can help me greatly, and I
would really appreciate it if you shared the following information with me:
1. Which Linux kernel did you use for the host and for the guest?
2. Which Linux kernel patches did you use for KVM?
3. Which config files did you use for both the host and guest?
4. Which QEMU did you use?
5. Which QEMU patches did you use?
6. What is the exact command line you used for invoking the guest, with and
without vhost-net?

Many thanks in advance!

Regards,
Barak



On Mon, Jan 13, 2014 at 5:47 AM, Ying-Shiuan Pan
yingshiuan@gmail.com wrote:

 Hi, Barak,

 We've tried vhost-net in kvm-arm on the arndale Exynos-5250 board (it requires
 some patches in qemu and kvm, of course). It works (without irqfd support);
 however, the performance does not increase much. The throughput (iperf) of
 virtio-net and vhost-net is 93.5Mbps and 93.6Mbps respectively. I think
 the results are because both virtio-net and vhost-net almost reached the
 limit of 100Mbps Ethernet.

 The good news is that we even ported vhost-net in our kvm-a9 hypervisor
 (refer:
 http://academic.odysci.com/article/1010113020064758/evaluation-of-a-server-grade-software-only-arm-hypervisor),
 and the throughput of vhost-net on that platform (with 1Gbps Ethernet)
 increased from 323Mbps to 435Mbps.

 --
 Ying-Shiuan Pan,
 H Div., CCMA, ITRI, TW


 
 Best Regards,
 潘穎軒Ying-Shiuan Pan


 2014/1/13 Peter Maydell peter.mayd...@linaro.org

 On 12 January 2014 21:49, Barak Wasserstrom wba...@gmail.com wrote:
  Thanks - I got virtio-net-device running now, but performance is
 terrible.
  When I look at the guest's ethernet interface features (ethtool -k
 eth0) I
  see all offload features are disabled.
  I'm using a virtual tap on the host (tap0 bridged to eth3).
  On the tap I also see all offload features are disabled, while on br0
 and
  eth3 I see the expected offload features.
  Can this explain the terrible performance I'm facing?
  If so, how can this be changed?
  If not, what else can cause such bad performance?
  Do you know if vhost_net can be used on an ARM Cortex-A15 host/guest, even
  though the guest doesn't support PCI & MSI-X?

 I have no idea, I'm afraid. I don't have enough time available to
 investigate performance issues at the moment; if you find anything
 specific you can submit patches...

 thanks
 -- PMM





Re: [Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm KVM

2014-01-12 Thread Barak Wasserstrom
Peter,
Thanks - I got virtio-net-device running now, but performance is terrible.
When I look at the guest's ethernet interface features (ethtool -k eth0) I
see all offload features are disabled.
I'm using a virtual tap on the host (tap0 bridged to eth3).
On the tap I also see all offload features are disabled, while on br0 and
eth3 I see the expected offload features.
Can this explain the terrible performance I'm facing?
If so, how can this be changed?
If not, what else can cause such bad performance?
Do you know if vhost_net can be used on an ARM Cortex-A15 host/guest, even
though the guest doesn't support PCI & MSI-X?

Regards,
Barak

On Sun, Jan 12, 2014 at 11:15 PM, Peter Maydell peter.mayd...@linaro.org wrote:

 On 9 January 2014 12:25, Barak Wasserstrom wba...@gmail.com wrote:
  Hi,
  I would like to utilize virtio-net and vhost_net on an ARM Cortex-A15
  machine using qemu-system-arm & KVM.
  I have a few questions:
  1. Do I need to build qemu-system-arm myself, or apt-get install it?
 When I
  apt-get install it I get "KVM not supported for this target. kvm
  accelerator does not exist. No accelerator found!".

 This sounds like either:
  (1) you're using too old a version of QEMU and need a newer one
  (2) you configured QEMU without KVM support

 Provided you have QEMU 1.6 or later it shouldn't matter whose
 version you're using.

  2. Do I need to execute qemu-system-arm directly or through virsh? Does
 it
  matter?

 I know nothing about virsh but I don't expect it matters. It's
 probably easier to get things working by running qemu-system-arm
 directly first, before you try to work out how to get virsh to start
 qemu with the correct arguments.

  3. Must I use a machine that supports a PCI controller or not? And if so,
  which machine supports it? I saw that 'virt' and 'vexpress' don't support
  it.

 No. For KVM to work you need to use an A15 guest CPU; there
 are no A15 boards in QEMU which have a PCI controller. So
 instead you have to use the vexpress-a15 or virt machine's
 virtio-mmio support. Note that generally the command line syntax
 for this is different from that used by x86: you need to create
 virtio-*-device devices, not virtio-* or virtio-*-pci devices, and you
 can't rely on shorthands like if=virtio. So for instance for a block
 device you need
   -drive if=none,file=root,id=foo -device virtio-blk-device,drive=foo
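 By analogy with that block-device line (this network form is my extrapolation,
 matching the command lines quoted elsewhere in this thread; all paths and the
 MAC address are placeholders), the virtio-mmio network device pairs a -netdev
 backend with a virtio-net-device frontend rather than virtio-net-pci:

```shell
# Hypothetical invocation sketch: kernel, dtb, image paths and the MAC
# address are placeholders for the real setup.
qemu-system-arm -enable-kvm -M vexpress-a15 -cpu cortex-a15 \
  -kernel zImage -dtb vexpress-v2p-ca15-tc1.dtb \
  -drive if=none,file=root.img,id=foo -device virtio-blk-device,drive=foo \
  -netdev tap,id=net0,script=/etc/qemu-ifup \
  -device virtio-net-device,netdev=net0,mac=52:54:00:12:34:01
```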

 thanks
 -- PMM



[Qemu-devel] Using virtio-net and vhost_net on an ARM machine using qemu-system-arm KVM

2014-01-09 Thread Barak Wasserstrom
Hi,
I would like to utilize virtio-net and vhost_net on an ARM Cortex-A15
machine using qemu-system-arm & KVM.
I have a few questions:
1. Do I need to build qemu-system-arm myself, or apt-get install it? When I
apt-get install it I get "KVM not supported for this target. kvm
accelerator does not exist. No accelerator found!".
2. Do I need to execute qemu-system-arm directly or through virsh? Does it
matter?
3. Must I use a machine that supports a PCI controller or not? And if so,
which machine supports it? I saw that 'virt' and 'vexpress' don't support
it.

Thanks in advance,
Barak