[Qemu-devel] How to determine Q-id in VHOST_USER_SET_LOG_BASE in a Multi-Q setup ?

2016-03-21 Thread shesha Sreenivasamurthy (shesha)
Hi All,
I'm implementing VM migration support for open-VPP, an open-source Vector 
Packet Processing (VPP) technology (https://wiki.fd.io/view/VPP) and a Linux 
Foundation project. In the course of this work I have hit an issue and need 
some clarification.

In QEMU's vhost-user implementation, each queue is treated as a vhost-net 
device, and during migration vhost_user_set_log_base is invoked per device 
(queue). However, the message carries no queue index. How should the slave 
determine which queue the master is referring to?

For example, if I have configured my guest with 4 queues, 
VHOST_USER_SET_LOG_BASE is invoked 4 times with different shared-memory fds 
(SHMFDs). How do I map each SHMFD to a queue ID?
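
For context, this is roughly the receive path I have on the slave side: a 
minimal sketch with placeholder names and layouts (not actual VPP or QEMU 
code), under the assumption that the fd arrives as SCM_RIGHTS ancillary data 
and that a single per-connection log mapping would do:

/* Minimal sketch with placeholder names -- not actual VPP or QEMU code. */
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/uio.h>
#include <unistd.h>

struct log_region {
    void    *base;
    uint64_t size;
};

/* Receive one vhost-user message plus (optionally) one fd from the socket. */
static ssize_t recv_msg_with_fd(int sock, void *buf, size_t len, int *fd_out)
{
    char cmsgbuf[CMSG_SPACE(sizeof(int))];
    struct iovec iov = { .iov_base = buf, .iov_len = len };
    struct msghdr msg = {
        .msg_iov = &iov, .msg_iovlen = 1,
        .msg_control = cmsgbuf, .msg_controllen = sizeof(cmsgbuf),
    };
    struct cmsghdr *c;
    ssize_t n;

    *fd_out = -1;
    n = recvmsg(sock, &msg, 0);
    if (n < 0) {
        return -1;
    }
    c = CMSG_FIRSTHDR(&msg);
    if (c && c->cmsg_level == SOL_SOCKET && c->cmsg_type == SCM_RIGHTS) {
        memcpy(fd_out, CMSG_DATA(c), sizeof(int));
    }
    return n;
}

/* Map (or remap) the dirty-log shared memory once per connection; a later
 * SET_LOG_BASE simply replaces the current mapping in this sketch.
 * Assumes mmap_offset is page aligned. */
static int handle_set_log_base(struct log_region *log, int shmfd,
                               uint64_t mmap_size, uint64_t mmap_offset)
{
    void *p = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
                   MAP_SHARED, shmfd, (off_t)mmap_offset);
    close(shmfd);                       /* fd is not needed after mmap */
    if (p == MAP_FAILED) {
        return -1;
    }
    if (log->base) {
        munmap(log->base, log->size);   /* drop the previous mapping */
    }
    log->base = p;
    log->size = mmap_size;
    return 0;
}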

--
- Thanks
char * (*shesha) (uint64_t cache, uint8_t F00D)
{ return 0xC0DE; }


Re: [Qemu-devel] [PATCH] vhost-user: Slave crashes as Master unmaps vrings during guest reboot

2016-01-18 Thread shesha Sreenivasamurthy (shesha)
Got it. Thanks, I missed that line while reading the spec. Is 
docs/specs/vhost-user.txt the official spec?

--
- Thanks
char * (*shesha) (uint64_t cache, uint8_t F00D)
{ return 0xC0DE; }

From: "Michael S. Tsirkin" mailto:m...@redhat.com>>
Date: Sunday, January 17, 2016 at 3:23 AM
To: Cisco Employee mailto:she...@cisco.com>>
Cc: "qemu-devel@nongnu.org<mailto:qemu-devel@nongnu.org>" 
mailto:qemu-devel@nongnu.org>>
Subject: Re: [PATCH] vhost-user: Slave crashes as Master unmaps vrings during 
guest reboot

On Fri, Jan 15, 2016 at 12:12:43PM -0800, Shesha Sreenivasamurthy wrote:
Problem:

If a guest has vhost-user enabled, then on reboot vhost_virtqueue_stop
is invoked. This unmaps the vring memory mappings. However, it gives no
indication of this to the underlying DPDK slave application. Therefore,
a poll-mode DPDK driver that tries to read the ring to check for packets
segfaults.

The spec currently says:
Client must start ring upon receiving a kick (that is, detecting that file
descriptor is readable) on the descriptor specified by
VHOST_USER_SET_VRING_KICK, and stop ring upon receiving
VHOST_USER_GET_VRING_BASE.

Why isn't this sufficient?
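
For illustration, a rough sketch of the slave-side handling that spec text 
implies (placeholder names and message ids, not code mandated by the spec):

#include <stdbool.h>
#include <stdint.h>

enum { VU_GET_VRING_BASE = 11, VU_SET_VRING_KICK = 12 };  /* placeholder ids */

struct vring_state {
    volatile bool enabled;     /* checked by the poll worker before each pass */
    uint16_t      last_avail;  /* reported back in the GET_VRING_BASE reply   */
};

static void handle_request(struct vring_state *vq, int request)
{
    switch (request) {
    case VU_SET_VRING_KICK:
        vq->enabled = true;    /* the ring may be processed from now on */
        break;
    case VU_GET_VRING_BASE:
        vq->enabled = false;   /* stop processing the ring ... */
        /* ... wait for the worker to quiesce (omitted), then send the reply
         * with last_avail; only after that may the master unmap the vrings. */
        break;
    }
}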

Solution:
--
The VHOST_USER_RESET_OWNER message is issued by QEMU so that the DPDK slave
application is informed that the mappings will soon be gone and can take
the necessary steps.
Shesha Sreenivasamurthy (1):
   vhost-user: Slave crashes as Master unmaps vrings during guest reboot
  hw/virtio/vhost.c | 5 +++++
  1 file changed, 5 insertions(+)
--
1.9.5 (Apple Git-50.3)



[Qemu-devel] [PATCH] vhost-user: Slave crashes as Master unmaps vrings during guest reboot

2016-01-15 Thread Shesha Sreenivasamurthy
Send VHOST_USER_RESET_OWNER when the device is stopped.

Signed-off-by: Shesha Sreenivasamurthy 
---
 hw/virtio/vhost.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/hw/virtio/vhost.c b/hw/virtio/vhost.c
index de29968..808184f 100644
--- a/hw/virtio/vhost.c
+++ b/hw/virtio/vhost.c
@@ -1256,6 +1256,11 @@ void vhost_dev_stop(struct vhost_dev *hdev, VirtIODevice *vdev)
                              hdev->vq_index + i);
     }
 
+    if (hdev->vhost_ops->vhost_reset_device(hdev) < 0) {
+        fprintf(stderr, "vhost reset device %s failed\n", vdev->name);
+        fflush(stderr);
+    }
+
     vhost_log_put(hdev, true);
     hdev->started = false;
     hdev->log = NULL;
-- 
1.9.5 (Apple Git-50.3)




[Qemu-devel] [PATCH] vhost-user: Slave crashes as Master unmaps vrings during guest reboot

2016-01-15 Thread Shesha Sreenivasamurthy
Problem:

If a guest has vhost-user enabled, then on reboot vhost_virtqueue_stop
is invoked. This unmaps the vring memory mappings. However, it gives no
indication of this to the underlying DPDK slave application. Therefore,
a poll-mode DPDK driver that tries to read the ring to check for packets
segfaults.

Solution:
--
The VHOST_USER_RESET_OWNER message is issued by QEMU so that the DPDK slave
application is informed that the mappings will soon be gone and can take
the necessary steps.
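
For illustration, a rough sketch of what a slave could do when it sees 
VHOST_USER_RESET_OWNER (placeholder names, not the DPDK librte_vhost API):

#include <stdbool.h>
#include <stddef.h>

struct slave_vq {
    volatile bool  ready;                 /* poll loop checks this each pass */
    void          *desc, *avail, *used;   /* mapped vring pointers           */
};

static void handle_reset_owner(struct slave_vq *vqs, unsigned nvqs)
{
    unsigned i;

    for (i = 0; i < nvqs; i++) {
        vqs[i].ready = false;             /* stop the pollers first ... */
    }
    /* ... wait for in-flight polls to drain (omitted), then forget the
     * guest mappings so nothing dereferences them after QEMU unmaps. */
    for (i = 0; i < nvqs; i++) {
        vqs[i].desc = vqs[i].avail = vqs[i].used = NULL;
    }
}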

Shesha Sreenivasamurthy (1):
  vhost-user: Slave crashes as Master unmaps vrings during guest reboot

 hw/virtio/vhost.c | 5 +++++
 1 file changed, 5 insertions(+)

-- 
1.9.5 (Apple Git-50.3)




[Qemu-devel] DPDK application using vhost-user segfaults when guest is rebooted/shutdown

2016-01-14 Thread Shesha Sreenivasamurthy
If a guest has vhost-user enabled, then on reboot vhost_virtqueue_stop is
invoked. This unmaps the vring memory. However, it gives no indication of
this to the underlying DPDK application. Therefore, a poll-mode DPDK driver
that tries to read the ring to check for packets segfaults.

We do have the VHOST_USER_RESET_OWNER API, and it can be called by
vhost_virtqueue_stop so that the DPDK application can note that the mappings
are gone. Is that the right way?

Qemu Version: qemu-2.5.0
DPDK Version: 2.2

Thanks,
Shesha.


[Qemu-devel] Memcpy behavior inside a VM

2015-10-14 Thread Shesha Sreenivasamurthy
Hi,
  I'm profiling memcpy and seeing behavior that is strange (to me at least),
and wanted to see if someone has an idea of what may be happening.

My set up is as follows:

I have an Ubuntu 12.04 Linux host running the 3.2.0-23 kernel. It has four
10-core CPUs with two hyper-threads per core and 128 GB of RAM. I have
instantiated a VM, running the same OS as the host, with KVM enabled. The VM
has 16 vCPUs and 6 GB of guest memory. The QEMU (qemu-2.3.0-rc3) process is
bound to host CPUs 1-16 (inclusive) using taskset. (The idea is that each LWP
QEMU spawns per vCPU runs on a separate hyper-thread.)

taskset -pc 1-16 $qemupid


An application in this VM mallocs two 1 GB chunks to be used as source and
destination (virtual addresses aligned to a 256-byte boundary). This
application spins off *n* threads, each performing memcpy in parallel. Each
thread is in turn bound to a vCPU using *pthread_setaffinity_np()* inside the
guest application, in round-robin fashion. The size of each memcpy is 32 MB.
I experimented with 4 to 32 threads in increments of 4. Each thread works on
a different slice, where the slice size is 32 MB.

src = bufaligned1 + (slice_sz * j);

dst = bufaligned2 + (slice_sz * j);
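
For completeness, here is a self-contained sketch of the test; the thread
count, slice size, 256-byte alignment and round-robin affinity follow the
description above, while the names and the omitted per-thread timing are
placeholders:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdlib.h>
#include <string.h>

#define NTHREADS 16                       /* varied 4..32 in the experiment */
#define NVCPUS   16                       /* vCPUs available in the guest   */
#define SLICE_SZ (32UL * 1024 * 1024)     /* 32 MB copied per thread        */
#define BUF_SZ   (1UL << 30)              /* two 1 GB buffers               */

static char *src_buf, *dst_buf;

static void *worker(void *arg)
{
    long j = (long)arg;

    /* Pin this thread to a vCPU, round-robin, as in the guest application. */
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(j % NVCPUS, &set);
    pthread_setaffinity_np(pthread_self(), sizeof(set), &set);

    /* Each thread copies its own 32 MB slice (per-thread timing omitted). */
    memcpy(dst_buf + SLICE_SZ * j, src_buf + SLICE_SZ * j, SLICE_SZ);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    long j;

    /* Two 1 GB buffers at a 256-byte boundary, faulted in up front. */
    if (posix_memalign((void **)&src_buf, 256, BUF_SZ) ||
        posix_memalign((void **)&dst_buf, 256, BUF_SZ)) {
        return 1;
    }
    memset(src_buf, 1, BUF_SZ);
    memset(dst_buf, 0, BUF_SZ);

    for (j = 0; j < NTHREADS; j++) {
        pthread_create(&tid[j], NULL, worker, (void *)j);
    }
    for (j = 0; j < NTHREADS; j++) {
        pthread_join(tid[j], NULL);
    }
    return 0;
}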


I noticed that as the number of threads increases, the time taken to perform
a 32 MB memcpy increases too. The times, measured in milliseconds, are given
below (threads - transfer time):

04 - 52
08 - 79
12 - 148
16 - 180
20 - 223
24 - 270
28 - 302
32 - 354

I was expecting the transfer time to stay approximately the same up to
16 threads, as I have 16 vCPUs bound to 16 hyper-threads. Beyond 16 threads
the transfer time should increase, because multiple threads are being
scheduled on the same vCPU and there is resource contention. But why is the
transfer time increasing between 4 and 16 threads too? It looks like there is
some contention at the host level. Any ideas what that could be, so that I
can focus on and profile that component?

Thanks
Shesha


[Qemu-devel] Multiple Nics on a VLAN

2009-11-11 Thread Shesha Sreenivasamurthy
Hi All,
I'm using the following command to have two nics in multicast on the
same vlan. I see a storm of ARP requests. Does any one have any
suggestions?

qemu.bin.kvm84 -hda /live_disks/clone-disk.img -snapshot \
  -serial telnet:SERVER:5,nowait,server \
  -monitor tcp:SERVER:51000,server,nowait,nodelay \
  -p 61000 -m 768m -smp 1 -vnc SERVER:10 \
  -net nic,model=e1000,vlan=0,macaddr=56:48:AA:BB:CC:DD \
  -net tap,vlan=0,script=netscripts/net0-ifup \
  -net nic,model=e1000,vlan=1,macaddr=56:48:AA:BB:CC:EE \
  -net socket,vlan=1,mcast=230.0.0.1:3001 \
  -net nic,model=e1000,vlan=1,macaddr=56:48:AA:BB:CC:FF \
  -net socket,vlan=1,mcast=230.0.0.1:3001 \
  --uuid cc6145a8-cdae-11de-ac18-003048d4fd3e

However, if I launch two QEMU instances with one NIC each in multicast,
where eth0 in both QEMUs is connected to vlan 1, then I can ping
1.1.1.1 -> 1.1.1.2 and vice versa.

I'm running CentOS inside the VM.

Thanks,
Shesha