I tried to reproduce this. To avoid interfering with another setup I'm currently
working on, I had just one DPDK interface attached in the host and just
one vhost-user port into the guest.
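For reference, the host side was set up roughly like this (the bridge and
port names are just what I happened to use, nothing special):

# host: netdev bridge with one physical DPDK port and one vhost-user port
ovs-vsctl add-br ovsbr -- set bridge ovsbr datapath_type=netdev
ovs-vsctl add-port ovsbr dpdk0 -- set Interface dpdk0 type=dpdk
ovs-vsctl add-port ovsbr vhost-user-1 -- set Interface vhost-user-1 type=dpdkvhostuser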

The device came up with 1 of 4 queues. I installed DPDK in the guest as well
and was able to initialize it there with one queue without hitting the bug.
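In case it helps to compare: the quick check I did in the guest was along
these lines (the PCI address and core mask are from my setup, and the bind
tool name varies between DPDK versions - dpdk_nic_bind.py in 2.2,
dpdk-devbind.py in later releases):

# guest: bind the virtio device to a DPDK-capable driver, then poke it with testpmd
modprobe uio_pci_generic
dpdk_nic_bind.py --bind=uio_pci_generic 00:04.0
testpmd -c 0x3 -n 4 -- --rxq=1 --txq=1 -i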

I followed your test case: disabling openvswitch-switch, rebooting, setting
multiple queues via ethtool on the guest device, and re-enabling
openvswitch-switch (roughly the steps sketched below). It worked just fine
in my (slightly different) environment.
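For the record, the sequence was about this (ens3 is just an example name,
use whatever your guest virtio device is called):

systemctl disable openvswitch-switch   # keep OVS from starting on boot
reboot
ethtool -L ens3 combined 4             # raise the queue count on the guest virtio dev
systemctl enable openvswitch-switch
systemctl start openvswitch-switch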
On the host I see this in the journal when I start the multiqueue-enabled
openvswitch-dpdk in the guest:
http://paste.ubuntu.com/16167772/
Does that look anything like what you see? Either way, it is probably worth
attaching when you report at the upstream mailing lists.


Side notes:
Other than the system config I found a few small differences.
I don't expect them to matter, but they might be worth a test on your side.
My multiqueue XML usually doesn't set the vhost driver name explicitly:
yours:
 <driver name='vhost' queues='4'/>
mine:
 <driver queues='4'/>
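For context, the full interface stanza I use looks about like this (the
socket path is from my setup, adapt it to yours):

<interface type='vhostuser'>
  <source type='unix' path='/var/run/openvswitch/vhost-user-1' mode='client'/>
  <model type='virtio'/>
  <driver queues='4'/>
</interface>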


Also, I never needed to bring up the bridge device itself:
ip link set dev ovsbr up

You didn't set "ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=4"
in the guest - was that intentional?
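That is, something along these lines inside the guest before starting the
guest's OVS:

ovs-vsctl set Open_vSwitch . other_config:n-dpdk-rxqs=4
systemctl restart openvswitch-switch   # pick up the new rxq count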
