Hi,
As this uses the same commit ids as ONP 1.4, I assume this is related to your 
other message (subject: [ovs-discuss] OVS_AGENT_TYPE)?

The error below commonly indicates that the ovs-vswitchd process has crashed.
Are there any error messages in the vswitchd log? If you are using the 
local.conf from the other thread, this log will reside at /tmp/ovs-vswitchd.log.
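
For example, something along these lines (the log path is just the default from 
that local.conf) should show whether the process is still running and surface 
the last fatal messages:

    # is ovs-vswitchd still running?
    ps -ef | grep [o]vs-vswitchd

    # look for the last error/emergency level messages before the crash
    grep -iE "emer|err|assert|abort" /tmp/ovs-vswitchd.log | tail -n 20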

When you say “after I install the vm”, can I assume you are attempting to 
install ovs-dpdk with the networking-ovs-dpdk devstack plugin in a vm?
If so, that deployment is not actually supported or tested with the 
networking-ovs-dpdk devstack plugin or ONP 1.4.

One issue often encountered when trying to deploy ovs-dpdk in a vm is a failure 
to initialise the vm's “physical” interfaces.
As far as I recall, it was not possible to deploy the 
1e77bbe565bbf5ae7f4c47f481a4097d666d3d68 commit of
ovs with dpdk 2.0 in a vm with more than one vCPU if a single-queue virtio nic 
was used for the vm's physical interface.
The reason for this is that dpdk required 1 Tx queue per vCPU by default, and 
the virtio interface only has one Tx queue by default.
This limitation was resolved in a later commit to ovs which introduced a 
fallback to using spinlocks when fewer Tx queues than vCPUs were present.
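
As a quick way to confirm this inside the vm (eth0 below is just an example 
interface name), you can compare the number of Tx queues the virtio nic exposes 
with the number of vCPUs:

    nproc                                   # vCPUs visible to the guest
    ls -d /sys/class/net/eth0/queues/tx-*   # one entry per Tx queue
    ethtool -l eth0                         # channel counts, if the driver supports it

If only tx-0 is listed and the vm has more than one vCPU, you are hitting the 
limitation described above.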

As far as I am aware you can deploy ovs 2.4+ with dpdk 2.0 via the 
networking-ovs-dpdk devstack plugin in a vm, but it is not part of ci or 
standard tests, so your mileage will vary depending on which particular commit 
of ovs and dpdk you use.
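
If you do want to test a newer ovs, the same local.conf variables you are 
already setting can simply be pointed at a later branch or commit; the values 
below are only illustrative:

    # local.conf
    OVS_GIT_TAG=branch-2.4      # or any ovs commit that includes the spinlock fallback
    OVS_DPDK_GIT_TAG=v2.0.0     # dpdk release to build against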

Assuming the issue is related to initializing the nic and you need to use the 
1e77bbe565bbf5ae7f4c47f481a4097d666d3d68 commit of ovs with dpdk 2.0
in a vm, you can try one of the following workarounds:

- Replace the virtio interface with a pci passthrough of a physical nic.

- Use a para-virtual nic type that provides a number of Tx queues equal to or 
  greater than the number of vCPUs in the vm.

- Use kernel vhost multiqueue to allocate one Tx queue per vCPU (see 
  https://github.com/openvswitch/ovs/blob/master/INSTALL.DPDK.md#running-ovs-vswitchd-with-dpdk-backend-inside-a-vm); 
  a rough example is given after this list.
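
To give a rough idea of the multiqueue option when qemu is launched directly 
(the interface names and the queue count of 4 are only examples; 4 queues needs 
vectors=2*4+2=10):

    # tap backend with kernel vhost and 4 queues, virtio-net frontend with multiqueue enabled
    -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on,queues=4 \
    -device virtio-net-pci,netdev=net0,mq=on,vectors=10

Inside the guest you would then enable the extra channels before ovs is 
started, e.g. “ethtool -L eth0 combined 4”.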

Regards
Sean.


From: discuss [mailto:[email protected]] On Behalf Of MAO Ruoyu
Sent: Wednesday, December 23, 2015 1:07 AM
To: [email protected]
Subject: [ovs-discuss] ovs-dpdk crashed question

Hi,
I use devstack to install openstack with ovs-dpdk. And I use
OVS_GIT_TAG=${OVS_GIT_TAG:-1e77bbe565bbf5ae7f4c47f481a4097d666d3d68}
OVS_DPDK_GIT_TAG=${OVS_DPDK_GIT_TAG:-v2.0.0}

After I install a vm, when I put a file on the vm or cat syslog on the vm,
the “q-agt” service crashes.
The main log in q-agt.log is:
2015-12-23 08:47:45.287 DEBUG networking_ovs_dpdk.agent.ovs_dpdk_neutron_agent 
[req-27c410ea-8abd-4f42-84a5-61fd5540df4a None None] Agent rpc_loop - 
iteration:157 started rpc_loop 
/usr/lib/python2.7/site-packages/networking_ovs_dpdk/agent/ovs_dpdk_neutron_agent.py:1430
2015-12-23 08:47:45.288 DEBUG neutron.agent.linux.utils 
[req-27c410ea-8abd-4f42-84a5-61fd5540df4a None None] Running command (rootwrap 
daemon): ['ovs-ofctl', 'dump-flows', 'br-int', 'table=23'] 
execute_rootwrap_daemon /opt/stack/neutron/neutron/agent/linux/utils.py:100
2015-12-23 08:48:01.091 7344 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID 
is 30c0a0f9cfb34562aa09909ce5de8bc7. _add_unique_id 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:258
2015-12-23 08:48:31.091 7344 DEBUG oslo_messaging._drivers.amqp [-] UNIQUE_ID 
is 8c507649f9f14f6ebb70cfa585bd852a. _add_unique_id 
/usr/lib/python2.7/site-packages/oslo_messaging/_drivers/amqp.py:258
2015-12-23 08:48:34.592 ERROR neutron.agent.linux.utils 
[req-27c410ea-8abd-4f42-84a5-61fd5540df4a None None]
Command: ['ovs-ofctl', 'dump-flows', 'br-int', 'table=23']
Exit code: 1
Stdin:
Stdout:
Stderr: ovs-ofctl: br-int: failed to connect to socket (Connection reset by 
peer)
Can you give me any suggestions? Thanks.

Best Regards
Mao Ruoyu
+86 21 38434078

_______________________________________________
discuss mailing list
[email protected]
http://openvswitch.org/mailman/listinfo/discuss
