Public bug reported:

Seen here:

http://logs.openstack.org/46/623246/3/check/nova-multiattach/efa830b/logs/screen-n-cpu.txt.gz?#_Dec_12_00_01_13_179474

Dec 12 00:01:13.179474 ubuntu-xenial-inap-mtl01-0001136812 nova-compute[29399]: WARNING nova.virt.libvirt.driver [None req-f43f3c35-dc0b-4aa4-bfd0-4046045b315e tempest-TaggedBootDevicesTest-250467933 tempest-TaggedBootDevicesTest-250467933] [instance: 18e5987e-a535-4044-84aa-dd8f023cedcc] Timeout waiting for [('network-vif-plugged', u'472ab433-bc18-41b5-85ae-009611117b70'), ('network-vif-plugged', u'6b476c8d-35b5-4380-a4c0-9e691f568ff5'), ('network-vif-plugged', u'8c3289ef-07aa-4048-8df1-979c2d8af07b'), ('network-vif-plugged', u'5a8d8e13-3dba-4b64-a17d-67724ef420ce'), ('network-vif-plugged', u'727b1a5d-e704-4185-9765-97c8ffb4f559'), ('network-vif-plugged', u'45563a2d-cd6c-4fe3-b5a9-74699bea3737'), ('network-vif-plugged', u'bc284b29-921e-495c-ad53-6f5fe9df6cd8')] for instance with vm_state building and task_state spawning.: Timeout: 300 seconds

I've seen this in a couple of failures, and each time the only port of the seven for which we don't get the network-vif-plugged event is the first one, in this case 472ab433-bc18-41b5-85ae-009611117b70.
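For context, the WARNING above comes from nova blocking on a set of expected (event name, port id) pairs and giving up after a deadline. Here is a minimal sketch of that wait-for-external-events pattern; the class and method names (EventWaiter, notify, wait) are hypothetical illustrations, not nova's actual implementation.

```python
import threading
import time


class EventWaiter:
    """Minimal sketch (hypothetical, not nova's code) of waiting for a
    set of externally delivered (event_name, tag) pairs with a deadline."""

    def __init__(self, expected):
        self._cond = threading.Condition()
        self._pending = set(expected)

    def notify(self, name, tag):
        # Called when an external event (e.g. network-vif-plugged) arrives.
        with self._cond:
            self._pending.discard((name, tag))
            if not self._pending:
                self._cond.notify_all()

    def wait(self, timeout):
        deadline = time.monotonic() + timeout
        with self._cond:
            while self._pending:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    # Mirrors the WARNING above: report the events that never came.
                    raise TimeoutError(
                        'Timeout waiting for %s' % sorted(self._pending))
                self._cond.wait(remaining)
```

In the failure above, one pair never gets its notify() call, so the whole wait() times out even though the other six events arrived.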
The 472ab433-bc18-41b5-85ae-009611117b70 vif is plugged in nova-compute here:

Dec 11 23:56:11.855811 ubuntu-xenial-inap-mtl01-0001136812 nova-compute[29399]: INFO os_vif [None req-f43f3c35-dc0b-4aa4-bfd0-4046045b315e tempest-TaggedBootDevicesTest-250467933 tempest-TaggedBootDevicesTest-250467933] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:fb:dd:9f,bridge_name='br-int',has_traffic_filtering=True,id=472ab433-bc18-41b5-85ae-009611117b70,network=Network(4988ef3d-6a8f-477f-b0a2-298c33019494),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap472ab433-bc')

Interestingly, right before that in the neutron server logs I see a network-changed event for that port:

Dec 11 23:55:46.981194 ubuntu-xenial-inap-mtl01-0001136812 neutron-server[20899]: DEBUG neutron.notifiers.nova [-] Sending events: [{'tag': u'472ab433-bc18-41b5-85ae-009611117b70', 'name': 'network-changed', 'server_uuid': u'18e5987e-a535-4044-84aa-dd8f023cedcc'}] {{(pid=21001) send_events /opt/stack/new/neutron/neutron/notifiers/nova.py:245}}

It looks like that is due to the port binding, from the neutron agent logs:

Dec 11 23:55:46.383255 ubuntu-xenial-inap-mtl01-0001136812 neutron-openvswitch-agent[21879]: DEBUG neutron.agent.resource_cache [None req-8081683c-7287-4e57-936f-3376788aec97 service neutron] Resource Port 472ab433-bc18-41b5-85ae-009611117b70 updated (revision_number 1->2).
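The "(revision_number 1->2)" in that DEBUG line reflects revision-based cache updating: the agent keeps the newest copy of each resource and discards updates whose revision number does not advance. A minimal sketch of that idea, assuming a simple dict-backed cache (this is an illustration, not neutron's actual resource_cache code):

```python
class ResourceCache:
    """Minimal sketch (hypothetical, not neutron's resource_cache) of
    keeping only the newest revision of each resource."""

    def __init__(self):
        # resource_id -> (revision_number, fields)
        self._resources = {}

    def record_resource_update(self, resource_id, revision_number, fields):
        cached = self._resources.get(resource_id)
        if cached is not None and revision_number <= cached[0]:
            return False  # stale or duplicate update: ignore it
        # Applied; the real agent logs "updated (revision_number X->Y)" here.
        self._resources[resource_id] = (revision_number, fields)
        return True
```

The point is that an out-of-order or repeated update is silently dropped, so only the newest binding information survives in the agent's view of the port.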
Old fields: {'device_owner': u'', 'bindings': [PortBinding(host='',port_id=472ab433-bc18-41b5-85ae-009611117b70,profile={},status='ACTIVE',vif_details=None,vif_type='unbound',vnic_type='normal')]}
New fields: {'device_owner': u'compute:nova', 'bindings': [PortBinding(host='ubuntu-xenial-inap-mtl01-0001136812',port_id=472ab433-bc18-41b5-85ae-009611117b70,profile={},status='ACTIVE',vif_details=None,vif_type='unbound',vnic_type='normal')]} {{(pid=21879) record_resource_update /opt/stack/new/neutron/neutron/agent/resource_cache.py:186}}

Then in the neutron agent I see this:

Dec 11 23:56:12.724830 ubuntu-xenial-inap-mtl01-0001136812 neutron-openvswitch-agent[21879]: DEBUG neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None req-cc26f856-2521-411e-ae10-34ad96b7c665 None None] Processing port: 472ab433-bc18-41b5-85ae-009611117b70 {{(pid=21879) treat_devices_added_or_updated /opt/stack/new/neutron/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py:1605}}

Dec 11 23:56:12.725010 ubuntu-xenial-inap-mtl01-0001136812 neutron-openvswitch-agent[21879]: INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None req-cc26f856-2521-411e-ae10-34ad96b7c665 None None] Port 472ab433-bc18-41b5-85ae-009611117b70 was not found on the integration bridge and will therefore not be processed

Is that a problem which might cause the network-vif-plugged event not to be sent?
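The "was not found on the integration bridge" message suggests the agent resolves each updated port against the ports actually present on br-int and skips the rest; skipped ports are then reported down rather than active, so the server-side notification to nova is never triggered for them. A minimal sketch of that filtering step, assuming hypothetical names (this is not the real treat_devices_added_or_updated):

```python
def treat_devices(updated_ports, ports_on_bridge):
    """Minimal sketch (hypothetical) of the filtering suggested by the
    log messages above: ports not present on the integration bridge are
    skipped; the caller later marks the skipped set down instead of
    active, so no network-vif-plugged event is sent for them."""
    processed, skipped = set(), set()
    for port_id in updated_ports:
        if port_id in ports_on_bridge:
            processed.add(port_id)  # would be wired up and reported ACTIVE
        else:
            skipped.add(port_id)    # "was not found on the integration bridge"
    return processed, skipped
```

If the tap device is plugged by nova just after the agent takes its snapshot of the bridge, the port lands in the skipped set even though it exists by the time the skip is acted on, which would match the race seen here.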
And then I see this:

Dec 11 23:56:14.095895 ubuntu-xenial-inap-mtl01-0001136812 neutron-openvswitch-agent[21879]: INFO neutron.plugins.ml2.drivers.openvswitch.agent.ovs_neutron_agent [None req-cc26f856-2521-411e-ae10-34ad96b7c665 None None] Ports set([u'472ab433-bc18-41b5-85ae-009611117b70', u'bc284b29-921e-495c-ad53-6f5fe9df6cd8', u'45563a2d-cd6c-4fe3-b5a9-74699bea3737', u'6b476c8d-35b5-4380-a4c0-9e691f568ff5', u'8c3289ef-07aa-4048-8df1-979c2d8af07b', u'5a8d8e13-3dba-4b64-a17d-67724ef420ce', u'727b1a5d-e704-4185-9765-97c8ffb4f559']) skipped, changing status to down

http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22tempest-TaggedBootDevicesTest%5C%22%20AND%20message%3A%5C%22Timeout%20waiting%20for%20%5B('network-vif-plugged'%5C%22%20AND%20tags%3A%5C%22screen-n-cpu.txt%5C%22&from=7d

4 hits in 7 days, check and gate, all failures.

** Affects: neutron
   Importance: Medium
       Status: Confirmed

** Tags: gate-failure
https://bugs.launchpad.net/bugs/1808171

Title:
  TaggedBootDevicesTest.test_tagged_boot_devices intermittently fails
  waiting for network-vif-plugged event which neutron does not send

Status in neutron:
  Confirmed

