[Yahoo-eng-team] [Bug 1931583] Re: Wrong status of trunk sub-port after setting binding_profile

2021-06-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/795334
Committed: 
https://opendev.org/openstack/neutron/commit/6ada9124143f42311951a75b5d586bbab4451ce6
Submitter: "Zuul (22348)"
Branch: master

commit 6ada9124143f42311951a75b5d586bbab4451ce6
Author: Kamil Sambor 
Date:   Tue Jun 8 14:56:01 2021 +0200

Set trunk sub-port when bind profile is created

Closes-Bug: #1931583
Change-Id: Ief14ef053023a088716fa49e13d832b7e8faef31


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1931583

Title:
  Wrong status of trunk sub-port after setting binding_profile

Status in neutron:
  Fix Released

Bug description:
  When a sub-port was created (with OVN enabled) and the event was
  processed without a binding profile, the sub-port stayed in DOWN status
  forever.
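
  For illustration only, a minimal sketch of the behaviour the fix
  describes (the handler name here is made up, and this is not the
  actual patch): once the binding profile carries the trunk parent
  information, the sub-port status can be driven out of DOWN:

    from neutron_lib import constants as n_const

    def handle_subport_binding_update(core_plugin, context, port):
        # A trunk sub-port gets parent_name/tag in its binding profile
        # once it is bound; at that point it should not stay DOWN.
        profile = port.get('binding:profile') or {}
        if 'parent_name' in profile:
            core_plugin.update_port_status(
                context, port['id'], n_const.PORT_STATUS_ACTIVE)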

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1931583/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1911132] Re: OVN mech driver - can't find Logical_Router errors

2021-06-11 Thread Terry Wilson
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1911132

Title:
  OVN mech driver - can't find Logical_Router errors

Status in neutron:
  Fix Released

Bug description:
  I saw errors like the following in the CI job's logs.

  Traceback:

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command [None req-
  c0d71b7e-d2a9-4528-9e1a-22437c79fbf1 admin admin] Error executing
  command: ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find
  Logical_Router with name=neutron-99840546-2921-4f59-a540-aee4e964b3f2

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command Traceback (most
  recent call last):

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command   File
  "/usr/local/lib/python3.8/dist-
  packages/ovsdbapp/backend/ovs_idl/command.py", line 39, in execute

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command
  self.run_idl(None)

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command   File
  "/usr/local/lib/python3.8/dist-
  packages/ovsdbapp/backend/ovs_idl/command.py", line 328, in run_idl

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command self.result
  = self.api.lookup(self.table, self.record)

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command   File
  "/usr/local/lib/python3.8/dist-
  packages/ovsdbapp/backend/ovs_idl/__init__.py", line 177, in lookup

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command return
  self._lookup(table, record)

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command   File
  "/usr/local/lib/python3.8/dist-
  packages/ovsdbapp/backend/ovs_idl/__init__.py", line 224, in _lookup

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command row =
  idlutils.row_by_value(self, rl.table, rl.column, record)

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command   File
  "/usr/local/lib/python3.8/dist-
  packages/ovsdbapp/backend/ovs_idl/idlutils.py", line 114, in
  row_by_value

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command raise
  RowNotFound(table=table, col=column, match=match)

  Jan 07 15:32:40.720920 ubuntu-focal-ovh-bhs1-0022432336 neutron-
  server[68561]: ERROR ovsdbapp.backend.ovs_idl.command
  ovsdbapp.backend.ovs_idl.idlutils.RowNotFound: Cannot find
  Logical_Router with name=neutron-99840546-2921-4f59-a540-aee4e964b3f2

  
  Logs: 
https://zuul.opendev.org/t/openstack/build/4b30c213380c4bbc8b910047b1c26797/logs
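
  For context, RowNotFound is what ovsdbapp's lookup() raises when the
  named row does not exist; a caller that can race with router deletion
  may treat it as "already gone". A minimal sketch (nb_api is a
  hypothetical handle to the NB API, not the driver's actual attribute):

    from ovsdbapp.backend.ovs_idl import idlutils

    def get_logical_router(nb_api, router_id):
        # lookup() raises RowNotFound when no row matches the name.
        try:
            return nb_api.lookup('Logical_Router', 'neutron-' + router_id)
        except idlutils.RowNotFound:
            # The router was removed concurrently; let the caller treat
            # the operation as a no-op instead of logging a traceback.
            return None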

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1911132/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1931735] Re: node failed to deploy because an ephemeral network device was not found

2021-06-11 Thread Christian Grabowski
** Also affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1931735

Title:
  node failed to deploy because an ephemeral network device was not
  found

Status in cloud-init:
  New
Status in MAAS:
  New

Bug description:
  Hi,

  Using MAAS snap 2.8.6-8602-g.07cdffcaa.

  I just had a node fail to deploy because a network device that was
  present during commissioning wasn't present anymore, making cloud-init
  sad. To be precise, the node deployed properly, rebooted, and during
  the post-deploy boot, cloud-init got sad with:

  RuntimeError: Not all expected physical devices present:
  {'be:65:46:cb:58:b7'}

  (full stacktrace at https://pastebin.canonical.com/p/9Ycxwk5rRy/)

  I was indeed able to find the network device with MAC address
  'be:65:46:cb:58:b7', and it's an ephemeral NIC that gets created when
  someone logs in the HTML5 console (this is a Gigabyte server by the
  way). So someone was probably logged on the HTML5 console when the
  node was commissioned.

  I deleted this ephemeral device from the node in MAAS, and was then
  able to deploy it properly.

  These ephemeral NICs appear to have random MAC addresses. I was logged
  on the HTML5 console during the boot logged above, and you can see
  there's a device named "enx5a099ca01d4b" with MAC address
  "5a:09:9c:a0:1d:4b" (which doesn't match a known OUI).

  This is actually a cdc_ether device:
  $ dmesg|grep cdc_ether
  [   29.867170] cdc_ether 1-1.3:2.0 usb0: register 'cdc_ether' at 
usb-:06:00.3-1.3, CDC Ethernet Device, 5a:09:9c:a0:1d:4b
  [   29.867296] usbcore: registered new interface driver cdc_ether
  [   29.958137] cdc_ether 1-1.3:2.0 enx5a099ca01d4b: renamed from usb0
  [  205.908811] cdc_ether 1-1.3:2.0 enx5a099ca01d4b: unregister 'cdc_ether' 
usb-:06:00.3-1.3, CDC Ethernet Device

  (the last time is very probably when I logged off the HTML5 console,
  which removes the device).

  So I think:
  - MAAS should ignore these devices by default
  - cloud-init shouldn't die when a cdc_ether device is missing (a
    filtering sketch follows below).
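
  As an illustration of that second point, a minimal sketch (assuming the
  usual sysfs layout; this is not cloud-init's actual code) that skips
  NICs whose kernel driver is cdc_ether when collecting the "expected
  physical devices" MAC set:

    import os

    IGNORED_DRIVERS = {'cdc_ether'}  # transient BMC/KVM USB NICs

    def physical_macs(sysfs_net='/sys/class/net'):
        # Collect MACs of interfaces whose driver is not ignored; the
        # driver name comes from the sysfs device/driver symlink.
        macs = set()
        for dev in os.listdir(sysfs_net):
            driver_link = os.path.join(sysfs_net, dev, 'device', 'driver')
            driver = (os.path.basename(os.path.realpath(driver_link))
                      if os.path.islink(driver_link) else None)
            if driver in IGNORED_DRIVERS:
                continue
            with open(os.path.join(sysfs_net, dev, 'address')) as f:
                macs.add(f.read().strip())
        return macs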

  Thanks

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1931735/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1925451] Re: [stable/rocky] grenade job is broken

2021-06-11 Thread Bernard Cafarelli
** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
 Assignee: (unassigned) => Bernard Cafarelli (bcafarel)

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1925451

Title:
  [stable/rocky] grenade job is broken

Status in neutron:
  Fix Released

Bug description:
  I just saw errors like the following in the neutron-grenade job on the
  stable/rocky branch:

  2021-04-22 08:11:54.188 | Complete output from command python setup.py 
egg_info:
  2021-04-22 08:11:54.188 | Couldn't find index page for 'pbr' (maybe 
misspelled?)
  2021-04-22 08:11:54.188 | No local packages or download links found for 
pbr>=1.8
  2021-04-22 08:11:54.188 | Traceback (most recent call last):
  2021-04-22 08:11:54.188 |   File "", line 1, in 
  2021-04-22 08:11:54.188 |   File 
"/tmp/pip-build-9jeigq7n/devstack-tools/setup.py", line 29, in 
  2021-04-22 08:11:54.188 | pbr=True)
  2021-04-22 08:11:54.188 |   File "/usr/lib/python3.5/distutils/core.py", 
line 108, in setup
  2021-04-22 08:11:54.188 | _setup_distribution = dist = klass(attrs)
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/setuptools/dist.py", line 269, in __init__
  2021-04-22 08:11:54.188 | 
self.fetch_build_eggs(attrs['setup_requires'])
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/setuptools/dist.py", line 313, in 
fetch_build_eggs
  2021-04-22 08:11:54.188 | replace_conflicting=True,
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 826, in resolve
  2021-04-22 08:11:54.188 | dist = best[req.key] = env.best_match(req, 
ws, installer)
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1092, in 
best_match
  2021-04-22 08:11:54.188 | return self.obtain(req, installer)
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 1104, in obtain
  2021-04-22 08:11:54.188 | return installer(requirement)
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/setuptools/dist.py", line 380, in 
fetch_build_egg
  2021-04-22 08:11:54.188 | return cmd.easy_install(req)
  2021-04-22 08:11:54.188 |   File 
"/usr/lib/python3/dist-packages/setuptools/command/easy_install.py", line 657, 
in easy_install
  2021-04-22 08:11:54.188 | raise DistutilsError(msg)
  2021-04-22 08:11:54.188 | distutils.errors.DistutilsError: Could not find 
suitable distribution for Requirement.parse('pbr>=1.8')

  Failure in
  
https://447f476affa473a2-ba0bbef8fa5bd9d33ddbd8694210833c.ssl.cf5.rackcdn.com/777123/4/check
  /neutron-grenade/e900d19/logs/grenade.sh.txt

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1925451/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1916024] Re: HA router master instance in error state because qg-xx interface is down

2021-06-11 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1916024

Title:
  HA router master instance in error state because qg-xx interface is
  down

Status in neutron:
  Fix Released

Bug description:
  BZ reference: https://bugzilla.redhat.com/show_bug.cgi?id=1929829

  Sometimes a router is created with all of its instances in standby
  mode because the qg-xx interface is in the down state and there is no
  connectivity:

  (overcloud) [stack@undercloud-0 ~]$ neutron l3-agent-list-hosting-router router1
  neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
  +--------------------------------------+---------------------------+----------------+-------+----------+
  | id                                   | host                      | admin_state_up | alive | ha_state |
  +--------------------------------------+---------------------------+----------------+-------+----------+
  | 3b93ec23-48fa-4847-bbb2-f8903e9865f9 | networker-1.redhat.local  | True           | :-)   | standby  |
  | 41b8d1a8-4695-445a-916a-d12db523eb91 | controller-0.redhat.local | True           | :-)   | standby  |
  | 4533bd88-d2d1-4320-9e39-6fcb2a5cc236 | networker-0.redhat.local  | True           | :-)   | standby  |
  +--------------------------------------+---------------------------+----------------+-------+----------+
  (overcloud) [stack@undercloud-0 ~]$

  
  Steps to reproduce:
  1. for i in $(seq 10); do ./create.sh $i; done
  3. Check FIP connectivity to detect the error
  4. for i in $(seq 10); do ./delete.sh $i; done

  Scripts: http://paste.openstack.org/show/802777/

  Seems to be a race condition between the L3 agent and keepalived
  configuring the qg-xxx interface:
  - /var/log/messages: http://paste.openstack.org/show/802778/
  - L3 agent logs: http://paste.openstack.org/show/802779/

  When keepalived is setting the qg-xxx interface IP addresses, the
  interface disappears from udev and reappears again (I don't know why
  yet). The log in journalctl looks the same as when a new interface is
  created.

  Since [1], the L3 agent controls the GW interface status (up or down).
  If the L3 agent does not link up the interface, the router namespace
  won't be able to send/receive any traffic.

  [1]https://review.opendev.org/q/I8dca2c1a2f8cb467cfb44420f0eea54ca0932b05
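
  For illustration, the kind of defensive check involved (a sketch using
  neutron's ip_lib helpers; the function name is made up and this is not
  the actual patch): after keepalived reconfigures the gateway device,
  re-assert that it exists and is administratively up:

    from neutron.agent.linux import ip_lib

    def ensure_gw_device_up(device_name, namespace):
        device = ip_lib.IPDevice(device_name, namespace=namespace)
        if not device.exists():
            # The device briefly disappears while being reconfigured;
            # the caller would retry on the next udev/agent event.
            return False
        # Bring the link up so the router namespace can pass traffic.
        device.link.set_up()
        return True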

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1916024/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1931716] [NEW] Detaching volume from the live domain: virDomainDetachDeviceFlags(): libvirt.libvirtError: operation failed: disk vdb not found

2021-06-11 Thread Balazs Gibizer
Public bug reported:

Description
===

The test_attach_attached_volume_to_same_server tests within the nova-next
job fail while detaching the volume:

File "/usr/local/lib/python3.8/dist-packages/libvirt.py", line 1515, in
detachDeviceFlags

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5] raise
libvirtError('virDomainDetachDeviceFlags() failed')

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5] libvirt.libvirtError: operation
failed: disk vdb not found
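
For reference, "disk vdb not found" from virDomainDetachDeviceFlags()
means the disk is no longer part of the live domain definition. A minimal
sketch (not Nova's actual retry logic, which lives in
nova/virt/libvirt/driver.py) of treating that case as already detached:

  import libvirt

  def detach_disk_live(domain, device_xml):
      try:
          domain.detachDeviceFlags(
              device_xml, libvirt.VIR_DOMAIN_AFFECT_LIVE)
      except libvirt.libvirtError as exc:
          # If libvirt reports the disk as not found, the guest has
          # already dropped it; treat the detach as complete.
          if 'not found' in str(exc):
              return
          raise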


Steps to reproduce
==

Only seen as part of the nova-next job at present.

Expected result
===

detach succeeds

Actual result
=

test fails as the volume state

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

master

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

libvirt

2. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

cinder volume

3. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

N/A

Logs & Configs
==

https://zuul.opendev.org/t/openstack/build/02e6a99bf1574c978c663eb434705cbb/log/controller/logs/screen-n-cpu.txt?severity=0#34811

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [None req-aceedc3b-f03b-
4bd5-8e8e-cfe66170a2e2 tempest-AttachVolumeNegativeTest-1273499975
tempest-AttachVolumeNegativeTest-1273499975-project] [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5] Failed to detach volume
2939aa92-0156-45d6-9689-ed48dcc8fd8a from /dev/vdb:
libvirt.libvirtError: operation failed: disk vdb not found

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5] Traceback (most recent call last):

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5]   File
"/opt/stack/nova/nova/virt/block_device.py", line 328, in driver_detach

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5]
virt_driver.detach_volume(context, connection_info, instance, mp,

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5]   File
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2555, in
detach_volume

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5] self._detach_with_retry(

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5]   File
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2308, in
_detach_with_retry

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5]
self._detach_from_live_with_retry(

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5]   File
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2364, in
_detach_from_live_with_retry

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5]
self._detach_from_live_and_wait_for_event(

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5]   File
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2426, in
_detach_from_live_and_wait_for_event

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5] self._detach_sync(

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5]   File
"/opt/stack/nova/nova/virt/libvirt/driver.py", line 2498, in
_detach_sync

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-
compute[108182]: ERROR nova.virt.block_device [instance:
e3d6fd2b-7357-4d19-bcb4-482d97d332e5] guest.detach_device(dev,
persistent=persistent, live=live)

Jun 11 13:30:47.071650 ubuntu-focal-inap-mtl01-0025074130 nova-

[Yahoo-eng-team] [Bug 1931710] [NEW] nova-lvm lvs return -11 and fails with Failed to get udev device handler for device

2021-06-11 Thread Lee Yarwood
Public bug reported:

Description
===

Tests within the nova-lvm job fail during cleanup with the following
trace visible in n-cpu:

https://797b12f7389a12861990-09e4be48fe62aca6e4b03d954e19defe.ssl.cf5.rackcdn.com/795992/3/check
/nova-lvm/99a7b1f/controller/logs/screen-n-cpu.txt

Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 nova-compute[106254]: 
Command: lvs --noheadings -o lv_name /dev/stack-volumes-default
Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 nova-compute[106254]: 
Exit code: -11
Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 nova-compute[106254]: 
Stdout: ''
Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 nova-compute[106254]: 
Stderr: '  WARNING: Failed to get udev device handler for device /dev/sda1.\n  
/dev/sda15: stat failed: No such file or directory\n  Path /dev/sda15 no longer 
valid for device(8,15)\n  /dev/sda15: stat failed: No such file or directory\n  
Path /dev/sda15 no longer valid for device(8,15)\n  Device open /dev/sda 8:0 
failed errno 2\n  Device open /dev/sda 8:0 failed errno 2\n  Device open 
/dev/sda1 8:1 failed errno 2\n  Device open /dev/sda1 8:1 failed errno 2\n  
WARNING: Scan ignoring device 8:0 with no paths.\n  WARNING: Scan ignoring 
device 8:1 with no paths.\n'

Bug #1901783 details something similar to this in Cinder, but as the
above is coming from native Nova ephemeral storage code with a different
return code, I'm going to treat this as a separate issue for now.
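
For what it's worth, a negative exit code here follows the subprocess
convention: the child was killed by a signal, and -11 is SIGSEGV, i.e.
the lvs binary itself crashed rather than returning an error. A small
sketch of that mapping (assumes lvs is installed; the VG path is the one
from the log above):

  import signal
  import subprocess

  proc = subprocess.run(['lvs', '--noheadings', '-o', 'lv_name',
                         '/dev/stack-volumes-default'])
  if proc.returncode < 0:
      # A negative returncode means the process was terminated by a signal.
      sig = signal.Signals(-proc.returncode)
      print('lvs was killed by %s (%d)' % (sig.name, -proc.returncode))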


Steps to reproduce
==

Only seen as part of the nova-lvm job at present.

Expected result
===

nova-lvm and the removal of instances succeeds.

Actual result
=

nova-lvm and the removal of instances fails.

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

master

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

libvirt

2. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

LVM (ephemeral)

3. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

N/A

Logs & Configs
==

As above.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Summary changed:

- lvs return -11 and fails with Failed to get udev device handler for device
+ nova-lvm lvs return -11 and fails with Failed to get udev device handler for 
device

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1931710

Title:
  nova-lvm lvs return -11 and fails with Failed to get udev device
  handler for device

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  Tests within the nova-lvm job fail during cleanup with the following
  trace visible in n-cpu:

  
https://797b12f7389a12861990-09e4be48fe62aca6e4b03d954e19defe.ssl.cf5.rackcdn.com/795992/3/check
  /nova-lvm/99a7b1f/controller/logs/screen-n-cpu.txt

  Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 
nova-compute[106254]: Command: lvs --noheadings -o lv_name 
/dev/stack-volumes-default
  Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 
nova-compute[106254]: Exit code: -11
  Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 
nova-compute[106254]: Stdout: ''
  Jun 11 13:04:38.733030 ubuntu-focal-inap-mtl01-0025074127 
nova-compute[106254]: Stderr: '  WARNING: Failed to get udev device handler for 
device /dev/sda1.\n  /dev/sda15: stat failed: No such file or directory\n  Path 
/dev/sda15 no longer valid for device(8,15)\n  /dev/sda15: stat failed: No such 
file or directory\n  Path /dev/sda15 no longer valid for device(8,15)\n  Device 
open /dev/sda 8:0 failed errno 2\n  Device open /dev/sda 8:0 failed errno 2\n  
Device open /dev/sda1 8:1 failed errno 2\n  Device open /dev/sda1 8:1 failed 
errno 2\n  WARNING: Scan ignoring device 8:0 with no paths.\n  WARNING: Scan 
ignoring device 8:1 with no paths.\n'

  Bug #1901783 details something similar to this in Cinder, but as the
  above is coming from native Nova ephemeral storage code with a
  different return code, I'm going to treat this as a separate issue for
  now.

  
  Steps to reproduce
  ==

  Only seen as part of the nova-lvm job at present.

  Expected result
  ===

  nova-lvm and the removal of instances succeeds.

  Actual result
  =

  nova-lvm and the removal of instances fails.

  Environment
  ===
  1. Exact version of OpenStack you are running. See the following
list for all releases: http://docs.openstack.org/releases/

  master

  2. Which hypervisor did you use?
 (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
 What's the version of 

[Yahoo-eng-team] [Bug 1931707] [NEW] "NeutronAdminCredentialConfigurationInvalid: Networking client is experiencing an unauthorized exception" error while instantiating instance in Train RDO

2021-06-11 Thread startlearningnew
Public bug reported:

Hi, the following ticket has been reported in
https://bugzilla.redhat.com/show_bug.cgi?id=1964893
Based on their answer, I verified the neutron command manually and it works fine.

The issue is occurring randomly and causing the instance creation use
case to fail.

Recently there have been 2 failed cases:

1) it failed once out of 72 runs

2021-06-09 10:00:49.391 3194 INFO heat.engine.resource 
[req-6d38438b-be89-4619-8517-9edc49bc7a40 - instTermProject-1 - default 
default] CREATE: Net "vm_network_3" [5776a857-4316-47f9-b3fa-3a1fbff39132] 
Stack "vapp-Service-060821210256-1" [c7fa51bb-2012-4f5c-aa6f-1720dd47a5ed]
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource Traceback (most recent 
call last):
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 920, in 
_action_recorder
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource yield
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 1033, in 
_do_action
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource yield 
self.action_handler_task(action, args=handler_args)
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/heat/engine/scheduler.py", line 346, in 
wrapper
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource step = next(subtask)
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/heat/engine/resource.py", line 982, in 
action_handler_task
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource done = 
check(handler_data)
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/heat/engine/resources/openstack/neutron/net.py",
 line 208, in check_create_complete
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource attributes = 
self._show_resource()
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/heat/engine/resources/openstack/neutron/neutron.py",
 line 139, in _show_resource
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource res_info = 
client_method(*args)
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 822, in 
show_network
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource return 
self.get(self.network_path % (network), params=_params)
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 354, in 
get
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource headers=headers, 
params=params)
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 331, in 
retry_request
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource headers=headers, 
params=params)
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 294, in 
do_request
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource 
self._handle_fault_response(status_code, replybody, resp)
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 269, in 
_handle_fault_response
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource 
exception_handler_v20(status_code, error_body)
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource   File 
"/usr/lib/python2.7/site-packages/neutronclient/v2_0/client.py", line 93, in 
exception_handler_v20
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource 
request_ids=request_ids)
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource Unauthorized: 
401-{u'error': {u'message': u'The request you have made requires 
authentication.', u'code': 401, u'title': u'Unauthorized'}}
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource Neutron server returns 
request_ids: ['req-8eb8d67b-1b44-4caf-8c5a-37a442494b31']
2021-06-09 10:00:49.391 3194 ERROR heat.engine.resource
2021-06-09 10:00:49.429 3194 INFO heat.engine.stack 
[req-6d38438b-be89-4619-8517-9edc49bc7a40 - instTermProject-1 - default 
default] Stack CREATE FAILED (vapp-Service-060821210256-1): Resource CREATE 
failed: Unauthorized: resources.vm_network_3: 401-{u'error': {u'message': u'The 
request you have made requires authentication.', u'code': 401, u'title': 
u'Unauthorized'}}
Neutron server returns request_ids: ['req-8eb8d67b-1b44-4caf-8c5a-37a442494b31']


2) it failed once out of 72 runs

2021-06-10 10:01:27.888 13501 INFO nova.compute.manager 
[req-c531e9c8-49b7-4e16-a06c-813cf32067d6 289cd17405e846c390fbb62a92a8adb9 
53e47db492134bf4b58f3d2aadafb639 - default default] [instance: 
f0dc327b-1416-400f-9846-86b9ef0dfc71] Took 8.84 seconds to spawn the instance 
on the hypervisor.
2021-06-10 10:01:28.034 

[Yahoo-eng-team] [Bug 1931702] [NEW] test_live_block_migration_with_attached_volume fails with BUG: soft lockup - CPU#0 stuck for 22s! in the guestOS while detaching a volume

2021-06-11 Thread Lee Yarwood
Public bug reported:

Description
===

test_live_block_migration_with_attached_volume fails during cleanup while
detaching a volume from an instance that has, as the test name suggests,
been migrated. We don't have the complete console output for some reason,
but the part we have shows the following soft lockup:

https://933286ee423f4ed9028e-
1eceb8a6fb7f917522f65bda64a8589f.ssl.cf5.rackcdn.com/794766/2/check
/nova-grenade-multinode/a5ff180/

[   40.741525] watchdog: BUG: soft lockup - CPU#0 stuck for 22s! [run-parts:288]
[   40.745566] Modules linked in: ahci libahci ip_tables x_tables nls_utf8 
nls_iso8859_1 nls_ascii isofs hid_generic usbhid hid virtio_rng virtio_gpu 
drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm 
virtio_scsi virtio_net net_failover failover virtio_input virtio_blk 
qemu_fw_cfg 9pnet_virtio 9pnet pcnet32 8139cp mii ne2k_pci 8390 e1000
[   40.750740] CPU: 0 PID: 288 Comm: run-parts Not tainted 5.3.0-26-generic 
#28~18.04.1-Ubuntu
[   40.751458] Hardware name: OpenStack Foundation OpenStack Nova, BIOS 
1.13.0-1ubuntu1.1 04/01/2014
[   40.753365] RIP: 0010:__switch_to_asm+0x42/0x70
[   40.754190] Code: 48 8b 9e c8 08 00 00 65 48 89 1c 25 28 00 00 00 49 c7 c4 
10 00 00 00 e8 07 00 00 00 f3 90 0f ae e8 eb f9 e8 07 00 00 00 f3 90 <0f> ae e8 
eb f9 49 ff cc 75 e3 48 81 c4 00 01 00 00 41 5f 41 5e 41
[   40.755739] RSP: 0018:b6a9c027bdb8 EFLAGS: 0282 ORIG_RAX: 
ff13
[   40.756419] RAX: 0018 RBX: 97eec71e6000 RCX: 3c434e4753444bff
[   40.757057] RDX: 0001020304050608 RSI: 8080808080808080 RDI: 0fe0
[   40.757659] RBP: b6a9c027bde8 R08: fefefefefefefeff R09: 
[   40.758268] R10: 0fc8 R11: 40042000 R12: 7ffd9666df63
[   40.758954] R13:  R14: 0001 R15: 07ff
[   40.759654] FS:  7f55b7e936a0() GS:97eec760() 
knlGS:
[   40.760334] CS:  0010 DS:  ES:  CR0: 80050033
[   40.760830] CR2: 006ad340 CR3: 03cc8000 CR4: 06f0
[   40.761685] Call Trace:
[   40.762767]  ? __switch_to_asm+0x34/0x70
[   40.763183]  ? __switch_to_asm+0x40/0x70
[   40.763539]  ? __switch_to_asm+0x34/0x70
[   40.763895]  ? __switch_to_asm+0x40/0x70
[   40.764249]  ? __switch_to_asm+0x34/0x70
[   40.764597]  ? __switch_to_asm+0x40/0x70
[   40.764945]  ? __switch_to_asm+0x34/0x70
[   40.765311]  __switch_to_asm+0x40/0x70
[   40.765884]  ? __switch_to_asm+0x34/0x70
[   40.766239]  ? __switch_to_asm+0x40/0x70
[   40.766619]  ? __switch_to_asm+0x34/0x70
[   40.766972]  ? __switch_to_asm+0x40/0x70
[   40.767323]  ? __switch_to_asm+0x34/0x70
[   40.767677]  ? __switch_to_asm+0x40/0x70
[   40.768024]  ? __switch_to_asm+0x34/0x70
[   40.768375]  ? __switch_to_asm+0x40/0x70
[   40.768725]  ? __switch_to_asm+0x34/0x70
[   40.769516]  ? __switch_to+0x112/0x480
[   40.769864]  ? __switch_to_asm+0x40/0x70
[   40.770218]  ? __switch_to_asm+0x34/0x70
[   40.771035]  ? __schedule+0x2b0/0x670
[   40.771919]  ? schedule+0x33/0xa0
[   40.772741]  ? prepare_exit_to_usermode+0x98/0xa0
[   40.773398]  ? retint_user+0x8/0x8

I'm going to see if I can instrument the test a little more to dump the
console *after* the detach request so we get a better idea of what, if
anything, went wrong in the guestOS.

Steps to reproduce
==

nova-grenade-multinode and nova-live-migration have hit this thus
far.

Expected result
===

test_live_block_migration_with_attached_volume passes.

Actual result
=

test_live_block_migration_with_attached_volume fails.

Environment
===
1. Exact version of OpenStack you are running. See the following
  list for all releases: http://docs.openstack.org/releases/

   Master.

2. Which hypervisor did you use?
   (For example: Libvirt + KVM, Libvirt + XEN, Hyper-V, PowerKVM, ...)
   What's the version of that?

   libvirt + KVM

2. Which storage type did you use?
   (For example: Ceph, LVM, GPFS, ...)
   What's the version of that?

   N/A

3. Which networking type did you use?
   (For example: nova-network, Neutron with OpenVSwitch, ...)

   N/A

Logs & Configs
==

See above.

** Affects: nova
 Importance: Undecided
 Status: New


** Tags: gate-failure

** Summary changed:

- BUG: soft lockup - CPU#0 stuck for 22s! while detaching a volume
+ test_live_block_migration_with_attached_volume fails with BUG: soft lockup - 
CPU#0 stuck for 22s! in the guestOS while detaching a volume

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1931702

Title:
  test_live_block_migration_with_attached_volume fails with BUG: soft
  lockup - CPU#0 stuck for 22s! in the guestOS while detaching a volume

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  test_live_block_migration_with_attached_volume fails 

[Yahoo-eng-team] [Bug 1930367] Re: "TestNeutronServer" related tests failing frequently

2021-06-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/793899
Committed: 
https://opendev.org/openstack/neutron/commit/da760973be09a98a1fc07c519e74a11899cb4aa5
Submitter: "Zuul (22348)"
Branch: master

commit da760973be09a98a1fc07c519e74a11899cb4aa5
Author: Rodolfo Alonso Hernandez 
Date:   Tue Jun 1 07:44:33 2021 +

Use "multiprocessing.Queue" for "TestNeutronServer" related tests

Instead of using a file to log the processes' status and actions,
"TestNeutronServer" now uses a "multiprocessing.Queue", which is
safer than writing to a single file across multiple processes.

Change-Id: I6d04df180cd9b2d593bb99c8d22a60a3534f22a0
Closes-Bug: #1930367
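
To illustrate the approach named in the commit, a minimal, self-contained
sketch (not the neutron test code itself): worker processes report their
lifecycle events on a multiprocessing.Queue that the parent drains,
rather than all of them appending to one shared file:

  import multiprocessing

  def worker(queue, name):
      # Each spawned "server" process pushes its status events here
      # instead of writing them to a shared log file.
      queue.put((name, 'started'))
      queue.put((name, 'done'))

  if __name__ == '__main__':
      queue = multiprocessing.Queue()
      procs = [multiprocessing.Process(target=worker, args=(queue, 'w%d' % i))
               for i in range(3)]
      for p in procs:
          p.start()
      # Drain before joining so the children can flush their queue data.
      events = [queue.get() for _ in range(6)]
      for p in procs:
          p.join()
      print(events)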


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930367

Title:
  "TestNeutronServer" related tests failing frequently

Status in neutron:
  Fix Released

Bug description:
  "TestNeutronServer" and those inheriting from this parent class
  ("TestWsgiServer", "TestRPCServer", "TestPluginWorker") are failing
  frequently.

  Error:
  
https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_d0d/791365/7/gate
  /neutron-functional-with-uwsgi/d0d293b/testr_results.html

  Snippet: http://paste.openstack.org/show/806208/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930367/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1930876] Re: "get_reservations_for_resources" executes DB operations without opening a DB context

2021-06-11 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/794777
Committed: 
https://opendev.org/openstack/neutron/commit/96b2926671d66e7a1b32cdfec645aa9dd143aa48
Submitter: "Zuul (22348)"
Branch: master

commit 96b2926671d66e7a1b32cdfec645aa9dd143aa48
Author: Rodolfo Alonso Hernandez 
Date:   Fri Jun 4 11:18:53 2021 +

Add CONTEXT_READER to "get_reservations_for_resources"

The method executes DB operations outside a DB context and with an
inactive session.

Change-Id: Ifd1c7b99421768dfa9462237e2b1b14af0e68f41
Closes-Bug: #1930876


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1930876

Title:
  "get_reservations_for_resources" execute DB operations without opening
  a DB context

Status in neutron:
  Fix Released

Bug description:
  "get_reservations_for_resources" execute DB operations without opening
  a DB context. In this case, this method should be decorated with
  CONTEXT_READER.
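
  A minimal sketch of what the fix amounts to (decorator usage only; the
  actual reservation query is elided): the reader decorator opens a
  reader transaction around the call, so the session on the context is
  active while the method runs:

    from neutron_lib.db import api as db_api

    @db_api.CONTEXT_READER
    def get_reservations_for_resources(context, project_id, resources):
        # With the decorator, context.session is backed by an active
        # reader transaction; the real method issues its reservation
        # queries here instead of on an inactive session.
        assert context.session.is_active
        return []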

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1930876/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1931683] [NEW] Old root volume status is incorrect after rebuilding instance

2021-06-11 Thread Jeremy Liu
Public bug reported:

When rebuilding a volume-backed instance, the old root volume's status
changes to reserved. The reason is that Nova creates a new attachment
for each BDM when rebuilding the instance. For the old root volume, it
is unnecessary to create a new attachment.
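
For illustration only, a rough sketch of the idea (the helper and its
call signature are illustrative, not Nova's actual rebuild code, and the
Cinder wrapper's attachment_create is assumed to take the context, volume
id and instance uuid): skip creating a fresh attachment for the root BDM
so the old root volume keeps its existing attachment and never lands in
'reserved':

  def refresh_attachments_for_rebuild(volume_api, context, instance,
                                      bdms, root_device_name):
      for bdm in bdms:
          if bdm.is_volume and bdm.device_name == root_device_name:
              # Keep the existing attachment on the old root volume.
              continue
          attachment = volume_api.attachment_create(
              context, bdm.volume_id, instance.uuid)
          bdm.attachment_id = attachment['id']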

** Affects: nova
 Importance: Undecided
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1931683

Title:
  Old root volume status is incorrect after rebuilding instance

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  When rebuilding a volume-backed instance, the old root volume's status
  changes to reserved. The reason is that Nova creates a new attachment
  for each BDM when rebuilding the instance. For the old root volume, it
  is unnecessary to create a new attachment.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1931683/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1811352] Re: [RFE] Include neutron CLI floatingip port-forwarding support

2021-06-11 Thread LIU Yulong
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1811352

Title:
  [RFE] Include neutron CLI floatingip port-forwarding support

Status in neutron:
  Fix Released

Bug description:
  Floating IP port-forwarding has been supported by the neutron API
  since Rocky:

  https://developer.openstack.org/api-ref/network/v2/index.html?expanded
  =create-port-forwarding-detail#floating-ips-port-forwarding

  but the neutron client does not include this option yet.
  Also, the floatingip-update method is missing this option.

  It should include an option similar to this for floatingip
  *-create/*-update

  $ neutron floatingip-create <FLOATING_NETWORK> --port-forwarding
  protocol=tcp,internal_port_id=<port-uuid>,internal_port=<port>,external_port=<port>

  You should be able to repeat --port-forwarding several times

  Also, for floatingip-update, an extra option:
  --port-forwarding clean

  To remove the current port-forwarding rules list.

  * Version: OpenStack Rocky
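
  Until the client grows these options, the API referenced above can be
  exercised directly. A minimal sketch against the documented endpoint
  (the endpoint URL, token and UUIDs below are placeholders):

    import requests

    NEUTRON_URL = 'http://controller:9696/v2.0'
    TOKEN = '<keystone-token>'
    FIP_ID = '<floating-ip-uuid>'

    body = {'port_forwarding': {
        'protocol': 'tcp',
        'internal_port_id': '<internal-port-uuid>',
        'internal_ip_address': '10.0.0.10',
        'internal_port': 22,
        'external_port': 2022,
    }}
    # POST /v2.0/floatingips/{fip_id}/port_forwardings per the api-ref.
    resp = requests.post(
        '%s/floatingips/%s/port_forwardings' % (NEUTRON_URL, FIP_ID),
        json=body, headers={'X-Auth-Token': TOKEN})
    resp.raise_for_status()
    print(resp.json())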

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1811352/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp