[Yahoo-eng-team] [Bug 1452641] Re: Static Ceph mon IP addresses in connection_info can prevent VM startup

2023-07-10 Thread Billy Olsen
This is not a charm bug; it's a limitation/bug in the way that nova
handles the BDM devices.

** Changed in: nova
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1452641

Title:
  Static Ceph mon IP addresses in connection_info can prevent VM startup

Status in OpenStack Compute (nova):
  Invalid
Status in nova package in Ubuntu:
  Triaged

Bug description:
  The Cinder rbd driver extracts the IP addresses of the Ceph mon servers
  from the Ceph mon map when the instance/volume connection is established.
  This info is then stored in nova's block-device-mapping table and is never
  re-validated down the line.

  Changing the Ceph mon servers' IP addresses will therefore prevent the
  instance from booting, as the stale connection info enters the instance's
  XML. One idea to fix this would be to use the information from ceph.conf
  directly, which should be an alias or a load balancer.
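
  A minimal sketch of that idea, assuming a ceph.conf whose "mon host"
  entry is comma-separated and points at an alias or load balancer; the
  function names and the connection_info layout are illustrative, not
  the actual nova/cinder API:

  import configparser

  def mon_hosts_from_ceph_conf(path="/etc/ceph/ceph.conf"):
      """Return the current mon address list from ceph.conf."""
      conf = configparser.ConfigParser()
      conf.read(path)
      # "mon host" may be an alias/VIP rather than static mon IPs.
      return [h.strip() for h in conf["global"]["mon host"].split(",")]

  def refresh_connection_info(connection_info):
      """Re-resolve mon addresses instead of trusting the stored BDM."""
      data = connection_info["data"]
      data["hosts"] = mon_hosts_from_ceph_conf()
      data["ports"] = ["6789"] * len(data["hosts"])
      return connection_info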

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1452641/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2026775] [NEW] Metadata agents do not parse X-Forwarded-For headers properly

2023-07-10 Thread Brian Haley
Public bug reported:

While looking at an unrelated issue I noticed log lines like this in the
neutron-ovn-metadata-agent log file:

  No port found in network b62452f3-ec93-4cd7-af2d-9f9eabb33b12 with IP
address 10.246.166.21,10.131.84.23

While it might seem harmless, looking at the code it only showed a
single value being logged:

  LOG.error("No port found in network %s with IP address %s",
network_id, remote_address)

The code in question is looking for a matching IP address, but will
never match the concatenated string.

Google shows that the additional IP address(es) that might be present in
this header are actually proxies:

  https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For

And sure enough in my case the second IP was always the same.

The code needs to be changed to account for proxies; they aren't
actually needed to look up which port is making the request, but they
could be logged for posterity.
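
A minimal sketch of the intended parsing (not the actual patch): the
client address is the first element of X-Forwarded-For, and anything
after it is the proxy chain, worth logging but not matching against:

  def split_x_forwarded_for(header):
      # X-Forwarded-For: <client>, <proxy1>, <proxy2>, ...
      addrs = [a.strip() for a in header.split(",")]
      return addrs[0], addrs[1:]  # (client address, proxy chain)

  remote_address, proxies = split_x_forwarded_for("10.246.166.21,10.131.84.23")
  # remote_address == "10.246.166.21"; proxies == ["10.131.84.23"]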

I'll send a change for that soon.

** Affects: neutron
 Importance: Medium
 Assignee: Brian Haley (brian-haley)
 Status: In Progress

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2026775

Title:
  Metadata agents do not parse X-Forwarded-For headers properly

Status in neutron:
  In Progress

Bug description:
  While looking at an unrelated issue I noticed log lines like this in
  the neutron-ovn-metadata-agent log file:

No port found in network b62452f3-ec93-4cd7-af2d-9f9eabb33b12 with
  IP address 10.246.166.21,10.131.84.23

  While it might seem harmless, looking at the code it only showed a
  single value being logged:

LOG.error("No port found in network %s with IP address %s",
  network_id, remote_address)

  The code in question is looking for a matching IP address, but will
  never match the concatenated string.

  Google shows that the additional IP address(es) that might be present
  in this header are actually proxies:

  https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Forwarded-For

  And sure enough in my case the second IP was always the same.

  The code needs to be changed to account for proxies; they aren't
  actually needed to look up which port is making the request, but they
  could be logged for posterity.

  I'll send a change for that soon.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2026775/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2026757] [NEW] dnsmasq on Ubuntu Jammy/Lunar crashes on neutron-dhcp-agent updates

2023-07-10 Thread Julia Kreger
Public bug reported:

The Ironic project's CI has been having major blocking issues moving to
Ubuntu Jammy, and with some investigation we were able to isolate them
down to DHCP updates causing dnsmasq to crash on Ubuntu Jammy, which
ships with dnsmasq 2.86. This issue sounds similar to one already known
to the dnsmasq maintainers, where dnsmasq would crash on updates
triggered by a configuration refresh[0].

This led us to upgrade dnsmasq to the version which ships with Ubuntu
Lunar, which was no better: dnsmasq still crashed upon record updates
for addresses and ports getting configuration added/changed/removed.

We later downgraded to the version of dnsmasq shipped in Ubuntu Focal,
and dnsmasq stopped crashing and appeared stable enough to utilize for
CI purposes.

** Kernel log entries from Ubuntu Jammy package **

[229798.876726] dnsmasq[81586]: segfault at 7c28 ip 7f6e8313147e sp 
7fffb3d6f830 error 4 in libc.so.6[7f6e830b4000+195000]
[229798.876745] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a
[229805.444912] dnsmasq[401428]: segfault at dce8 ip 7fe63bf6a47e sp 
7ffdb105b440 error 4 in libc.so.6[7fe63beed000+195000]
[229805.444933] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a
[230414.213448] dnsmasq[401538]: segfault at 78b8 ip 7f12160e447e sp 
7ffed6ef2190 error 4 in libc.so.6[7f1216067000+195000]
[230414.213467] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a
[230465.098989] dnsmasq[402665]: segfault at c378 ip 7f81458f047e sp 
7fff0db334a0 error 4 in libc.so.6[7f8145873000+195000]
[230465.099005] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a
[231787.247374] dnsmasq[402863]: segfault at 7318 ip 7f3940b9147e sp 
7ffc8df4f010 error 4 in libc.so.6[7f3940b14000+195000]
[231787.247392] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a
[231844.886399] dnsmasq[405182]: segfault at dc58 ip 7f32a29e147e sp 
7ffddedd7480 error 4 in libc.so.6[7f32a2964000+195000]
[231844.886420] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a
[234692.482154] dnsmasq[405289]: segfault at 67d8 ip 7fab0c5c447e sp 
7fffd6fd8fa0 error 4 in libc.so.6[7fab0c547000+195000]
[234692.482173] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a

** Kernel log entries from Ubuntu Lunar package **

[234724.842339] dnsmasq[409843]: segfault at fffd ip 
7f35a147647e sp 7ffd536038c0 error 5 in libc.so.6[7f35a13f9000+195000]
[234724.842368] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a
[234784.918116] dnsmasq[410019]: segfault at fffd ip 
7f634233947e sp 7fff33877f20 error 5 in libc.so.6[7f63422bc000+195000]
[234784.918133] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a
[235022.163339] dnsmasq[410151]: segfault at fffd ip 
7f21dd37f47e sp 7fff9bf416d0 error 5 in libc.so.6[7f21dd302000+195000]
[235022.163362] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a
[235024.831325] dnsmasq[410445]: segfault at fffd ip 
7f7edf02147e sp 7ffc4fb19cd0 error 5 in libc.so.6[7f7edefa4000+195000]
[235024.831354] Code: 98 13 00 e8 04 b9 ff ff 0f 1f 40 00 f3 0f 1e fa 48 85 ff 
0f 84 bb 00 00 00 55 48 8d 77 f0 53 48 83 ec 18 48 8b 1d 92 39 17 00 <48> 8b 47 
f8 64 8b 2b a8 02 75 57 48 8b 15 18 39 17 00 64 48 83 3a
[236052.793683] dnsmasq[410630]: segfault at fffd ip 
7f3046ca147e sp 7ffe5583df50 error 5 in libc.so.6[7f3046c24000+195000]
[236052.793704] Code: 98 13 00 e8 

[Yahoo-eng-team] [Bug 1424728] Re: Remove old rpc alias(es) from code

2023-07-10 Thread Dr. Jens Harbott
** Changed in: grenade
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1424728

Title:
  Remove old rpc alias(es) from code

Status in Cinder:
  Fix Released
Status in CloudPulse:
  In Progress
Status in Designate:
  Fix Released
Status in grenade:
  Invalid
Status in IoTronic:
  In Progress
Status in Ironic:
  Fix Released
Status in keystonemiddleware:
  Fix Released
Status in Magnum:
  Fix Released
Status in OpenStack Shared File Systems Service (Manila):
  Fix Released
Status in neutron:
  Fix Released
Status in OpenStack Compute (nova):
  Fix Released
Status in oslo.messaging:
  Fix Released
Status in Sahara:
  Fix Released
Status in OpenStack Searchlight:
  Fix Released
Status in watcher:
  Fix Released

Bug description:
  We have several TRANSPORT_ALIASES entries from way back (Essex, Havana)
  http://git.openstack.org/cgit/openstack/nova/tree/nova/rpc.py#n48

  We need a way to warn end users that they need to fix their nova.conf,
  so these aliases can be removed in a later release (a full cycle?).
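
  A minimal sketch of such a warning; TRANSPORT_ALIASES is abbreviated
  from the real table in nova/rpc.py, but resolve_transport() and its
  placement are hypothetical:

  import logging

  LOG = logging.getLogger(__name__)

  # Abbreviated from the real nova/rpc.py alias table.
  TRANSPORT_ALIASES = {
      "nova.rpc.impl_kombu": "rabbit",
      "nova.rpc.impl_qpid": "qpid",
  }

  def resolve_transport(rpc_backend):
      # Warn (instead of silently mapping) so operators fix nova.conf
      # before the alias is dropped a cycle later.
      if rpc_backend in TRANSPORT_ALIASES:
          real = TRANSPORT_ALIASES[rpc_backend]
          LOG.warning("rpc_backend '%s' is a deprecated alias for '%s'; "
                      "update nova.conf before the alias is removed.",
                      rpc_backend, real)
          return real
      return rpc_backend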

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1424728/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025096] Re: test_rebuild_volume_backed_server failing 100% on ceph job

2023-07-10 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2025096

Title:
  test_rebuild_volume_backed_server failing 100% on ceph job

Status in Cinder:
  Invalid
Status in devstack:
  Fix Released
Status in devstack-plugin-ceph:
  Invalid
Status in OpenStack Compute (nova):
  Invalid
Status in tempest:
  Invalid

Bug description:
  There is some issue in the ceph job with password injection during the
  rebuild operation, and due to that the test is failing 100% of the time
  on the ceph job.

  These tests pass on other jobs like tempest-full-py3

  Failure logs:

  https://b932a1446345e101b3ef-4740624f0848c8c3257f704064a4516f.ssl.cf5.rackcdn.com/883557/2/check/nova-ceph-multistore/d7db64f/testr_results.html

  Response - Headers: {'date': 'Mon, 26 Jun 2023 01:07:28 GMT', 'server': 
'Apache/2.4.52 (Ubuntu)', 'x-openstack-request-id': 
'req-f707a2bb-a7c6-4e65-93e2-7cb8195dd331', 'connection': 'close', 'status': 
'204', 'content-location': 
'https://10.209.99.44:9696/networking/v2.0/security-groups/63dc9e50-2d05-4cfa-912d-92a3c9283e42'}
  Body: b''
  2023-06-26 01:07:28,442 108489 INFO [tempest.lib.common.rest_client] 
Request (ServerActionsV293TestJSON:_run_cleanups): 404 GET 
https://10.209.99.44:9696/networking/v2.0/security-groups/63dc9e50-2d05-4cfa-912d-92a3c9283e42
 0.034s
  2023-06-26 01:07:28,442 108489 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'date': 'Mon, 26 Jun 2023 01:07:28 GMT', 'server': 
'Apache/2.4.52 (Ubuntu)', 'content-type': 'application/json', 'content-length': 
'146', 'x-openstack-request-id': 'req-ae967163-b104-4ddf-b1e8-bb6da298b498', 
'connection': 'close', 'status': '404', 'content-location': 
'https://10.209.99.44:9696/networking/v2.0/security-groups/63dc9e50-2d05-4cfa-912d-92a3c9283e42'}
  Body: b'{"NeutronError": {"type": "SecurityGroupNotFound", "message": 
"Security group 63dc9e50-2d05-4cfa-912d-92a3c9283e42 does not exist", "detail": 
""}}'
  2023-06-26 01:07:29,135 108489 INFO [tempest.lib.common.rest_client] 
Request (ServerActionsV293TestJSON:_run_cleanups): 204 DELETE 
https://10.209.99.44:9696/networking/v2.0/floatingips/c6cc0747-06bd-4783-811d-2408a72db3cc
 0.692s
  2023-06-26 01:07:29,135 108489 DEBUG[tempest.lib.common.rest_client] 
Request - Headers: {'Content-Type': 'application/json', 'Accept': 
'application/json', 'X-Auth-Token': ''}
  Body: None
  Response - Headers: {'date': 'Mon, 26 Jun 2023 01:07:29 GMT', 'server': 
'Apache/2.4.52 (Ubuntu)', 'x-openstack-request-id': 
'req-e0797282-5cc1-4d86-b2ec-696feed6369a', 'connection': 'close', 'status': 
'204', 'content-location': 
'https://10.209.99.44:9696/networking/v2.0/floatingips/c6cc0747-06bd-4783-811d-2408a72db3cc'}
  Body: b''
  }}}

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 136, in 
_get_ssh_connection
  ssh.connect(self.host, port=self.port, username=self.username,
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/paramiko/client.py",
 line 365, in connect
  sock.connect(addr)
  TimeoutError: timed out

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in 
wrapper
  return f(*func_args, **func_kwargs)
File 
"/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 
927, in test_rebuild_volume_backed_server
  linux_client.validate_authentication()
File "/opt/stack/tempest/tempest/lib/common/utils/linux/remote_client.py", 
line 31, in wrapper
  return function(self, *args, **kwargs)
File "/opt/stack/tempest/tempest/lib/common/utils/linux/remote_client.py", 
line 123, in validate_authentication
  self.ssh_client.test_connection_auth()
File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 245, in 
test_connection_auth
  connection = self._get_ssh_connection()
File "/opt/stack/tempest/tempest/lib/common/ssh.py", line 155, in 
_get_ssh_connection
  raise exceptions.SSHTimeout(host=self.host,
  tempest.lib.exceptions.SSHTimeout: Connection to the 172.24.5.20 via SSH 
timed out.
  User: cirros, Password: rebuildPassw0rd

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/2025096/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025486] Re: [stable/wallaby] neutron-tempest-plugin-scenario-ovn-wallaby fails on ovn git clone

2023-07-10 Thread Dr. Jens Harbott
** Changed in: devstack
   Status: In Progress => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025486

Title:
  [stable/wallaby] neutron-tempest-plugin-scenario-ovn-wallaby fails on
  ovn git clone

Status in devstack:
  Invalid
Status in neutron:
  Fix Released

Bug description:
  Since 2023-06-30, the neutron-tempest-plugin-scenario-ovn-wallaby job
  started to fail 100% in stable/wallaby backports:

  https://zuul.opendev.org/t/openstack/builds?job_name=neutron-tempest-plugin-scenario-ovn-wallaby&project=openstack/neutron

  
  Sample failure grabbed from 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_288/887253/2/check/neutron-tempest-plugin-scenario-ovn-wallaby/288071d/job-output.txt

  2023-06-30 11:00:07.584319 | controller | + functions-common:git_timed:644
   :   timeout -s SIGINT 0 git clone https://github.com/ovn-org/ovn.git 
/opt/stack/ovn --branch 36e3ab9b47e93af0599a818e9d6b2930e49473f0
  2023-06-30 11:00:07.587213 | controller | Cloning into '/opt/stack/ovn'...
  2023-06-30 11:00:07.828809 | controller | fatal: Remote branch 
36e3ab9b47e93af0599a818e9d6b2930e49473f0 not found in upstream origin

  I think I recall some recent fixes (to devstack maybe) changing git
  clone/checkout; is this related, and just a missing backport to
  wallaby? Newer branches are fine.
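
  For reference, a minimal sketch of the pattern that works: `git clone
  --branch` only accepts branch or tag names, so a bare commit hash has
  to be cloned first and checked out afterwards (repo and paths taken
  from the failing job above):

  import subprocess

  repo = "https://github.com/ovn-org/ovn.git"
  ref = "36e3ab9b47e93af0599a818e9d6b2930e49473f0"
  dest = "/opt/stack/ovn"

  # Clone the default branch, then move to the pinned commit.
  subprocess.run(["git", "clone", repo, dest], check=True)
  subprocess.run(["git", "-C", dest, "checkout", ref], check=True)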

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/2025486/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025144] Re: [OVN] ``update_floatingip`` should handle the case when only the QoS policy is updated

2023-07-10 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025144

Title:
  [OVN] ``update_floatingip`` should handle the case when only the QoS
  policy is updated

Status in neutron:
  Fix Released

Bug description:
  The ``OVNClient.update_floatingip`` method deletes and re-creates the
  OVN NAT rules when a FIP is updated. However, this process is not
  necessary if only the QoS policy is updated; only the QoS driver call
  is needed. That speeds up the FIP update and avoids the
  ``FIPAddDeleteEvent`` being triggered twice, when the NAT record is
  first deleted and then added (if there is a fixed port associated with
  the FIP).
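
  A minimal sketch of the short-circuit, with illustrative helper names
  rather than the merged patch:

  class OVNClientSketch:
      """Illustrative stand-in for OVNClient; helpers are hypothetical."""

      def __init__(self, qos_driver):
          self._qos_driver = qos_driver

      def update_floatingip(self, context, floatingip, fip_request):
          if set(fip_request) == {"qos_policy_id"}:
              # Only the QoS policy changed: leave the NAT rules alone
              # so FIPAddDeleteEvent never fires; just update QoS.
              self._qos_driver.update_floatingip(context, floatingip)
              return
          # Any other change keeps the existing delete/re-create path.
          self._delete_nat_rules(context, floatingip)
          self._create_nat_rules(context, floatingip, fip_request)
          self._qos_driver.update_floatingip(context, floatingip)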

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025144/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2025486] Re: [stable/wallaby] neutron-tempest-plugin-scenario-ovn-wallaby fails on ovn git clone

2023-07-10 Thread Rodolfo Alonso
Fixed in
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/888029.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025486

Title:
  [stable/wallaby] neutron-tempest-plugin-scenario-ovn-wallaby fails on
  ovn git clone

Status in devstack:
  In Progress
Status in neutron:
  Fix Released

Bug description:
  Since 2023-06-30, the neutron-tempest-plugin-scenario-ovn-wallaby job
  started to fail 100% in stable/wallaby backports:

  https://zuul.opendev.org/t/openstack/builds?job_name=neutron-tempest-plugin-scenario-ovn-wallaby&project=openstack/neutron

  
  Sample failure grabbed from 
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_288/887253/2/check/neutron-tempest-plugin-scenario-ovn-wallaby/288071d/job-output.txt

  2023-06-30 11:00:07.584319 | controller | + functions-common:git_timed:644
   :   timeout -s SIGINT 0 git clone https://github.com/ovn-org/ovn.git 
/opt/stack/ovn --branch 36e3ab9b47e93af0599a818e9d6b2930e49473f0
  2023-06-30 11:00:07.587213 | controller | Cloning into '/opt/stack/ovn'...
  2023-06-30 11:00:07.828809 | controller | fatal: Remote branch 
36e3ab9b47e93af0599a818e9d6b2930e49473f0 not found in upstream origin

  I think I recall some recent fixes (to devstack maybe) changing git
  clone/checkout; is this related, and just a missing backport to
  wallaby? Newer branches are fine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack/+bug/2025486/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1953165] Re: DHCP agent fails to fully configure DHCP namespaces because of duplicate address detected

2023-07-10 Thread Rodolfo Alonso
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1953165

Title:
  DHCP agent fails to fully configure DHCP namespaces because of
  duplicate address detected

Status in neutron:
  Fix Released

Bug description:
  After upgrading a Neutron/ML2 OVS deployment from Ussuri to Victoria,
  updating the host OS from CentOS Linux 8 to CentOS Stream 8, and
  rebooting, DHCP was not functional on some but not all networks.

  DHCP agent logs included the following error multiple times:

  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent [-] Failure waiting 
for address fe80::a9fe:a9fe to become ready: Duplicate address detected: 
neutron.agent.linux.ip_lib.AddressNotReady: Failure waiting for address 
fe80::a9fe:a9fe to become ready: Duplicate address detected
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent Traceback (most 
recent call last):
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/common/utils.py", line 
165, in call
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent return 
func(*args, **kwargs)
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/agent/dhcp/agent.py", 
line 401, in safe_configure_dhcp_for_network
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent 
self.configure_dhcp_for_network(network)
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/osprofiler/profiler.py", line 
160, in wrapper
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent result = 
f(*args, **kwargs)
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/agent/dhcp/agent.py", 
line 415, in configure_dhcp_for_network
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent 
self.update_isolated_metadata_proxy(network)
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/osprofiler/profiler.py", line 
160, in wrapper
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent result = 
f(*args, **kwargs)
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/agent/dhcp/agent.py", 
line 758, in update_isolated_metadata_proxy
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent 
self.enable_isolated_metadata_proxy(network)
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/osprofiler/profiler.py", line 
160, in wrapper
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent result = 
f(*args, **kwargs)
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/agent/dhcp/agent.py", 
line 816, in enable_isolated_metadata_proxy
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent self.conf, 
bind_address=constants.METADATA_V4_IP, **kwargs)
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/agent/metadata/driver.py",
 line 271, in spawn_monitored_metadata_proxy
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent 
).wait_until_address_ready(address=bind_address_v6)
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py",
 line 597, in wait_until_address_ready
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent 
exception=AddressNotReady(address=address, reason=errmsg))
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/common/utils.py", line 
701, in wait_until_true
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent while not 
predicate():
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent   File 
"/var/lib/kolla/venv/lib/python3.6/site-packages/neutron/agent/linux/ip_lib.py",
 line 591, in is_address_ready
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent address=address, 
reason=_('Duplicate address detected'))
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent 
neutron.agent.linux.ip_lib.AddressNotReady: Failure waiting for address 
fe80::a9fe:a9fe to become ready: Duplicate address detected
  2021-11-30 17:05:35.475 7 ERROR neutron.agent.dhcp.agent 

  The tap interface inside each affected qdhcp namespace was in a state
  like this:

  35: tap0f8bb343-c1:  mtu 1450 qdisc noqueue 
state UNKNOWN group default qlen 1000
  link/ether fa:16:3e:ed:6f:60 brd ff:ff:ff:ff:ff:ff
  inet 169.254.169.254/32 brd 169.254.169.254 scope global