[Yahoo-eng-team] [Bug 2035286] [NEW] [ironic driver] Shards are not properly queried for

2023-09-12 Thread Jay Faulkner
Public bug reported:

While attempting to set up CI to further validate support for shards, I
found the following behavior:

- the ironic api logs showed no shard= being provided in the query
- no nodes showed in the resource inventory, failing CI on this change: 
https://review.opendev.org/c/openstack/ironic/+/894460

On a held devstack node, I was able to reproduce the problem and resolve
it by using the proper, plural query string "shards" instead of "shard".

Patch incoming.
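For illustration, a minimal sketch of the difference (this is not the nova
driver code; the endpoint, token and microversion header below are
placeholders/assumptions):

```
import requests

# Hypothetical reproduction sketch: list Ironic nodes filtered by shard.
# The endpoint, token and microversion header are placeholders.
IRONIC = "http://ironic-api:6385/v1/nodes"
HEADERS = {"X-Auth-Token": "<token>",
           "X-OpenStack-Ironic-API-Version": "1.82"}

# Wrong: using the singular key; no usable shard filter reaches the API,
# so the API logs show no shard filtering and the inventory comes back empty.
requests.get(IRONIC, headers=HEADERS, params={"shard": "my-shard"})

# Correct per this report: the node listing expects the plural query string.
requests.get(IRONIC, headers=HEADERS, params={"shards": "my-shard"})
```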

** Affects: nova
 Importance: Undecided
 Status: In Progress


** Tags: ironic

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2035286

Title:
  [ironic driver] Shards are not properly queried for

Status in OpenStack Compute (nova):
  In Progress

Bug description:
  While attempting to set up CI to further validate support for shards, I
  found the following behavior:

  - the ironic api logs showed no shard= being provided in the query
  - no nodes showed in the resource inventory, failing CI on this change: 
https://review.opendev.org/c/openstack/ironic/+/894460

  On a held devstack node, I was able to reproduce the problem and resolve
  it by using the proper, plural query string "shards" instead of "shard".

  Patch incoming.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2035286/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2034952] Re: dashboard failures with Django 4.2.4

2023-09-12 Thread Corey Bryant
** Summary changed:

- manila-ui failures with Django 4.2.4
+ dashboard failures with Django 4.2.4

** Also affects: neutron-vpnaas-dashboard
   Importance: Undecided
   Status: New

** Description changed:

  Running unit tests with Django==4.2.4 results in 124 errors. They seem
  to be limited to the same error:
+ 
+ == manila-ui ==
  
  ==
  ERROR: test_migration_get_progress 
(manila_ui.tests.dashboards.admin.shares.test_forms.ManilaDashboardsAdminMigrationFormTests)
  --
  Traceback (most recent call last):
    File 
"/home/corey/pkg/bobcat/upstream/manila-ui/manila_ui/tests/dashboards/admin/shares/test_forms.py",
 line 29, in setUp
  self.request = wsgi.WSGIRequest(FAKE_ENVIRON)
    File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/django/core/handlers/wsgi.py",
 line 78, in __init__
  self._stream = LimitedStream(self.environ["wsgi.input"], content_length)
    File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/django/core/handlers/wsgi.py",
 line 24, in __init__
  self._read = stream.read
  AttributeError: 'str' object has no attribute 'read'
  
  To reproduce:
  git clone https://opendev.org/openstack/manila-ui; cd manila-ui
  tox -e py3
  .tox/py3/bin/pip3 install Django==4.2.4
  tox -e py3
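As a rough sketch of the test-side fix this error points at (hypothetical
values; the real FAKE_ENVIRON in manila-ui differs): Django >= 4.2 wraps
'wsgi.input' in LimitedStream, which needs a file-like object rather than a
plain string.

```
import io

from django.conf import settings
from django.core.handlers import wsgi

if not settings.configured:
    settings.configure()  # minimal settings so WSGIRequest can be built

# Hypothetical sketch, not the actual manila-ui patch: 'wsgi.input' must be
# a real stream on Django >= 4.2, because LimitedStream calls .read() on it.
# Passing a plain str triggers: AttributeError: 'str' object has no
# attribute 'read'.
FAKE_ENVIRON = {
    "REQUEST_METHOD": "GET",
    "CONTENT_LENGTH": "0",
    "wsgi.input": io.BytesIO(b""),  # was previously a plain string
}
request = wsgi.WSGIRequest(FAKE_ENVIRON)
```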
  
  
+ == horizon ==
+ 
  I'm also hitting a similar error that seems to need fixing in horizon:
  
  ==
  ERROR: test_view_1_None 
(manila_ui.tests.dashboards.project.user_messages.tests.UserMessagesViewTests)
  --
- Traceback (most recent call last):
 
-   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/ddt.py",
 line 220, in wrapper
- return func(self, *args, **kwargs)   
-   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/manila_ui/tests/dashboards/project/user_messages/tests.py",
 line 44, in test_view
- self.assertNoMessages()  
-   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/horizon/test/helpers.py",
 line 195, in assertNoMessages
- self.assertMessageCount(response, success=0, warn=0, info=0, error=0)
-   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/horizon/test/helpers.py",
 line 203, in assertMessageCount
- temp_req = self.client.request(**{'wsgi.input': None})
-   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/django/test/client.py",
 line 886, in request
- response = self.handler(environ) 
-   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/django/test/client.py",
 line 168, in __call__
- request = WSGIRequest(environ)   
-   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/django/core/handlers/wsgi.py",
 line 79, in __init__
- self._stream = LimitedStream(self.environ["wsgi.input"], content_length)
-   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/django/core/handlers/wsgi.py",
 line 25, in __init__
- self._read = stream.read
+ Traceback (most recent call last):
+   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/ddt.py",
 line 220, in wrapper
+ return func(self, *args, **kwargs)
+   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/manila_ui/tests/dashboards/project/user_messages/tests.py",
 line 44, in test_view
+ self.assertNoMessages()
+   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/horizon/test/helpers.py",
 line 195, in assertNoMessages
+ self.assertMessageCount(response, success=0, warn=0, info=0, error=0)
+   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/horizon/test/helpers.py",
 line 203, in assertMessageCount
+ temp_req = self.client.request(**{'wsgi.input': None})
+   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/django/test/client.py",
 line 886, in request
+ response = self.handler(environ)
+   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/django/test/client.py",
 line 168, in __call__
+ request = WSGIRequest(environ)
+   File 
"/home/corey/pkg/bobcat/upstream/manila-ui/.tox/py3/lib/python3.10/site-packages/django/core/handlers/wsgi.py",
 line 79, in __init__
+ self._stream = LimitedStream(self.environ["wsgi.input"], content_length)
+   File 

[Yahoo-eng-team] [Bug 2035281] [NEW] [ML2/OVN] DGP/Floating IP issue - no flows for chassis gateway port

2023-09-12 Thread Roberto Bartzen Acosta
Public bug reported:

Hello everyone.

I noticed a problem with the DGP feature when it is configured by OpenStack
Neutron using multiple external (provider) subnets.

For example, the OpenStack external provider network has multiple
subnets, such as:

subnet1: 172.16.10.0/24
subnet2: 172.16.20.0/24

When the Logical Router attaches the external gateway port to this
network, only one subnet is configured (statically or dynamically), e.g. IP
address = 172.16.10.1/24.

If the Floating IP assigned to a VM uses the same subnet range as the
router's IP network, the dnat_and_snat rule will be created correctly and
inbound/outbound traffic will work. However, when the Floating IP uses the
other subnet (not the same subnet as the external router port), the
dnat_and_snat rule is not created and we can see the warning message in the
log below:

2023-09-08T13:29:40.721Z|00202|northd|WARN|Unable to determine
gateway_port for NAT with external_ip: 172.16.20.157 configured on
logical router: neutron-477cf920-21e3-46e5-8c8f-7b8caef7f549 with
multiple distributed gateway ports

This problem occurs because Neutron has not configured the "gateway_port"
parameter in the OVN NAT rule. In this case, northd [1] automatically
obtains the gateway port using the external IP from the NAT rule and the
external network configured on the OVN logical router. This issue was
introduced with these commits [1][2] and affects the ML2/OVN backend since
OVN version 21.09.

This problem was discussed on the ovs-discuss mailing list [3], but
technically it seems to me that a change is required in the CMS to
guarantee the creation of FIP flows without having to rely on OVN to
automatically discover the gateway port.
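For illustration only, a rough sketch of the kind of change this implies on
the Neutron side: explicitly populating the NAT rule's gateway_port column
via an ovsdbapp-style call (the function and variable names below are
assumptions, not the actual ML2/OVN patch):

```
# Hypothetical sketch -- not the actual ML2/OVN driver code. The idea is
# that when a router has multiple distributed gateway ports, the CMS sets
# gateway_port explicitly on the dnat_and_snat NAT rule, so northd does not
# have to guess the gateway port from the external IP.
def set_fip_gateway_port(nb_api, nat_uuid, gw_lrp_uuid):
    # ovsdbapp exposes a generic db_set() for OVN NB columns; point the NAT
    # row at the logical router port that should handle this FIP.
    nb_api.db_set('NAT', nat_uuid,
                  ('gateway_port', gw_lrp_uuid)).execute(check_error=True)
```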


If OVN is using a Distributed Gateway Port on the router, the FIP created by
Neutron will not work due to the lack of OpenFlow flows for the gateway port:

Before setting the gateway_port:

ovn-nbctl lr-nat-list 078fd69b-f4c7-4469-a900-918d0a229bd1 
TYPE           GATEWAY_PORT         EXTERNAL_IP     EXTERNAL_PORT   LOGICAL_IP    EXTERNAL_MAC  LOGICAL_PORT
dnat_and_snat                       172.16.20.10                    10.0.0.232
snat                                172.16.10.41                    10.0.0.0/24

ovn-sbctl lflow-list | grep 172.16.20.10
  table=25(ls_in_l2_lkup  ), priority=80   , match=(flags[1] == 0 && arp.op 
== 1 && arp.tpa == 172.16.20.10), action=(clone {outport = "admin-rt1-tenant1"; 
output; }; outport = "_MC_flood_l2"; output;)
  table=25(ls_in_l2_lkup  ), priority=80   , match=(flags[1] == 0 && arp.op 
== 1 && arp.tpa == 172.16.20.10), action=(clone {outport = 
"1cda494c-4e86-4941-9680-b949341b12a5"; output; }; outport = "_MC_flood_l2"; 
output;)
  table=25(ls_in_l2_lkup  ), priority=80   , match=(flags[1] == 0 && arp.op 
== 1 && arp.tpa == 172.16.20.10), action=(clone {outport = 
"bdf0ad70-8677-4340-b5ec-f26af6575e5e"; output; }; outport = "_MC_flood_l2"; 
output;)
  table=25(ls_in_l2_lkup  ), priority=80   , match=(flags[1] == 0 && arp.op 
== 1 && arp.tpa == 172.16.20.10), action=(clone {outport = 
"e77e522c-5170-4566-a7b5-1b6ef9f88000"; output; }; outport = "_MC_flood_l2"; 
output;)
  table=3 (lr_in_ip_input ), priority=90   , match=(arp.op == 1 && arp.tpa 
== 172.16.20.10), action=(eth.dst = eth.src; eth.src = xreg0[0..47]; arp.op = 
2; /* ARP reply */ arp.tha = arp.sha; arp.sha = xreg0[0..47]; arp.tpa <-> 
arp.spa; outport = inport; flags.loopback = 1; output;)
  table=25(ls_in_l2_lkup  ), priority=80   , match=(flags[1] == 0 && arp.op 
== 1 && arp.tpa == 172.16.20.10), action=(clone {outport = 
"bb89ed8d-a60d-4a9d-8210-205770490180"; output; }; outport = "_MC_flood_l2"; 
output;)


After setting the gateway_port:
ovn-nbctl lr-nat-list 078fd69b-f4c7-4469-a900-918d0a229bd1 
TYPE           GATEWAY_PORT         EXTERNAL_IP     EXTERNAL_PORT   LOGICAL_IP    EXTERNAL_MAC  LOGICAL_PORT
dnat_and_snat  lrp-bb89ed8d-a60d-   172.16.20.10                    10.0.0.232
snat                                172.16.10.41                    10.0.0.0/24

ovn-sbctl lflow-list | grep 172.16.20.10
  table=25(ls_in_l2_lkup  ), priority=80   , match=(flags[1] == 0 && arp.op 
== 1 && arp.tpa == 172.16.20.10), action=(clone {outport = "admin-rt1-tenant1"; 
output; }; outport = "_MC_flood_l2"; output;)
  table=25(ls_in_l2_lkup  ), priority=80   , match=(flags[1] == 0 && arp.op 
== 1 && arp.tpa == 172.16.20.10), action=(clone {outport = 
"1cda494c-4e86-4941-9680-b949341b12a5"; output; }; outport = "_MC_flood_l2"; 
output;)
  table=25(ls_in_l2_lkup  ), priority=80   , match=(flags[1] == 0 && arp.op 
== 1 && arp.tpa == 172.16.20.10), action=(clone {outport = 
"bdf0ad70-8677-4340-b5ec-f26af6575e5e"; output; }; outport = "_MC_flood_l2"; 
output;)
  table=25(ls_in_l2_lkup  ), priority=80   , match=(flags[1] == 0 && arp.op 
== 1 && arp.tpa == 172.16.20.10), action=(clone {outport = 
"e77e522c-5170-4566-a7b5-1b6ef9f88000"; output; }; 

[Yahoo-eng-team] [Bug 2035230] [NEW] Port on creation is returned without the fixed_ips field populated

2023-09-12 Thread Christian Rohmann
Public bug reported:

I ran into an issue with port creation when using the OpenStack Terraform
provider (which in turn uses Gophercloud) to access the Neutron API:
https://github.com/terraform-provider-openstack/terraform-provider-openstack/issues/1606


In short, a port is created, but that port does not have the `fixed_ips`
field populated.
This is the response to the create request:

```
{
  "ports": [
{
  "admin_state_up": true,
  "allowed_address_pairs": [
{
  "ip_address": "10.3.4.0/24",
  "mac_address": "fa:16:3e:3a:58:ec"
}
  ],
  "binding:vnic_type": "normal",
  "created_at": "2023-08-23T10:06:12Z",
  "description": "",
  "device_id": "",
  "device_owner": "",
  "dns_assignment": [],
  "dns_name": "",
  "extra_dhcp_opts": [],
  "fixed_ips": [],
  "id": "9b37978b-ed53-41c2-983f-31570eb88259",
  "mac_address": "fa:16:3e:3a:58:ec",
  "name": "vpn",
  "network_id": "f946cedc-94d1-4bde-a680-f59d615ad2e3",
  "port_security_enabled": true,
  "project_id": "REDACTED",
  "revision_number": 1,
  "security_groups": [
"87acb073-5123-4473-b33b-fc78f522c6b8"
  ],
  "status": "DOWN",
  "tags": [],
  "tenant_id": "REDACTED",
  "updated_at": "2023-08-23T10:06:12Z"
}
  ]
}
```


If you look at my attempt to dive into the mechanics at
https://github.com/terraform-provider-openstack/terraform-provider-openstack/issues/1606#issuecomment-1692082094,
I am wondering whether the IP allocation might be happening asynchronously,
so that the response to the API call does not contain the fixed_ips if the
allocation takes too long.


All in all, I just kindly ask for a qualified comment on whether this is
expected behavior (to be handled within the Terraform provider) or a bug
(to be fixed on the Neutron side).
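
In the meantime, a possible client-side workaround sketch (hypothetical,
plain requests against the Neutron API; the endpoint and token are
placeholders) is to re-read the port until fixed_ips is populated:

```
import time

import requests

# Hypothetical workaround sketch: poll the port after creation until
# fixed_ips shows up, in case IP allocation lands slightly after the
# create response. Endpoint and token are placeholders.
NEUTRON = "http://neutron-api:9696/v2.0"
HEADERS = {"X-Auth-Token": "<token>"}

def wait_for_fixed_ips(port_id, timeout=30, interval=2):
    deadline = time.time() + timeout
    while time.time() < deadline:
        resp = requests.get(f"{NEUTRON}/ports/{port_id}", headers=HEADERS)
        resp.raise_for_status()
        fixed_ips = resp.json()["port"]["fixed_ips"]
        if fixed_ips:
            return fixed_ips
        time.sleep(interval)
    raise TimeoutError(f"port {port_id} has no fixed_ips after {timeout}s")
```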

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2035230

Title:
  Port on creation is returned without the fixed_ips field populated

Status in neutron:
  New

Bug description:
  I ran into an issue with port creation when using the OpenStack Terraform
  provider (which in turn uses Gophercloud) to access the Neutron API:
  https://github.com/terraform-provider-openstack/terraform-provider-openstack/issues/1606

  
  In short, a port is created, but that port does not have the `fixed_ips`
  field populated.
  This is the response to the create request:

  ```
  {
"ports": [
  {
"admin_state_up": true,
"allowed_address_pairs": [
  {
"ip_address": "10.3.4.0/24",
"mac_address": "fa:16:3e:3a:58:ec"
  }
],
"binding:vnic_type": "normal",
"created_at": "2023-08-23T10:06:12Z",
"description": "",
"device_id": "",
"device_owner": "",
"dns_assignment": [],
"dns_name": "",
"extra_dhcp_opts": [],
"fixed_ips": [],
"id": "9b37978b-ed53-41c2-983f-31570eb88259",
"mac_address": "fa:16:3e:3a:58:ec",
"name": "vpn",
"network_id": "f946cedc-94d1-4bde-a680-f59d615ad2e3",
"port_security_enabled": true,
"project_id": "REDACTED",
"revision_number": 1,
"security_groups": [
  "87acb073-5123-4473-b33b-fc78f522c6b8"
],
"status": "DOWN",
"tags": [],
"tenant_id": "REDACTED",
"updated_at": "2023-08-23T10:06:12Z"
  }
]
  }
  ```

  
  If you look at my attempt to dive into the mechanics at
  https://github.com/terraform-provider-openstack/terraform-provider-openstack/issues/1606#issuecomment-1692082094,
  I am wondering whether the IP allocation might be happening asynchronously,
  so that the response to the API call does not contain the fixed_ips if the
  allocation takes too long.



  All in all, I just kindly ask for a qualified comment on whether this is
  expected behavior (to be handled within the Terraform provider) or a bug
  (to be fixed on the Neutron side).

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2035230/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2018967] Re: [fwaas] test_update_firewall_group fails randomly

2023-09-12 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-fwaas/+/884333
Committed: 
https://opendev.org/openstack/neutron-fwaas/commit/5b56eaf3b082fddaf219a8a6d7af53ba4840480c
Submitter: "Zuul (22348)"
Branch:master

commit 5b56eaf3b082fddaf219a8a6d7af53ba4840480c
Author: zhouhenglc 
Date:   Thu May 25 16:06:07 2023 +0800

Firewall group associated with ports is not allowed to be deleted

    Currently, we determine that a firewall group is in use based on
    its ACTIVE status. But the firewall group may have just had a port
    updated and be in PENDING_UPDATE status; deletion should not be
    allowed at this time.
    This patch changes the deletion check for firewall groups: it is no
    longer based on their status but, like other neutron resources, on
    whether or not any ports are associated.

Closes-Bug: #2018967
Depends-On: 
https://review.opendev.org/c/openstack/neutron-tempest-plugin/+/883826

Change-Id: Ib7ab0daf9f6de45125ffc9408f865fc0964ff339
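
As a rough illustration of the check described above (hypothetical names,
not the actual neutron-fwaas code), deletion is refused based on port
association rather than on the group's status:

```
# Illustrative sketch only -- not the actual neutron-fwaas change. The
# deletion guard keys off whether any ports are still associated with the
# firewall group, instead of whether its status happens to be ACTIVE.
class FirewallGroupInUse(Exception):
    pass

def check_firewall_group_deletable(firewall_group):
    if firewall_group.get("ports"):
        raise FirewallGroupInUse(
            "Firewall group %s still has ports associated and cannot be "
            "deleted" % firewall_group["id"])
```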


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2018967

Title:
  [fwaas] test_update_firewall_group fails randomly

Status in neutron:
  Fix Released

Bug description:
  Seen twice till now recently:-
  - 
https://a78793e982809689fe25-25fa16d377ec97c08c4e6ce3af683bd9.ssl.cf5.rackcdn.com/881232/1/check/neutron-tempest-plugin-fwaas/b0730f9/testr_results.html
  - 
https://53a7c53d508ecea7485c-f8ccc2b7c32dd8ba5caab7dc1c36a741.ssl.cf5.rackcdn.com/881232/1/gate/neutron-tempest-plugin-fwaas/5712826/testr_results.html

  Fails as below:-
  traceback-1: {{{
  Traceback (most recent call last):
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/api/test_fwaasv2_extensions.py",
 line 130, in _try_delete_firewall_group
  self.firewall_groups_client.delete_firewall_group(fwg_id)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 38, in delete_firewall_group
  return self.delete_resource(uri)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
  resp, body = self.delete(req_uri)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 339, in 
delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
  self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 867, in 
_error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: Conflict with state of target resource
  Details: {'type': 'FirewallGroupInUse', 'message': 'Firewall group 
57266ed6-c39c-4be2-80d8-649469adf7eb is still active.', 'detail': ''}
  }}}

  traceback-2: {{{
  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/utils/test_utils.py", line 87, 
in call_and_ignore_notfound_exc
  return func(*args, **kwargs)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 100, in delete_firewall_policy
  return self.delete_resource(uri)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
  resp, body = self.delete(req_uri)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 339, in 
delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 
request
  self._error_checker(resp, resp_body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 867, in 
_error_checker
  raise exceptions.Conflict(resp_body, resp=resp)
  tempest.lib.exceptions.Conflict: Conflict with state of target resource
  Details: {'type': 'FirewallPolicyInUse', 'message': 'Firewall policy 
0e23c50e-28a9-41e5-829c-9a67d058bafd is being used.', 'detail': ''}
  }}}

  traceback-3: {{{
  Traceback (most recent call last):
File "/opt/stack/tempest/tempest/lib/common/utils/test_utils.py", line 87, 
in call_and_ignore_notfound_exc
  return func(*args, **kwargs)
File 
"/opt/stack/tempest/.tox/tempest/lib/python3.10/site-packages/neutron_tempest_plugin/fwaas/services/v2_client.py",
 line 100, in delete_firewall_policy
  return self.delete_resource(uri)
File "/opt/stack/tempest/tempest/lib/services/network/base.py", line 42, in 
delete_resource
  resp, body = self.delete(req_uri)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 339, in 
delete
  return self.request('DELETE', url, extra_headers, headers, body)
File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 742, in 

[Yahoo-eng-team] [Bug 2034684] Re: UEFI (edk2/ovmf) network boot with OVN fail because no DHCP release reply

2023-09-12 Thread Rodolfo Alonso
Removing the Neutron dependency. We'll monitor the core OVN bug to track
the progress and test it.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2034684

Title:
  UEFI (edk2/ovmf) network boot with OVN fail because no DHCP release
  reply

Status in Ironic:
  New
Status in neutron:
  Invalid

Bug description:
  When attempting to verify neutron change[1], we discovered that,
  despite the options in the DHCPv6 ADVERTISE and REQUEST/REPLY being
  correct, network booting still fails.

  When comparing a traffic capture from the openvswitch + neutron-dhcp-agent
  setup with one from the OVN setup, a significant difference is that:
  * neutron-dhcp-agent (dnsmasq) does REPLY to a RELEASE with a packet
  including a DHCPv6 Status Code (13) success option to confirm the release.
  edk2/ovmf starts the TFTP transfer of the NBP immediately after receiving
  this reply.
  * OVN does not respond with a REPLY to the client's RELEASE. In the traffic
  capture we can see that the client repeats the RELEASE several times, but
  finally gives up and raises an error:

  >>Start PXE over IPv6..
Station IP address is FC01:0:0:0:0:0:0:206
Server IP address is FC00:0:0:0:0:0:0:1
NBP filename is snponly.efi
NBP filesize is 0 Bytes
PXE-E53: No boot filename received.

  --
  FAILING - sequence on OVN
  --
  No.   TimeSource  Destination ProtocolLength  Info
  1 0.00fe80::f816:3eff:fe6f:a0ab   ::  ICMPv6  118 
Router Advertisement from fa:16:3e:6f:a0:ab
  2 51.931422   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  177 
Solicit XID: 0x4f04ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 
  3 51.931840   fe80::f816:3eff:feeb:b176   fe80::5054:ff:feb1:a5b0 
DHCPv6  198 Advertise XID: 0x4f04ed CID: 
000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  4 56.900421   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  219 
Request XID: 0x5004ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  5 56.900726   fe80::f816:3eff:feeb:b176   fe80::5054:ff:feb1:a5b0 
DHCPv6  198 Reply XID: 0x5004ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 
IAA: fc01::2ad 
  6 68.861979   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  7 69.900715   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  8 72.900784   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  9 77.900774   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  10    86.900759   fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 
  11    103.900786  fe80::5054:ff:feb1:a5b0 ff02::1:2   DHCPv6  152 
Release XID: 0x5104ed CID: 000430a25dc55972534aa516ff9c9f7c7ac4 IAA: fc01::2ad 

  
  --
  WORKING - sequence on neutron-dhcp-agent (dnsmasq)
  --
  No.   TimeSource  Destination ProtocolLength  Info
  1 0.00fe80::f816:3eff:fe38:eef0   ff02::1 ICMPv6  142 
Router Advertisement from fa:16:3e:38:ee:f0
  2 0.001102fe80::5054:ff:fed9:3d5c ff02::1:2   DHCPv6  116 
Solicit XID: 0x71d892 CID: 0004c9b0caa37bce994e85633d7572708047 
  3 0.001245fe80::f816:3eff:fef5:ef7a   fe80::5054:ff:fed9:3d5c 
DHCPv6  208 Advertise XID: 0x71d892 CID: 
0004c9b0caa37bce994e85633d7572708047 IAA: fc01::87 
  4 0.002436fe80::5054:ff:fed9:3d5c ff02::1:2   DHCPv6  162 
Request XID: 0x72d892 CID: 0004c9b0caa37bce994e85633d7572708047 IAA: fc01::87 
  5 0.002508fe80::f816:3eff:fef5:ef7a   fe80::5054:ff:fed9:3d5c 
DHCPv6  219 Reply XID: 0x72d892 CID: 0004c9b0caa37bce994e85633d7572708047 
IAA: fc01::87 
  6 3.130605fe80::5054:ff:fed9:3d5c ff02::1:2   DHCPv6  223 
Request XID: 0x73d892 CID: 0004c9b0caa37bce994e85633d7572708047 IAA: fc01::87 
  7 3.130791fe80::f816:3eff:fef5:ef7a   fe80::5054:ff:fed9:3d5c 
DHCPv6  256 Reply XID: 0x73d892 CID: 0004c9b0caa37bce994e85633d7572708047 
IAA: fc01::2a0 
  8 3.132060fe80::5054:ff:fed9:3d5c ff02::1:2   DHCPv6  156 
Release XID: 0x74d892 CID: 0004c9b0caa37bce994e85633d7572708047 IAA: fc01::87 
  9 3.132126fe80::f816:3eff:fef5:ef7a   fe80::5054:ff:fed9:3d5c 
DHCPv6  128 Reply XID: 0x74d892 CID: 

[Yahoo-eng-team] [Bug 1949606] Re: QEMU >= 5.0.0 with -accel tcg uses a tb-size of 1GB causing OOM issues in CI

2023-09-12 Thread yatin
Fixed with https://review.opendev.org/c/openstack/nova/+/868419

** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1949606

Title:
  QEMU >= 5.0.0 with -accel tcg uses a tb-size of 1GB  causing OOM
  issues in CI

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  This is a Nova tracker for a set of issues being seen in OpenStack CI
  jobs using QEMU >= 5.0.0, caused by the following change in defaults
  within QEMU:

  https://github.com/qemu/qemu/commit/600e17b26

  https://gitlab.com/qemu-project/qemu/-/issues/693

  At present most of the impacted jobs are being given an increased
  amount of swap with lower Tempest concurrency settings to avoid the
  issue, for example for CentOS 8 stream:

  https://review.opendev.org/c/openstack/devstack/+/803706

  https://review.opendev.org/c/openstack/tempest/+/797614

  Longer term a libvirt RFE has been raised to allow Nova to control the
  size of the cache:

  https://gitlab.com/libvirt/libvirt/-/issues/229

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1949606/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1998789] Re: PooledLDAPHandler.result3 does not release pool connection back when an exception is raised

2023-09-12 Thread Mustafa Kemal Gilor
** Also affects: keystone (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: keystone (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: keystone (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/zed
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/xena
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/yoga
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1998789

Title:
  PooledLDAPHandler.result3 does not release pool connection back when
  an exception is raised

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  New
Status in Ubuntu Cloud Archive xena series:
  New
Status in Ubuntu Cloud Archive yoga series:
  New
Status in Ubuntu Cloud Archive zed series:
  New
Status in OpenStack Identity (keystone):
  Fix Released
Status in keystone package in Ubuntu:
  New
Status in keystone source package in Focal:
  New
Status in keystone source package in Jammy:
  New

Bug description:
  This is a follow-up issue for LP#1896125.

  This problem happens when LDAP connection pooling is on (use_pool=True),
  page_size > 0, and pool_connection_timeout is less than the LDAP server's
  response time. The scenario is as follows:

  - A user tries to log in to a domain that is attached to an LDAP backend.
  - The LDAP server does not respond within `pool_connection_timeout` seconds, 
causing the LDAP connection to raise a ldap.TIMEOUT() exception
  - From now on, all subsequent LDAP requests will fail with 
ldappool.MaxConnectionReachedError

  
  An in-depth analysis explains why it happens:

  - LDAP query initiated for user login request with BaseLdap._ldap_get() 
function call, which grabs a connection with self.get_connection() and invokes 
conn.search_s()
  - conn.search_s() invokes conn._paged_search_s() since page_size is > 0
  - conn._paged_search_s() calls conn.search_ext() 
(PooledLDAPHandler.search_ext) method 
  - conn.search_ext() initiates an asynchronous LDAP request and returns an 
AsynchronousMessage object to the _paged_search_s(), representing the request.
  - conn._paged_search_s() tries to obtain asynchronous LDAP request results 
via calling conn.result3() (PooledLDAPHandler.result3) 
  - conn.result3() calls message.connection.result3()
  - the server cannot respond in pool_connection_timeout seconds, 
  - message.connection.result3() raises a ldap.TIMEOUT(), which causes the 
subsequent connection-release call, message.clean(), to never run
  - the connection is kept active forever, so subsequent requests can no 
longer use it (see the sketch below)
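
  A minimal sketch of the fix pattern (hypothetical names; the actual
  keystone change may differ): release the pooled connection even when the
  underlying result3() call raises:

  ```
  # Sketch only -- not the actual keystone fix. The point is that
  # message.clean(), which returns the pooled connection, must run even if
  # result3() raises (e.g. ldap.TIMEOUT); otherwise the connection leaks and
  # the pool eventually raises ldappool.MaxConnectionReachedError.
  class PooledLDAPHandlerSketch:
      conn_timeout = 3  # mirrors pool_connection_timeout

      def result3(self, message, all=1, resultsfilter=None):
          try:
              return message.connection.result3(message.id, all,
                                                self.conn_timeout)
          finally:
              message.clean()  # always hand the connection back to the pool
  ```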

  Reproducer:

  - Deploy an LDAP server of your choice
  - Fill it with enough data that a search takes more than 
`pool_connection_timeout` seconds
  - Define a keystone domain with the LDAP driver with following options:

  [ldap]
  use_pool = True
  page_size = 100
  pool_connection_timeout = 3
  pool_retry_max = 3
  pool_size = 10
   
  - Point the domain to the LDAP server
  - Try to log in to the OpenStack dashboard, or do anything else that uses 
the LDAP user
  - Observe the /var/log/apache2/keystone_error.log, it should contain 
ldap.TIMEOUT() stack traces followed by `ldappool.MaxConnectionReachedError` 
stack traces

  Known workarounds:

  - Disable LDAP pooling by setting use_pool=False
  - Set page_size to 0

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1998789/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2012731] Re: [CI] "neutron-ovs-grenade-multinode-skip-level" and "neutron-ovn-grenade-multinode-skip-level" failing always

2023-09-12 Thread yatin
The required patch merged a while back and the jobs are green, so closing this:
* 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-grenade-multinode-skip-level=0
* 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovn-grenade-multinode-skip-level=0

** Changed in: neutron
   Status: New => Fix Released

** Changed in: neutron
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2012731

Title:
  [CI] "neutron-ovs-grenade-multinode-skip-level" and "neutron-ovn-
  grenade-multinode-skip-level" failing always

Status in neutron:
  Fix Released

Bug description:
  Logs:
  * 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovs-grenade-multinode-skip-level=0
  * 
https://zuul.opendev.org/t/openstack/builds?job_name=neutron-ovn-grenade-multinode-skip-level=0

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2012731/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp