[Yahoo-eng-team] [Bug 2037717] Re: [OVN] ``PortBindingChassisEvent`` event is not executing the conditions check

2024-03-01 Thread Brian Haley
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Jammy)
   Status: New => Fix Released

** Also affects: cloud-archive
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/wallaby
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2037717

Title:
  [OVN] ``PortBindingChassisEvent`` event is not executing the
  conditions check

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  Fix Released

Bug description:
  Since [1], which overrides the "match_fn" method, the event no longer
  checks the conditions defined at initialization, namely:
    ('type', '=', ovn_const.OVN_CHASSIS_REDIRECT)

  [1]https://review.opendev.org/q/I3b7c5d73d2b0d20fb06527ade30af8939b249d75

  Related bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=2241824
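
  For illustration, the following is a minimal, self-contained sketch (these
  are not the actual neutron/ovsdbapp classes; all names are illustrative) of
  how overriding "match_fn" can silently bypass the conditions declared at
  initialization:

```
class RowEventBase:
    """Toy stand-in for an OVSDB row event with declared match conditions."""

    def __init__(self, conditions):
        self.conditions = conditions  # e.g. (('type', '=', 'chassisredirect'),)

    def match_fn(self, event, row, old=None):
        # Default behaviour: every declared condition must hold for the row.
        for column, op, value in self.conditions:
            if op == '=' and getattr(row, column, None) != value:
                return False
        return True


class PortBindingChassisEventSketch(RowEventBase):
    def __init__(self):
        super().__init__(conditions=(('type', '=', 'chassisredirect'),))

    def match_fn(self, event, row, old=None):
        # Overriding match_fn without calling the parent (or re-checking
        # self.conditions) means the 'type' filter above is never applied.
        return getattr(old, 'chassis', None) != getattr(row, 'chassis', None)


class Row:
    def __init__(self, **kwargs):
        self.__dict__.update(kwargs)


event = PortBindingChassisEventSketch()
old = Row(type='patch', chassis='host-1')   # not a chassisredirect port
new = Row(type='patch', chassis='host-2')
print(event.match_fn('update', new, old))   # True: the type condition was bypassed
```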

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/2037717/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2041611] Re: GET /v3/domains returns all domains even in domain scope

2024-03-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/keystone/+/900028
Committed: https://opendev.org/openstack/keystone/commit/dd785ee692118a56ea0e3aaaf7f5bd6c73ea9c91
Submitter: "Zuul (22348)"
Branch:    master

commit dd785ee692118a56ea0e3aaaf7f5bd6c73ea9c91
Author: Markus Hentsch 
Date:   Fri Nov 3 10:43:34 2023 +0100

Add domain scoping to list_domains

Introduces domain-scoped filtering of the response list of the
list_domains endpoint when the user is authenticated in domain scope
instead of returning all domains. This aligns the implementation with
other endpoints like list_projects or list_groups and allows for a
domain-scoped reader role.
Changes the default policy rule for identity:list_domains to
incorporate this new behavior for the reader role.

Closes-Bug: 2041611
Change-Id: I8ee50efc3b4850060cce840fc904bae17f1503a9
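
A conceptual sketch of what the fix does (not the actual keystone
implementation; the function and data below are made up for illustration):
when the request carries a domain-scoped token, the returned list is
restricted to that domain, mirroring what list_projects and list_groups
already do.

```
def filter_domains_for_scope(domains, token_domain_id=None):
    """Return only the domains visible for the given scope.

    domains:         iterable of dicts with an 'id' key
    token_domain_id: domain id from a domain-scoped token, or None for
                     system/admin scope (no filtering applied).
    """
    if token_domain_id is None:
        return list(domains)
    return [d for d in domains if d['id'] == token_domain_id]


all_domains = [
    {'id': 'default', 'name': 'Default'},
    {'id': '1a1a793377464131a2744e27fec9bcdf', 'name': 'domain2'},
    {'id': '449167ed506c43cea43b997a1f345606', 'name': 'domain3'},
]
# A domain2-scoped token should now only see its own domain:
print(filter_domains_for_scope(all_domains, '1a1a793377464131a2744e27fec9bcdf'))
```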


** Changed in: keystone
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2041611

Title:
  GET /v3/domains returns all domains even in domain scope

Status in OpenStack Identity (keystone):
  Fix Released

Bug description:
  ## Summary

  The domain list returned by the `GET /v3/domains` endpoint is not filtered
  when it is accessed with domain-scoped authentication; instead, all domains
  are returned.
  If domain names are related to tenants/customers, any policy model that
  allows tenants to list domains will expose other tenants' identities.

  In contrast, endpoints like `GET /v3/projects` and `GET /v3/groups`
  implement proper domain scoping. For a technical analysis of how those
  endpoints achieve this, see:
  https://github.com/SovereignCloudStack/issues/issues/446#issuecomment-1775095749

  ## Steps to reproduce

  The following steps have been recorded using an unmodified DevStack
  environment.

  First consider the following adjustment to
  `/etc/keystone/policy.yaml`:

  ```
  identity:list_domains: role:member or rule:admin_required
  ```

  ... so that users with the `member` role may access `GET /v3/domains`
  for illustration purposes.

  Next, create additional domains and a domain member:

  ```
  openstack domain create domain2
  openstack domain create domain3
  openstack user create --domain domain2 --password "foobar123%" domain2-user
  openstack role add --user domain2-user --domain domain2 member
  ```

  Finally, create an openrc file for the domain member to have it issue
  a domain-scoped token:

  ```
  source stackrc
  export OS_REGION_NAME=RegionOne
  export OS_AUTH_URL=http://$HOST_IP/identity
  export OS_IDENTITY_API_VERSION=3
  export OS_USERNAME=domain2-user
  export OS_AUTH_TYPE=password
  export OS_USER_DOMAIN_NAME=domain2
  export OS_DOMAIN_NAME=domain2
  export OS_PASSWORD=foobar123%
  unset OS_PROJECT_NAME  
  unset OS_TENANT_NAME
  unset OS_PROJECT_DOMAIN_NAME
  unset OS_PROJECT_DOMAIN_ID
  unset OS_USER_DOMAIN_ID
  ```

  (this example is based on a DevStack environment)

  Now the following happens when the domain member user is accessing the
  domain list:

  ```
  $ source domain-member.openrc

  $ openstack domain list
  +----------------------------------+---------+---------+--------------------+
  | ID                               | Name    | Enabled | Description        |
  +----------------------------------+---------+---------+--------------------+
  | 1a1a793377464131a2744e27fec9bcdf | domain2 | True    |                    |
  | 449167ed506c43cea43b997a1f345606 | domain3 | True    |                    |
  | default                          | Default | True    | The default domain |
  +----------------------------------+---------+---------+--------------------+
  ```

  Although the token of the domain member user making the API request is
  strictly domain-scoped, all domains are returned.
  If domain names were somehow related to other tenants' identities, those
  identities would be exposed this way.

  ## Notes

  This is not an issue with Keystone's default policy configuration
  since only admins may access the `GET /v3/domains` endpoint at all and
  those have access to all domains anyway.

  This undesired behavior only becomes possible once `GET /v3/domains` is
  unlocked for other roles.

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2041611/+subscriptions




[Yahoo-eng-team] [Bug 1973347] Re: OVN revision_number infinite update loop

2024-03-01 Thread Edward Hope-Morley
** Also affects: neutron (Ubuntu)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Focal)
   Importance: Undecided
   Status: New

** Also affects: neutron (Ubuntu Jammy)
   Importance: Undecided
   Status: New

** Changed in: neutron (Ubuntu Jammy)
   Status: New => Fix Released

** Also affects: cloud-archive/ussuri
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/victoria
   Importance: Undecided
   Status: New

** Also affects: cloud-archive/wallaby
   Importance: Undecided
   Status: New

** Changed in: cloud-archive/wallaby
   Status: New => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973347

Title:
  OVN revision_number infinite update loop

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  Fix Released

Bug description:
  After the change described in
  https://mail.openvswitch.org/pipermail/ovs-dev/2022-May/393966.html
  was merged and released in stable OVN 22.03, it became possible to
  trigger an endless loop of revision_number updates in the external_ids
  of ports and router_ports. We have confirmed the bug in Ussuri and Yoga.
  When the problem happens, the Neutron log looks like this:

  2022-05-13 09:30:56.318 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4815
  2022-05-13 09:30:56.366 25 ... Running txn n=1 command(idx=0): CheckRevisionNumberCommand(...)
  2022-05-13 09:30:56.367 25 ... Running txn n=1 command(idx=1): SetLSwitchPortCommand(...)
  2022-05-13 09:30:56.367 25 ... Running txn n=1 command(idx=2): PgDelPortCommand(...)
  2022-05-13 09:30:56.467 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4815
  2022-05-13 09:30:56.880 25 ... Running txn n=1 command(idx=0): CheckRevisionNumberCommand(...)
  2022-05-13 09:30:56.881 25 ... Running txn n=1 command(idx=1): UpdateLRouterPortCommand(...)
  2022-05-13 09:30:56.881 25 ... Running txn n=1 command(idx=2): SetLRouterPortInLSwitchPortCommand(...)
  2022-05-13 09:30:56.984 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4816
  2022-05-13 09:30:57.057 25 ... Running txn n=1 command(idx=0): CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.057 25 ... Running txn n=1 command(idx=1): SetLSwitchPortCommand(...)
  2022-05-13 09:30:57.058 25 ... Running txn n=1 command(idx=2): PgDelPortCommand(...)
  2022-05-13 09:30:57.159 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4816
  2022-05-13 09:30:57.523 25 ... Running txn n=1 command(idx=0): CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.523 25 ... Running txn n=1 command(idx=1): UpdateLRouterPortCommand(...)
  2022-05-13 09:30:57.524 25 ... Running txn n=1 command(idx=2): SetLRouterPortInLSwitchPortCommand(...)
  2022-05-13 09:30:57.627 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4817
  2022-05-13 09:30:57.674 25 ... Running txn n=1 command(idx=0): CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.674 25 ... Running txn n=1 command(idx=1): SetLSwitchPortCommand(...)
  2022-05-13 09:30:57.675 25 ... Running txn n=1 command(idx=2): PgDelPortCommand(...)
  2022-05-13 09:30:57.765 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4817

  (full version here: https://pastebin.com/raw/NLP1b6Qm).

  In our lab environment we have confirmed that the problem is gone
  after the mentioned change is rolled back.
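
  The log pattern above can be read as the "ports" and "router_ports"
  maintenance paths re-triggering each other's revision bump. A toy sketch of
  that feedback loop (illustrative only, not neutron code; the loop is capped
  so the sketch terminates):

```
# Two resource types whose revision numbers chase each other, as in the log.
revisions = {'router_ports': 4815, 'ports': 4814}

def bump(resource_type):
    revisions[resource_type] += 1
    print('Successfully bumped revision number for type %s to %d'
          % (resource_type, revisions[resource_type]))

for _ in range(3):  # in the real bug this never converges
    if revisions['ports'] < revisions['router_ports']:
        bump('ports')            # ports catches up with router_ports ...
    if revisions['ports'] >= revisions['router_ports']:
        bump('router_ports')     # ... which immediately re-triggers a bump
```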

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1973347/+subscriptions




[Yahoo-eng-team] [Bug 1973347] Re: OVN revision_number infinite update loop

2024-03-01 Thread Brian Haley
** Also affects: cloud-archive
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1973347

Title:
  OVN revision_number infinite update loop

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive ussuri series:
  New
Status in Ubuntu Cloud Archive victoria series:
  New
Status in Ubuntu Cloud Archive wallaby series:
  Fix Released
Status in neutron:
  Fix Released
Status in neutron package in Ubuntu:
  New
Status in neutron source package in Focal:
  New
Status in neutron source package in Jammy:
  Fix Released

Bug description:
  After the change described in
  https://mail.openvswitch.org/pipermail/ovs-dev/2022-May/393966.html
  was merged and released in stable OVN 22.03, it became possible to
  trigger an endless loop of revision_number updates in the external_ids
  of ports and router_ports. We have confirmed the bug in Ussuri and Yoga.
  When the problem happens, the Neutron log looks like this:

  2022-05-13 09:30:56.318 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4815
  2022-05-13 09:30:56.366 25 ... Running txn n=1 command(idx=0): CheckRevisionNumberCommand(...)
  2022-05-13 09:30:56.367 25 ... Running txn n=1 command(idx=1): SetLSwitchPortCommand(...)
  2022-05-13 09:30:56.367 25 ... Running txn n=1 command(idx=2): PgDelPortCommand(...)
  2022-05-13 09:30:56.467 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4815
  2022-05-13 09:30:56.880 25 ... Running txn n=1 command(idx=0): CheckRevisionNumberCommand(...)
  2022-05-13 09:30:56.881 25 ... Running txn n=1 command(idx=1): UpdateLRouterPortCommand(...)
  2022-05-13 09:30:56.881 25 ... Running txn n=1 command(idx=2): SetLRouterPortInLSwitchPortCommand(...)
  2022-05-13 09:30:56.984 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4816
  2022-05-13 09:30:57.057 25 ... Running txn n=1 command(idx=0): CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.057 25 ... Running txn n=1 command(idx=1): SetLSwitchPortCommand(...)
  2022-05-13 09:30:57.058 25 ... Running txn n=1 command(idx=2): PgDelPortCommand(...)
  2022-05-13 09:30:57.159 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4816
  2022-05-13 09:30:57.523 25 ... Running txn n=1 command(idx=0): CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.523 25 ... Running txn n=1 command(idx=1): UpdateLRouterPortCommand(...)
  2022-05-13 09:30:57.524 25 ... Running txn n=1 command(idx=2): SetLRouterPortInLSwitchPortCommand(...)
  2022-05-13 09:30:57.627 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: router_ports) to 4817
  2022-05-13 09:30:57.674 25 ... Running txn n=1 command(idx=0): CheckRevisionNumberCommand(...)
  2022-05-13 09:30:57.674 25 ... Running txn n=1 command(idx=1): SetLSwitchPortCommand(...)
  2022-05-13 09:30:57.675 25 ... Running txn n=1 command(idx=2): PgDelPortCommand(...)
  2022-05-13 09:30:57.765 25 ... Successfully bumped revision number for resource 8af189bd-c5bf-48a9-b072-3fb6c69ae592 (type: ports) to 4817

  (full version here: https://pastebin.com/raw/NLP1b6Qm).

  In our lab environment we have confirmed that the problem is gone
  after the mentioned change is rolled back.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1973347/+subscriptions




[Yahoo-eng-team] [Bug 2055700] [NEW] server rebuild with reimage-boot-volume and is_volume_backed fails with BuildAbortException

2024-03-01 Thread Fabian Wiesel
Public bug reported:

Description
===

More specifically, the following tempest test fails on master:
tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server
It fails even with the patch from https://review.opendev.org/c/openstack/nova/+/910627 applied.

Technically, though, the issue should be unrelated to the driver
implementation, for the following reason:


`ComputeManager._rebuild_default_impl` first calls destroy on the VM in both branches:
- https://opendev.org/openstack/nova/src/branch/master/nova/compute/manager.py#L3695-L3701
In the case of a volume-backed VM with `reimage_boot_volume=True`, it then calls `ComputeManager._rebuild_volume_backed_instance`:
- https://opendev.org/openstack/nova/src/branch/master/nova/compute/manager.py#L3710-L3715
That function tries to detach the volume from the already-destroyed instance, which at least in the VMware driver raises an `InstanceNotFound`; I'd argue that is to be expected.
- https://opendev.org/openstack/nova/src/branch/master/nova/compute/manager.py#L3596-L3607
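
A minimal sketch of the ordering described above (this is not nova code; the
classes and the exception are stand-ins): the instance is destroyed first, and
the subsequent volume detach is attempted against the already-destroyed
instance, which a driver may reasonably report as not found.

```
class InstanceNotFound(Exception):
    """Stand-in for the driver exception mentioned above."""


class FakeDriver:
    def __init__(self):
        self.instances = {'vm-1'}

    def destroy(self, instance):
        self.instances.discard(instance)

    def detach_volume(self, instance, volume):
        if instance not in self.instances:
            raise InstanceNotFound(instance)


def rebuild_default_impl(driver, instance, volume, reimage_boot_volume):
    driver.destroy(instance)  # both branches destroy the VM first
    if reimage_boot_volume:
        # The volume-backed branch then tries to detach the boot volume
        # from an instance that no longer exists on the hypervisor.
        driver.detach_volume(instance, volume)


try:
    rebuild_default_impl(FakeDriver(), 'vm-1', 'vol-1', reimage_boot_volume=True)
except InstanceNotFound as exc:
    print('detach after destroy failed:', exc)
```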


Steps to reproduce
==
* Install Devstack from master
* Run tempest test 
`tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server`

Or as a bash script:
```
IMAGE=$(openstack image list -c ID -f value)
ID1=$(openstack server create --flavor 1 --image $IMAGE --boot-from-volume 1 rebuild-1 -c id -f value)
ID2=$(openstack server create --flavor 1 --image $IMAGE --boot-from-volume 1 rebuild-2 -c id -f value)
# Wait for servers to be ready

# Works
openstack server rebuild --os-compute-api-version 2.93 --image $IMAGE $ID1

# Fails
openstack server rebuild --os-compute-api-version 2.93 --reimage-boot-volume --image $IMAGE $ID1

```
Expected result
===
The test succeeds.

Actual result
=


Environment
===
1. Patch proposed in https://review.opendev.org/c/openstack/nova/+/909474
  +  Patch proposed in https://review.opendev.org/c/openstack/nova/+/910627

2. Which hypervisor did you use? What's the version of that?

vmwareapi (VSphere 7.0.3 & ESXi 7.0.3)

2. Which storage type did you use?

vmdk on NFS 4.1

3. Which networking type did you use?

networking-nsx-t (https://github.com/sapcc/networking-nsx-t)

Logs & Configs
==

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2055700

Title:
  server rebuild with reimage-boot-volume and is_volume_backed fails
  with BuildAbortException

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===

  More specifically, the following tempest test fails on master:
  tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server
  It fails even with the patch from https://review.opendev.org/c/openstack/nova/+/910627 applied.

  Technically, though, the issue should be unrelated to the driver
  implementation, for the following reason:

  
  `ComputeManager._rebuild_default_impl` first calls destroy on the VM in both branches:
  - https://opendev.org/openstack/nova/src/branch/master/nova/compute/manager.py#L3695-L3701
  In the case of a volume-backed VM with `reimage_boot_volume=True`, it then calls `ComputeManager._rebuild_volume_backed_instance`:
  - https://opendev.org/openstack/nova/src/branch/master/nova/compute/manager.py#L3710-L3715
  That function tries to detach the volume from the already-destroyed instance, which at least in the VMware driver raises an `InstanceNotFound`; I'd argue that is to be expected.
  - https://opendev.org/openstack/nova/src/branch/master/nova/compute/manager.py#L3596-L3607

  
  Steps to reproduce
  ==
  * Install Devstack from master
  * Run tempest test 
`tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server`

  Or as a bash script:
  ```
  IMAGE=$(openstack image list -c ID -f value)
  ID1=$(openstack server create --flavor 1 --image $IMAGE --boot-from-volume 1 rebuild-1 -c id -f value)
  ID2=$(openstack server create --flavor 1 --image $IMAGE --boot-from-volume 1 rebuild-2 -c id -f value)
  # Wait for servers to be ready

  # Works
  openstack server rebuild --os-compute-api-version 2.93 --image $IMAGE $ID1

  # Fails
  openstack server rebuild --os-compute-api-version 2.93 --reimage-boot-volume --image $IMAGE $ID1

  ```
  Expected result
  ===
  The test succeeds.

  Actual result
  =

  
  Environment
  ===
  1. Patch proposed in https://review.opendev.org/c/openstack/nova/+/909474
+  Patch proposed in https://review.opendev.org/c/openstack/nova/+/910627

  2. Which hypervisor did you use? What's the version of that?

  vmwareapi (VSphere 7.0.3 & ESXi 7.0.3)

  2. Which storage type did you use?

  vmdk on NFS 4.1

  3. Which networking type did you use?

  networking-nsx-t 

[Yahoo-eng-team] [Bug 2055245] Re: DHCP Option is not passed to VM via Cloud-init

2024-03-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/nova/+/910466
Committed: https://opendev.org/openstack/nova/commit/135af5230e6f30fe158a25f7888e52cf886a7a35
Submitter: "Zuul (22348)"
Branch:    master

commit 135af5230e6f30fe158a25f7888e52cf886a7a35
Author: Steven Blatzheim 
Date:   Wed Feb 28 07:25:59 2024 +0100

Fix nova-metadata-api for ovn dhcp native networks

With the change from ml2/ovs DHCP agents towards OVN implementation
in neutron there is no port with device_owner network:dhcp anymore.
Instead DHCP is provided by network:distributed port.

Closes-Bug: 2055245
Change-Id: Ibb569b9db1475b8bbd8f8722d49228182cd47f85


** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2055245

Title:
  DHCP Option is not passed to VM via Cloud-init

Status in neutron:
  Invalid
Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  Description
  ===

  The Nova metadata API doesn't provide the ipv4_dhcp type for OVN
  networks (native OVN DHCP feature, no DHCP agents) that have DHCP
  enabled but no default gateway.

  The problem seems to be in
  https://opendev.org/openstack/nova/src/branch/master/nova/network/neutron.py#L3617

  The code only makes an exception for networks without a
  device_owner network:dhcp port, in which case the default gateway is
  used, and that does not cover this case.
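
  A conceptual sketch of the distinction (not the actual nova/neutron code;
  the helper and constant names are illustrative): with OVN native DHCP the
  serving port has device_owner network:distributed rather than network:dhcp,
  so a check that only looks for the latter misses these networks.

```
# Hypothetical helper: decide how a DHCP-enabled subnet should be reported
# in network_data.json, accepting both DHCP-serving device owners.
DHCP_DEVICE_OWNERS = ('network:dhcp', 'network:distributed')

def network_type_for_subnet(subnet, ports):
    if subnet.get('enable_dhcp') and any(
            port.get('device_owner') in DHCP_DEVICE_OWNERS for port in ports):
        return 'ipv4_dhcp'
    return 'ipv4'


subnet = {'enable_dhcp': True, 'gateway_ip': None}   # no default gateway
ports = [{'device_owner': 'network:distributed'}]    # OVN native DHCP port
print(network_type_for_subnet(subnet, ports))        # -> 'ipv4_dhcp'
```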

  Steps to reproduce
  ==

  Create an OVN network in an environment where the native DHCP feature
  is provided by OVN (no ml2/ovs DHCP agents). In addition, this network
  must not have a default gateway.

  Create a VM in this network and observe the cloud-init process
  (network_data.json).

  Expected result
  ===

  network_data.json
  (http://169.254.169.254/openstack/2018-08-27/network_data.json) should
  return something like:

  {
"links": [
  {
"id": "tapddc91085-96",
"vif_id": "ddc91085-9650-4b7b-ad9d-b475bac8ec8b",
"type": "ovs",
"mtu": 1442,
"ethernet_mac_address": "fa:16:3e:93:49:fa"
  }
],
"networks": [
  {
"id": "network0",
"type": "ipv4_dhcp",
"link": "tapddc91085-96",
"network_id": "9f61a3a7-26d3-4013-b61d-12880b325ea9"
  }
],
"services": []
  }

  Actual result
  =

  {
"links": [
  {
"id": "tapddc91085-96",
"vif_id": "ddc91085-9650-4b7b-ad9d-b475bac8ec8b",
"type": "ovs",
"mtu": 1442,
"ethernet_mac_address": "fa:16:3e:93:49:fa"
  }
],
"networks": [
  {
"id": "network0",
"type": "ipv4",
"link": "tapddc91085-96",
"ip_address": "10.0.0.40",
"netmask": "255.255.255.0",
"routes": [],
"network_id": "9f61a3a7-26d3-4013-b61d-12880b325ea9",
"services": []
  }
],
"services": []
  }

  Environment
  ===

  OpenStack Zed with the Neutron OVN feature enabled

  Nova: 26.2.1

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2055245/+subscriptions




[Yahoo-eng-team] [Bug 2055561] [NEW] [stable-only][OVN] Configure VETH interface MAC address before setting UP the device

2024-03-01 Thread Rodolfo Alonso
Public bug reported:

In [1] a change was introduced, along with the implemented feature, to
set the MAC address of the metadata VETH interface before setting the
device UP. This change should be backported to the stable branches too.

[1]https://review.opendev.org/c/openstack/neutron/+/894026/12/neutron/agent/ovn/metadata/agent.py#710
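
A short sketch of the ordering the change enforces (this is not the neutron
agent code; the device name and MAC below are placeholders): configure the
MAC address while the VETH interface is still DOWN, and only then set the
device UP.

```
import subprocess

def configure_veth(device='veth-meta0', mac='fa:16:3e:00:00:01'):
    # 1) Set the MAC address while the interface is still DOWN.
    subprocess.run(['ip', 'link', 'set', 'dev', device, 'address', mac],
                   check=True)
    # 2) Only afterwards set the device UP.
    subprocess.run(['ip', 'link', 'set', 'dev', device, 'up'], check=True)
```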

** Affects: neutron
 Importance: Low
 Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
 Status: New

** Changed in: neutron
 Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Low

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2055561

Title:
  [stable-only][OVN] Configure VETH interface MAC address before setting
  UP the device

Status in neutron:
  New

Bug description:
  In [1] a change was introduced, along with the implemented feature, to
  set the MAC address of the metadata VETH interface before setting the
  device UP. This change should be backported to the stable branches too.

  
[1]https://review.opendev.org/c/openstack/neutron/+/894026/12/neutron/agent/ovn/metadata/agent.py#710

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2055561/+subscriptions




[Yahoo-eng-team] [Bug 2055173] Re: [netaddr>=1.0.0] Do not use netaddr.core.ZEROFILL flag with IPv6 addresses

2024-03-01 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron-lib/+/910331
Committed: https://opendev.org/openstack/neutron-lib/commit/3a9e39b9ef4d73cc60b534411bc5255e7d280bef
Submitter: "Zuul (22348)"
Branch:    master

commit 3a9e39b9ef4d73cc60b534411bc5255e7d280bef
Author: Rodolfo Alonso Hernandez 
Date:   Fri Feb 23 14:23:58 2024 +

[netaddr>=1.0.0] Do not use netaddr.core.ZEROFILL flag with IPv6

The flag "netaddr.core.ZEROFILL" cannot be used with IPv6 addresses
with netaddr>=1.0.0

Change-Id: I116ea2abbee13a73302ebca9c65707f427a7d9d0
Closes-Bug: #2055173


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2055173

Title:
  [netaddr>=1.0.0] Do not use netaddr.core.ZEROFILL flag with IPv6
  addresses

Status in neutron:
  Fix Released

Bug description:
  Using this flag with an IPv6 address raises the following exception:
  >>> netaddr.IPAddress("200a::1", flags=netaddr.core.ZEROFILL)
  Traceback (most recent call last):
File "", line 1, in 
File "/usr/local/lib/python3.10/dist-packages/netaddr/ip/__init__.py", line 
333, in __init__
  self._value = module.str_to_int(addr, flags)
File "/usr/local/lib/python3.10/dist-packages/netaddr/strategy/ipv4.py", 
line 120, in str_to_int
  addr = '.'.join(['%d' % int(i) for i in addr.split('.')])
File "/usr/local/lib/python3.10/dist-packages/netaddr/strategy/ipv4.py", 
line 120, in 
  addr = '.'.join(['%d' % int(i) for i in addr.split('.')])
  ValueError: invalid literal for int() with base 10: '200a::1'
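
  A minimal sketch of one way to avoid the failure (not necessarily the
  neutron-lib fix; the helper name is illustrative): only apply ZEROFILL when
  the address parses as IPv4, and fall back to a plain parse otherwise.

```
import netaddr

def to_ip(addr):
    """Parse an address, tolerating zero-filled IPv4 octets."""
    try:
        return netaddr.IPAddress(addr, flags=netaddr.core.ZEROFILL)
    except (ValueError, netaddr.AddrFormatError):
        # IPv6 (or otherwise non-zerofillable) input: parse without the flag.
        return netaddr.IPAddress(addr)


print(to_ip('010.000.000.001'))  # IPv4 with leading zeros, accepted via ZEROFILL
print(to_ip('200a::1'))          # IPv6, parsed without ZEROFILL
```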

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2055173/+subscriptions

