[Yahoo-eng-team] [Bug 2038646] Re: [RBAC] Update "subnet" policies

2023-10-10 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/897540
Committed: 
https://opendev.org/openstack/neutron/commit/294e1c60b41d3422bb830758e2ea6b6cf554ac46
Submitter: "Zuul (22348)"
Branch: master

commit 294e1c60b41d3422bb830758e2ea6b6cf554ac46
Author: Rodolfo Alonso Hernandez 
Date:   Thu Oct 5 19:32:32 2023 +

[RBAC] Update the subnet policies

* get_subnet: the network owner can retrieve the subnet too.
* update_subnet: any project member can update the subnet.
* delete_subnet: any project member can delete the subnet.

Closes-Bug: #2038646
Change-Id: Iae2e3a31eb65d68dc0d3d0f9dd9fc8cf83260769


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038646

Title:
  [RBAC] Update "subnet" policies

Status in neutron:
  Fix Released

Bug description:
  * "get_subnet"
  Currently only the admin or a project reader can get a subnet. However, it
doesn't make sense that the network owner can create the subnet [1] but cannot
list it.

  * "update_subnet"
  Currently only the admin and the network owner can modify the subnet. Any
project member should be able to do so as well.

  * "delete_subnet"
  Same argument as for "update_subnet".

  
[1] https://github.com/openstack/neutron/blob/8cba97016e421e4b01b96de70b4b194972d0186f/neutron/conf/policies/subnet.py#L42-L43
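  In effect, the requested defaults amount to the following access rules (a
minimal Python sketch of the intent, NOT Neutron's actual oslo.policy engine;
the context/subnet/network dicts are simplified stand-ins):

```python
# Illustrative only: simplified stand-in for the RBAC checks described above.

def can_get_subnet(ctx, subnet, network):
    """After the fix: admin, a project reader, or the network owner may read."""
    return (
        ctx["is_admin"]
        or ("reader" in ctx["roles"] and ctx["project_id"] == subnet["project_id"])
        or ctx["project_id"] == network["project_id"]  # new: network owner too
    )

def can_modify_subnet(ctx, subnet):
    """After the fix: any member of the owning project may update/delete."""
    return ctx["is_admin"] or (
        "member" in ctx["roles"] and ctx["project_id"] == subnet["project_id"]
    )
```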

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038646/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 2038978] [NEW] [OVN] ARP + Floating IP issues

2023-10-10 Thread Mohammed Naser
Public bug reported:

When using OVN, if you have a virtual router whose gateway is in subnet A
and a port with a floating IP attached to it from subnet B, the floating
IP seems not to be reachable.

https://mail.openvswitch.org/pipermail/ovs-dev/2021-July/385253.html

A fix for this was brought into OVN not long ago; it introduces an option,
`options:add_route`, which can be set to `true`.

see: https://mail.openvswitch.org/pipermail/ovs-dev/2021-July/385255.html

I think we should set this option in order to mirror the behaviour of
ML2/OVS, which installs scope-link routes.
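If adopted, the Neutron OVN driver would presumably set the option on the
NAT entry backing the floating IP. A minimal sketch of the command that
enables it for one NAT row (helper name and plumbing are hypothetical; the
option name comes from the OVN patch linked above):

```python
# Hypothetical helper: builds the ovn-nbctl invocation that turns on the
# new OVN behaviour for a single NAT (floating IP) entry.
def enable_fip_route_cmd(nat_uuid):
    return ["ovn-nbctl", "set", "NAT", nat_uuid, "options:add_route=true"]
```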

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038978

Title:
  [OVN] ARP + Floating IP issues

Status in neutron:
  New

Bug description:
  When using OVN, if you have a virtual router whose gateway is in subnet A
  and a port with a floating IP attached to it from subnet B, the floating
  IP seems not to be reachable.

  https://mail.openvswitch.org/pipermail/ovs-dev/2021-July/385253.html

  A fix for this was brought into OVN not long ago; it introduces an option,
  `options:add_route`, which can be set to `true`.

  see: https://mail.openvswitch.org/pipermail/ovs-dev/2021-July/385255.html

  I think we should set this option in order to mirror the behaviour of
  ML2/OVS, which installs scope-link routes.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038978/+subscriptions




[Yahoo-eng-team] [Bug 1939733] Fix included in openstack/neutron train-eol

2023-10-10 Thread OpenStack Infra
This issue was fixed in the openstack/neutron train-eol release.

** Changed in: cloud-archive/train
   Status: Fix Committed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1939733

Title:
  [OSSA-2021-005] Arbitrary dnsmasq reconfiguration via extra_dhcp_opts
  (CVE-2021-40085)

Status in Ubuntu Cloud Archive:
  New
Status in Ubuntu Cloud Archive queens series:
  Fix Released
Status in Ubuntu Cloud Archive rocky series:
  Fix Released
Status in Ubuntu Cloud Archive stein series:
  Fix Released
Status in Ubuntu Cloud Archive train series:
  Fix Released
Status in Ubuntu Cloud Archive ussuri series:
  Fix Committed
Status in Ubuntu Cloud Archive victoria series:
  Fix Committed
Status in Ubuntu Cloud Archive wallaby series:
  Fix Committed
Status in Ubuntu Cloud Archive xena series:
  New
Status in neutron:
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released
Status in neutron package in Ubuntu:
  Fix Released
Status in neutron source package in Bionic:
  New
Status in neutron source package in Focal:
  Fix Released
Status in neutron source package in Hirsute:
  Won't Fix
Status in neutron source package in Impish:
  Fix Released

Bug description:
  The application doesn't check the input values of the extra_dhcp_opts
  port parameter, allowing a user to include a newline character. The
  values from extra_dhcp_opts are used to render the opts file that is
  passed to dnsmasq as a dhcp-optsfile. As a result, an attacker can
  inject arbitrary options into that file.

  The main direct impact, in my opinion, is that an attacker can push
  arbitrary DHCP options to other instances connected to the same
  network. And because we are able to modify our own port connected to an
  external network, it is possible to push DHCP options to the instances
  of other tenants using the same external network.

  Going further, there is a known buffer overflow vulnerability in dnsmasq
  (https://thekelleys.org.uk/gitweb/?p=dnsmasq.git;a=commitdiff;h=7d04e17444793a840f98a0283968b96502b112dc)
  which was not treated as a security issue because an attacker cannot
  control DHCP opts in most cases, and therefore the vulnerability still
  exists in most distributions (e.g. Ubuntu 20.04.1). In our case, DHCP
  opts are exactly what the attacker can modify, so we can trigger the
  buffer overflow. I even managed to write an exploit that leads to remote
  code execution using this vulnerability.

  Here the payload to crash dnsmasq as a proof of concept:
  ```
  PUT /v2.0/ports/9db67e0f-537c-494a-a655-c8a0c518d57e HTTP/1.1
  Host: openstack
  X-Auth-Token: TOKEN
  Content-Type: application/json
  Content-Length: 170

  {"port":{
  "extra_dhcp_opts":[{"opt_name":"zzz",
  "opt_value":"xxx\n128,aa:bb\n120,aa.cc\n128,:"
  }]}}
  ```

  Tested on ocata, train and victoria versions.
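  The straightforward mitigation is to reject control characters in the
option value before it reaches the rendered opts file. A minimal sketch
(illustrative only, not Neutron's actual validator):

```python
def validate_dhcp_opt_value(value):
    """Reject values that could break out of a dnsmasq dhcp-optsfile line."""
    if "\n" in value or "\r" in value:
        raise ValueError("newline characters are not allowed in extra_dhcp_opts")
    return value
```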

  The vulnerability was found by Pavel Toporkov.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-archive/+bug/1939733/+subscriptions




[Yahoo-eng-team] [Bug 2038936] [NEW] [fwaas] CI job "neutron-fwaas-v2-dsvm-tempest-multinode" failing

2023-10-10 Thread Rodolfo Alonso
Public bug reported:

Error:
https://zuul.opendev.org/t/openstack/build/74f05799b17340f5824a652e1d1ecbfb

Snippet: https://paste.opendev.org/show/b3HfOkty2EIeaG1UyCI5/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038936

Title:
  [fwaas] CI job "neutron-fwaas-v2-dsvm-tempest-multinode" failing

Status in neutron:
  New

Bug description:
  Error:
  https://zuul.opendev.org/t/openstack/build/74f05799b17340f5824a652e1d1ecbfb

  Snippet: https://paste.opendev.org/show/b3HfOkty2EIeaG1UyCI5/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038936/+subscriptions




[Yahoo-eng-team] [Bug 2038931] [NEW] ovsfw: OVS br-int rule disappears from the table=60 after stop/start VM

2023-10-10 Thread Anton Kurbatov
Public bug reported:

I found out that after VM creation and after a VM stop/start the set of
OVS rules in br-int table=60 (TRANSIENT_TABLE) is different.

I have a flat network and create a VM in it. After the VM stop/start,
the set of rules in table 60 for this VM differs from the one present
right after the VM was created.

Here is a demo:

[root@devstack0 ~]# openstack server create test-vm --image cirros-0.6.2-x86_64-disk --network public --flavor m1.tiny -c id
+-------+--------------------------------------+
| Field | Value                                |
+-------+--------------------------------------+
| id    | 84c7ed9c-c78e-4d15-8a09-6eb18b0f872a |
+-------+--------------------------------------+
[root@devstack0 ~]# openstack port list --device-id 84c7ed9c-c78e-4d15-8a09-6eb18b0f872a -c ID -c mac_address
+--------------------------------------+-------------------+
| ID                                   | MAC Address       |
+--------------------------------------+-------------------+
| 4fd0022b-223d-43ac-9134-1623b38ee2a6 | fa:16:3e:4b:db:3e |
+--------------------------------------+-------------------+
[root@devstack0 ~]#


Table 60: two rules with dl_dst=fa:16:3e:4b:db:3e after VM is created:

[root@devstack0 neutron]# ovs-ofctl dump-flows br-int table=60 | grep fa:16:3e:4b:db:3e
 cookie=0x1a51dc2aa3392248, duration=23.420s, table=60, n_packets=0, n_bytes=0, idle_age=1961, priority=90,vlan_tci=0x0000/0x1fff,dl_dst=fa:16:3e:4b:db:3e actions=load:0x1c->NXM_NX_REG5[],load:0x2->NXM_NX_REG6[],resubmit(,81)
 cookie=0x1a51dc2aa3392248, duration=23.420s, table=60, n_packets=25, n_bytes=2450, idle_age=678, priority=90,dl_vlan=2,dl_dst=fa:16:3e:4b:db:3e actions=load:0x1c->NXM_NX_REG5[],load:0x2->NXM_NX_REG6[],strip_vlan,resubmit(,81)
[root@devstack0 neutron]#


Stop/start the VM and check it again:

[root@devstack0 ~]# openstack server stop test-vm
[root@devstack0 ~]# openstack server start test-vm
[root@devstack0 ~]#
[root@devstack0 neutron]# ovs-ofctl dump-flows br-int table=60 | grep fa:16:3e:4b:db:3e
 cookie=0x1a51dc2aa3392248, duration=14.201s, table=60, n_packets=25, n_bytes=2450, idle_age=697, priority=90,dl_vlan=2,dl_dst=fa:16:3e:4b:db:3e actions=load:0x1d->NXM_NX_REG5[],load:0x2->NXM_NX_REG6[],strip_vlan,resubmit(,81)
[root@devstack0 neutron]#

You can see that the rule [1] has disappeared.

And there is a neutron-openvswitch-agent message 'Initializing port
<port_id> that was already initialized' while the VM is starting:

Oct 10 08:50:05 devstack0 neutron-openvswitch-agent[232791]: INFO neutron.agent.securitygroups_rpc [None req-df876af2-5007-42ae-ae4e-8c968f59fb5c None None] Preparing filters for devices {'4fd0022b-223d-43ac-9134-1623b38ee2a6'}
Oct 10 08:50:05 devstack0 neutron-openvswitch-agent[232791]: INFO neutron.agent.linux.openvswitch_firewall.firewall [None req-df876af2-5007-42ae-ae4e-8c968f59fb5c None None] Initializing port 4fd0022b-223d-43ac-9134-1623b38ee2a6 that was already initialized.

I get this behavior on devstack with neutron from the master branch.

It looks like this rule disappears because the OVS interface under the OVS
port is recreated after the VM stop/start, and the new OFPort object is
created with network_type=None (as well as physical_network=None). Compare
with a few lines above, where the OFPort object is created with
network_type/physical_network set [2].

I actually discovered this behavior while testing my neutron port-check
plugin [3]:

[root@devstack0 ~]# openstack port check 4fd0022b-223d-43ac-9134-1623b38ee2a6 -c firewall
+----------+-----------------------------------------------------------------------------------------------------------------------------------------+
| Field    | Value                                                                                                                                   |
+----------+-----------------------------------------------------------------------------------------------------------------------------------------+
| firewall | - No flow: table=60, priority=90,vlan_tci=(0, 8191),eth_dst=fa:16:3e:4b:db:3e actions=set_field:29->reg5,set_field:2->reg6,resubmit(,81) |
+----------+-----------------------------------------------------------------------------------------------------------------------------------------+
[root@devstack0 ~]#

[1] https://opendev.org/openstack/neutron/src/commit/78027da56ccb25d19ac2c3bc1c174acb2150e6a5/neutron/agent/linux/openvswitch_firewall/firewall.py#L915
[2] https://opendev.org/openstack/neutron/src/commit/78027da56ccb25d19ac2c3bc1c174acb2150e6a5/neutron/agent/linux/openvswitch_firewall/firewall.py#L724
[3] https://github.com/antonkurbatov/neutron-portcheck
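The effect can be sketched as follows: when the OFPort is rebuilt without
its network_type, the branch that emits the extra untagged (vlan_tci) rule
is skipped, so table 60 ends up with one flow instead of two. Function and
field names below are illustrative, not Neutron's actual code:

```python
def transient_table_flows(mac, vlan_tag, network_type):
    # The tagged-traffic rule is always installed.
    flows = [f"table=60,priority=90,dl_vlan={vlan_tag},dl_dst={mac}"]
    if network_type == "flat":
        # Extra rule matching untagged traffic; skipped when the recreated
        # OFPort carries network_type=None - this is the disappearing flow.
        flows.append(f"table=60,priority=90,vlan_tci=0x0000/0x1fff,dl_dst={mac}")
    return flows
```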

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2038931

Title:
  ovsfw: OVS br-int rule disappears from the table=60 after stop/start
  VM

Status in neutron:
 

[Yahoo-eng-team] [Bug 1889655] Re: removeSecurityGroup action returns 500 when there are multiple security groups with the same name

2023-10-10 Thread Pavlo Shchelokovskyy
** Changed in: nova
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1889655

Title:
  removeSecurityGroup action returns 500 when there are multiple
  security groups with the same name

Status in OpenStack Compute (nova):
  Fix Released

Bug description:
  According to the OpenStack Compute API reference, a security group name
  can be supplied in the request to remove a security group from a server.

  Nova correctly handles the case of adding a security group to a server
  when there are multiple security groups with the requested name, and
  returns HTTP 409 Conflict.

  However, it fails in the same scenario when removing a security group
  from the server (for example, when a security group with a duplicate
  name was added after the server was created), returning HTTP 500.

  Reproduction script for current DevStack/master:

  #!/usr/bin/env bash
  set -ex
  # repro on DevStack
  export OS_CLOUD=devstack
  TOKEN=$(openstack token issue -f value -c id)
  # openstackclient catalog list/show are not very bash-friendly, only with jq :-/
  computeapi=$(openstack catalog show compute | grep public | awk '{print $4}')
  # adjust image, flavor and network to your liking
  serverid=$(openstack server create dummy --image cirros-0.5.1-x86_64-disk --flavor m1.nano --network private -f value -c id)
  openstack security group create dummy
  openstack server add security group dummy dummy
  openstack security group create dummy
  # smart clients (openstackclient, openstacksdk) use some sort of pre-validation
  # or name-to-id resolving first, so using raw curl to demonstrate.
  curl -g -i --cacert "/opt/stack/data/ca-bundle.pem" \
  -X POST $computeapi/servers/$serverid/action \
  -d '{"removeSecurityGroup":{"name":"dummy"}}' \
  -H "Content-Type: application/json" \
  -H "X-Auth-Token: $TOKEN"

  
  The last command returns:
  {"computeFault": {"code": 500, "message": "Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible.\n"}}

  The reason is that the logic handling such a conflict was added to the
  code for adding a security group, but not to the removal code; see
  `nova/network/security_group_api.py`, method `add_to_instance`
  https://opendev.org/openstack/nova/src/commit/2f3a380c3c081fb022c8a2dcfdcc365733161cac/nova/network/security_group_api.py#L611-L618
  vs `remove_from_instance`
  https://opendev.org/openstack/nova/src/commit/2f3a380c3c081fb022c8a2dcfdcc365733161cac/nova/network/security_group_api.py#L674-L679

  The latter does not handle the NeutronClientNoUniqueMatch exception.
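  The shape of the fix would mirror the try/except already present in
`add_to_instance`. A minimal sketch, with stand-in exception classes
(illustrative only, not nova's actual code):

```python
class NeutronClientNoUniqueMatch(Exception):
    """Stand-in for neutronclient's ambiguous-name exception."""

class HTTPConflict(Exception):
    """Stand-in for the 409 response nova should return."""

def remove_security_group(find_group_id, server_id, name):
    try:
        group_id = find_group_id(name)
    except NeutronClientNoUniqueMatch as exc:
        # Translate the ambiguity to 409 Conflict instead of crashing to 500,
        # mirroring what add_to_instance already does.
        raise HTTPConflict(str(exc))
    return (server_id, group_id)
```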

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1889655/+subscriptions




[Yahoo-eng-team] [Bug 2038898] [NEW] image format change during migration is not reflected in libvirt XML

2023-10-10 Thread Pavlo Shchelokovskyy
Public bug reported:

Discovered in an environment that was configured with

[libvirt]
images_type = raw

only, the other relevant options were at their defaults (use_cow_images
= True, force_raw_images = False).

Symptom: the instances were non-responsive and not running after cold migration (e.g. no console log at all); live migration works fine.
Workaround: setting use_cow_images=False and force_raw_images=True solved the problem.

Reproduction on a current multinode devstack:

1. Configure computes as described above - set [libvirt]images_type = raw, leave the rest per default devstack / nova settings.
2. Create a raw image in Glance.
3. Boot an instance from that raw image.
4. Inspect the image on the file system - the image is in fact raw.
5. Cold-migrate the server.
6. Migration finishes successfully, instance is reported as up and running on the new host - but in fact it has completely failed to start (not accessible, no console log, nothing).
7. If you check the image file nova uses on the new compute - it is now qcow2, not raw.
8. But the libvirt XML of the instance still defines the disk as raw!

Oct 09 12:15:35 pshchelo-devstack-jammy nova-compute[427994]: (libvirt <disk> XML declaring the driver type as "raw" - markup stripped by the plain-text mail conversion)

Stopping the instance and manually converting the disk back to raw
allows instance to start properly.

I tracked it down to this place in the finish_migration method:

https://opendev.org/openstack/nova/src/branch/stable/2023.2/nova/virt/libvirt/driver.py#L11739

    if (disk_name != 'disk.config' and
            info['type'] == 'raw' and CONF.use_cow_images):
        self._disk_raw_to_qcow2(info['path'])

Effectively, nova changes the disk format but does not update the XML
to reflect the actual new format, and thus the instance fails to start.
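The shape of a fix might be to record the new format wherever the guest XML
is later generated from. A sketch of the idea (the dict stands in for nova's
disk-info structure; names and the callback are assumptions, not nova's code):

```python
def maybe_convert_to_qcow2(info, disk_name, use_cow_images, convert):
    """Convert a raw disk and keep the recorded format in sync with the file."""
    if disk_name != "disk.config" and info["type"] == "raw" and use_cow_images:
        convert(info["path"])
        info["type"] = "qcow2"  # the missing step: what the XML should declare
    return info
```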

** Affects: nova
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2038898

Title:
  image format change during migration is not reflected in libvirt XML

Status in OpenStack Compute (nova):
  New

Bug description:
  Discovered in an environment that was configured with

  [libvirt]
  images_type = raw

  only, the other relevant options were at their defaults
  (use_cow_images = True, force_raw_images = False).

  Symptom: the instances were non-responsive and not running after cold migration (e.g. no console log at all); live migration works fine.
  Workaround: setting use_cow_images=False and force_raw_images=True solved the problem.

  Reproduction on a current multinode devstack:

  1. Configure computes as described above - set [libvirt]images_type = raw, leave the rest per default devstack / nova settings.
  2. Create a raw image in Glance.
  3. Boot an instance from that raw image.
  4. Inspect the image on the file system - the image is in fact raw.
  5. Cold-migrate the server.
  6. Migration finishes successfully, instance is reported as up and running on the new host - but in fact it has completely failed to start (not accessible, no console log, nothing).
  7. If you check the image file nova uses on the new compute - it is now qcow2, not raw.
  8. But the libvirt XML of the instance still defines the disk as raw!

  Oct 09 12:15:35 pshchelo-devstack-jammy nova-compute[427994]: (libvirt <disk> XML declaring the driver type as "raw" - markup stripped by the plain-text mail conversion)

  Stopping the instance and manually converting the disk back to raw
  allows instance to start properly.

  I tracked it down to this place in the finish_migration method:

  https://opendev.org/openstack/nova/src/branch/stable/2023.2/nova/virt/libvirt/driver.py#L11739

      if (disk_name != 'disk.config' and
              info['type'] == 'raw' and CONF.use_cow_images):
          self._disk_raw_to_qcow2(info['path'])

  Effectively, nova changes the disk format but does not update the XML
  to reflect the actual new format, and thus the instance fails to start.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2038898/+subscriptions

