[Yahoo-eng-team] [Bug 2039285] [NEW] [neutron-fwaas] "neutron-fwaas-fullstack" job broken

2023-10-13 Thread Rodolfo Alonso
Public bug reported:

The experimental job "neutron-fwaas-fullstack" is broken right now.

Logs:
https://25f3b33717093a9f4f4e-a759d6b54561529b072782a6b0052389.ssl.cf5.rackcdn.com/896741/6/experimental/neutron-fwaas-fullstack/87a2cd7/testr_results.html

Error: https://paste.opendev.org/show/ba1P2MdMl5kXIsQd71Qu/

** Affects: neutron
 Importance: Low
 Status: New

** Changed in: neutron
   Importance: Undecided => Low


To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039285/+subscriptions




[Yahoo-eng-team] [Bug 2038655] Re: DHCP agent scheduler API extension should be supported by ML2/OVN backend

2023-10-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/897528
Committed: https://opendev.org/openstack/neutron/commit/f006d29251abe3c138ee0dea5b549496b66b8fa7
Submitter: "Zuul (22348)"
Branch: master

commit f006d29251abe3c138ee0dea5b549496b66b8fa7
Author: Slawek Kaplonski 
Date:   Fri Oct 6 10:58:57 2023 +0200

Add dhcpagentscheduler API extension to the ML2/OVN extensions

In most typical use cases the ML2/OVN backend doesn't need to run the
DHCP agent, as OVN provides DHCP functionality natively. But there are
some use cases, like baremetal provisioning over IPv6 or spine-leaf
architectures with DHCP relays, where the DHCP agent is necessary and
works perfectly fine with the ML2/OVN backend.
The problem was that the dhcpagentscheduler API extension wasn't listed
as supported by the OVN backend, so it was filtered out from the list of
supported extensions when the neutron server started. This broke the API
calls that add, remove and list networks on DHCP agents.

This patch adds the API extension to the list of extensions supported
by the OVN driver to fix that issue.

Depends-On: https://review.opendev.org/c/openstack/tempest/+/898090

Closes-bug: #2038655

Change-Id: I09a37ca451d44607b7dde344c93ace060c7bda01
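
The change itself is essentially a one-liner. A hypothetical sketch of
what it looks like (the file and variable names here are assumptions, not
taken from the patch; see the linked commit for the real diff):

    # neutron/common/ovn/extensions.py (illustrative sketch only)
    ML2_SUPPORTED_API_EXTENSIONS = [
        # ... the already-supported extension aliases, plus:
        'dhcp_agent_scheduler',
    ]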


** Changed in: neutron
   Status: In Progress => Fix Released


Title:
  DHCP agent scheduler API extension should be supported by ML2/OVN
  backend

Status in neutron:
  Fix Released

Bug description:
  The Neutron DHCP agent can work perfectly fine with the ML2/OVN
  backend, and some use cases, like baremetal provisioning over IPv6,
  require it. Because of that, the dhcpagentscheduler API extension
  should be added to the list of extensions supported by the OVN driver,
  so that it is not disabled when the Neutron server starts.
  Adding it there will allow users to use the API to add/remove networks
  to/from DHCP agents, as well as to check which networks are hosted on
  which DHCP agent.
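
  For illustration, the scheduler operations this unblocks map to
  OpenStackClient commands like the following (a sketch; the agent ID and
  network name are placeholders):

    # schedule / unschedule a network on a specific DHCP agent
    openstack network agent add network --dhcp <dhcp-agent-id> <network>
    openstack network agent remove network --dhcp <dhcp-agent-id> <network>
    # list the DHCP agents hosting a given network
    openstack network agent list --network <network>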

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2038655/+subscriptions




[Yahoo-eng-team] [Bug 2039269] [NEW] Implement full_match mapping combination matching rule

2023-10-13 Thread Aliaksandr Vasiuk
Public bug reported:

Hello,

As an OpenStack administrator, I would like to federate flexible access
policies to OpenStack projects from an identity provider.
For example, I have the projects Green and Red, and the roles Admin and
User. From the identity provider, Keystone receives an array like
"Green_Admin;Red_User". There is no way to specify the rule "if the IdP
gives Green_Admin and Red_User, then set role Admin for project Green
and role User for project Red".

I tried to implement the "full match" logic with something like:
any_one_of: Green_Admin
any_one_of: Red_User
not_any_of: Green_User, Red_Admin
But in a real-life example with a dozen projects and several roles, I
ended up with a 50 MB mapping JSON that Keystone can't accept.
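
For reference, expressing just the single combination above as a standard
Keystone mapping rule looks roughly like this (a sketch; the remote
attribute name "OIDC-groups" is an assumption, and the projects/roles
local-rule syntax follows keystone's mapping-combinations documentation).
A separate rule is needed for every possible combination of group values,
which is what makes the mapping explode:

    {
        "rules": [
            {
                "remote": [
                    {"type": "OIDC-groups", "any_one_of": ["Green_Admin"]},
                    {"type": "OIDC-groups", "any_one_of": ["Red_User"]}
                ],
                "local": [
                    {
                        "projects": [
                            {"name": "Green", "roles": [{"name": "Admin"}]},
                            {"name": "Red", "roles": [{"name": "User"}]}
                        ]
                    }
                ]
            }
        ]
    }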

Best Regards,
Alex.

** Affects: keystone
 Importance: Undecided
 Status: New


To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2039269/+subscriptions




[Yahoo-eng-team] [Bug 2039265] [NEW] Insufficient support for creating default policy rules

2023-10-13 Thread Olaf Seibert
Public bug reported:

Since Wallaby, you don't need to give Horizon the full set of policy
rules you are using for cinder (etc.). Just the non-default rules, the
same ones you have configured for cinder (etc.) itself, are enough. See
https://docs.openstack.org/releasenotes/horizon/wallaby.html under
19.1.0 New Features.

It is also mentioned that "they are synced with registered defaults of
back-end services before the horizon release." So they are present in
Horizon out of the box.

I would therefore expect Horizon to know where these default policies
are, and to use them. As of the Yoga version, which I'm using, this
does not seem to be the case, however.

One needs to include something like this in local_settings.py:

DEFAULT_POLICY_FILES = {
    'identity': '/usr/lib/python3/dist-packages/openstack_dashboard/conf/default_policies/keystone.yaml',
    'compute': '/usr/lib/python3/dist-packages/openstack_dashboard/conf/default_policies/nova.yaml',
    'volume': '/usr/lib/python3/dist-packages/openstack_dashboard/conf/default_policies/cinder.yaml',
    'image': '/usr/lib/python3/dist-packages/openstack_dashboard/conf/default_policies/glance.yaml',
    'orchestration': '/usr/lib/python3/dist-packages/openstack_dashboard/conf/default_policies/heat.yaml',
    'network': '/usr/lib/python3/dist-packages/openstack_dashboard/conf/default_policies/neutron.yaml',
}

This really does not look like how it was meant to be used. So, how
should it be done?

There is a further issue. The defaults built into the Yoga version of
Horizon match those of the Yoga services. What if you're using, say, a
different version of cinder? How do you get the default policies then?

There is a mention of:

To update these files, run the following command:

 python manage.py dump_default_policies \
   --namespace <component> \
   --output-file openstack_dashboard/conf/default_policies/<component>.yaml

<component> must be a namespace under oslo.policy.policies to query and
we use "keystone", "nova", "cinder", "neutron" and "glance".
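
For example, to regenerate the nova defaults from a horizon source tree
(illustrative; the namespaces come from the list above):

 python manage.py dump_default_policies \
   --namespace nova \
   --output-file openstack_dashboard/conf/default_policies/nova.yaml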

This manage.py script seems to be part of the horizon source only, and
not of the installed Horizon, so you cannot run this command in the
actual OpenStack installation.
Furthermore, even if it were installed, it would require that nova,
cinder, glance, neutron, etc. are installed into the same container as
horizon, because it needs access to the Python code of these services.

So this is not really workable.

So how should I get the default policies for horizon, given separate
containers in which nova, cinder, etc. are installed?

This can be considered a feature request, since I suspect that currently
the answer should be "this is not possible".
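
One possible partial workaround, sketched here under the assumption that
oslo.policy's CLI generators are available inside each service container
(they ship with oslo.policy, which the services depend on): dump the
registered defaults in each container and copy the resulting files to
the Horizon host.

 # inside e.g. the nova container
 oslopolicy-policy-generator --namespace nova --output-file /tmp/nova.yaml
 # repeat per service, copy the files to the horizon host, and point
 # DEFAULT_POLICY_FILES at them

Whether that output exactly matches what Horizon expects is untested.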

** Affects: horizon
 Importance: Undecided
 Status: New

To manage notifications about this bug go to:
https://bugs.launchpad.net/horizon/+bug/2039265/+subscriptions

[Yahoo-eng-team] [Bug 2039027] Re: tempest nftables jobs are not running in periodic queue

2023-10-13 Thread OpenStack Infra
Reviewed:  https://review.opendev.org/c/openstack/neutron/+/897427
Committed: https://opendev.org/openstack/neutron/commit/daa0d1c5a2bec7b78837686eff3ea052f0c45bd7
Submitter: "Zuul (22348)"
Branch: master

commit daa0d1c5a2bec7b78837686eff3ea052f0c45bd7
Author: Rodolfo Alonso Hernandez 
Date:   Tue Oct 10 17:13:59 2023 +

Restore the tempest nftables jobs in experimental and periodic queues

The job names were changed but not updated in the
"neutron-periodic-jobs" template.

This patch also adds new binaries to the nftables installation
role, including all the "-save" and "-restore" variants.

Closes-Bug: #2039027

Change-Id: Ia4c140af74db29f4e40299648f1b5091b4801b51


** Changed in: neutron
   Status: In Progress => Fix Released


Title:
  tempest nftables jobs are not running in periodic queue

Status in neutron:
  Fix Released

Bug description:
  The jobs "neutron-linuxbridge-tempest-plugin-nftables" and
  "neutron-ovs-tempest-plugin-iptables_hybrid-nftables" have not been
  running in the periodic (and experimental) queues since the job names
  were changed.
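
  For illustration, the fix amounts to listing the renamed jobs again in
  the project template, roughly like this (an illustrative sketch of the
  zuul.d YAML, not the actual patch contents):

    - project-template:
        name: neutron-periodic-jobs
        periodic:
          jobs:
            - neutron-linuxbridge-tempest-plugin-nftables
            - neutron-ovs-tempest-plugin-iptables_hybrid-nftables
        experimental:
          jobs:
            - neutron-linuxbridge-tempest-plugin-nftables
            - neutron-ovs-tempest-plugin-iptables_hybrid-nftables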

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2039027/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp