[Yahoo-eng-team] [Bug 1990285] Re: Help for `network create --external` is incorrect

2022-12-01 Thread Brian Haley
** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1990285

Title:
  Help for `network create --external` is incorrect

Status in neutron:
  Fix Released

Bug description:
  Hi,

  The description for the `--external` option for the `network create`
  command reads:

     "Set this network as an external network (external-net extension
     required)"

  This is incorrect.

  I am told that the `--external` option does the following:

     Specifies that the router used by the network cannot be external to
     neutron.

  Networking is not my domain. However, it might help users to clarify
  the word `external`. Generally speaking, it's not good form to use the
  same word as the option to document the option. :^)

  Thanks for considering this bug.

  Best,
  --Greg

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1990285/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1932373] Re: DB migration is interrupted and next execution will fail

2022-12-01 Thread Brian Haley
Will close based on last comment and since it's been idle for over a
year. Please re-open if necessary.

** Changed in: neutron
   Status: New => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1932373

Title:
  DB migration is interrupted and next execution will fail

Status in neutron:
  Invalid

Bug description:
  Sometimes an alembic migration is interrupted in the middle: the DB
  table structure has changed, but the version number in the
  alembic_version table has not been updated. As a result, the next
  database migration will fail.
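  One way a migration step can be made re-runnable after such an
  interruption is to check for the column before adding it. This is an
  illustrative sketch only (not neutron code; a real migration would go
  through alembic/SQLAlchemy), shown with stdlib sqlite3 for brevity — the
  failure above is MySQL's "Duplicate column name" error, which an
  existence check avoids:

```python
# Sketch: an "add column" step that tolerates a previous interrupted run.
import sqlite3


def add_column_if_missing(conn, table, column, col_type):
    """ALTER TABLE ... ADD COLUMN, skipped if an earlier run already did it."""
    cols = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
    if column in cols:
        return False  # interrupted previous run already added the column
    conn.execute(f"ALTER TABLE {table} ADD COLUMN {column} {col_type}")
    return True


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ml2_port_bindings (port_id TEXT)")
assert add_column_if_missing(conn, "ml2_port_bindings", "binding_index", "INTEGER")
# Re-running the same step is now a no-op instead of a duplicate-column error:
assert not add_column_if_missing(conn, "ml2_port_bindings", "binding_index", "INTEGER")
```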

  
  DB: mariadb

  MariaDB [chen_test]> select * from alembic_version;
  +--+
  | version_num  |
  +--+
  | c613d0b82681 |
  +--+

  (venv) root@dev1:~/workspace/neutron/neutron/db/migration# neutron-db-manage 
upgrade +1
  DEBUG [oslo_concurrency.lockutils] Lock "context-manager" acquired by 
"neutron_lib.db.api._create_context_manager" :: waited 0.000s
  DEBUG [oslo_concurrency.lockutils] Lock "context-manager" released by 
"neutron_lib.db.api._create_context_manager" :: held 0.000s
  DEBUG [neutron_lib.callbacks.manager] Subscribe: > rbac-policy before_create 
  DEBUG [neutron_lib.callbacks.manager] Subscribe: > rbac-policy before_update 
  DEBUG [neutron_lib.callbacks.manager] Subscribe: > rbac-policy before_delete 
  DEBUG [neutron_lib.callbacks.manager] Subscribe:  agent after_create 

  DEBUG [neutron_lib.callbacks.manager] Subscribe:  agent after_update 

  DEBUG [neutron_lib.callbacks.manager] Subscribe:  segment 
precommit_create 
  DEBUG [neutron_lib.callbacks.manager] Subscribe:  network precommit_delete 

  DEBUG [neutron_lib.callbacks.manager] Subscribe:  segment 
before_delete 
  INFO  [alembic.runtime.migration] Context impl MySQLImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
Running upgrade for neutron ...
  INFO  [alembic.runtime.migration] Context impl MySQLImpl.
  INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  INFO  [alembic.runtime.migration] Running upgrade c613d0b82681 -> c3e9d13c4367
  Traceback (most recent call last):
File "/root/venv/lib/python3.8/site-packages/sqlalchemy/engine/base.py", 
line 1770, in _execute_context
  self.dialect.do_execute(
File "/root/venv/lib/python3.8/site-packages/sqlalchemy/engine/default.py", 
line 717, in do_execute
  cursor.execute(statement, parameters)
File "/root/venv/lib/python3.8/site-packages/pymysql/cursors.py", line 148, 
in execute
  result = self._query(query)
File "/root/venv/lib/python3.8/site-packages/pymysql/cursors.py", line 310, 
in _query
  conn.query(q)
File "/root/venv/lib/python3.8/site-packages/pymysql/connections.py", line 
548, in query
  self._affected_rows = self._read_query_result(unbuffered=unbuffered)
File "/root/venv/lib/python3.8/site-packages/pymysql/connections.py", line 
775, in _read_query_result
  result.read()
File "/root/venv/lib/python3.8/site-packages/pymysql/connections.py", line 
1156, in read
  first_packet = self.connection._read_packet()
File "/root/venv/lib/python3.8/site-packages/pymysql/connections.py", line 
725, in _read_packet
  packet.raise_for_error()
File "/root/venv/lib/python3.8/site-packages/pymysql/protocol.py", line 
221, in raise_for_error
  err.raise_mysql_exception(self._data)
File "/root/venv/lib/python3.8/site-packages/pymysql/err.py", line 143, in 
raise_mysql_exception
  raise errorclass(errno, errval)
  pymysql.err.OperationalError: (1060, "Duplicate column name 'binding_index'")

  The above exception was the direct cause of the following exception:

  Traceback (most recent call last):
File "/root/venv/bin/neutron-db-manage", line 10, in 
  sys.exit(main())
File "/root/workspace/neutron/neutron/db/migration/cli.py", line 687, in 
main
  return_val |= bool(CONF.command.func(config, CONF.command.name))
File "/root/workspace/neutron/neutron/db/migration/cli.py", line 184, in 
do_upgrade
  do_alembic_command(config, cmd, revision=revision,
File "/root/workspace/neutron/neutron/db/migration/cli.py", line 86, in 
do_alembic_command
  getattr(alembic_command, cmd)(config, *args, **kwargs)
File "/root/venv/lib/python3.8/site-packages/alembic/command.py", line 294, 
in upgrade
  script.run_env()
File "/root/venv/lib/python3.8/site-packages/alembic/script/base.py", line 
490, in run_env
  util.load_python_file(self.dir, "env.py")
File "/root/venv/lib/python3.8/site-packages/alembic/util/pyfiles.py", line 
97, in load_python_file
  module = load_module_py(module_id, path)
File "/root/venv/lib/python3.8/site-packages/alembic/util/compat.py", line 
184, in load_module_py
  spec.loader.exec_module(module)
File "", line 783, in exec_module

[Yahoo-eng-team] [Bug 1950662] Re: [DHCP] Improve RPC server methods

2022-12-01 Thread Brian Haley
Two changes have merged, so I'll assume this is fixed and close.

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1950662

Title:
  [DHCP] Improve RPC server methods

Status in neutron:
  Fix Released

Bug description:
  Improve the RPC server methods, removing unnecessary DB requests.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1950662/+subscriptions




[Yahoo-eng-team] [Bug 1912948] Re: Missing option for OSP16.2 OVN migration

2022-12-01 Thread Brian Haley
Since the fix seems to be included in later releases, I will close this bug.

** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1912948

Title:
  Missing option for OSP16.2 OVN migration

Status in neutron:
  Fix Released

Bug description:
  An attempt to perform OVN migration to OSP16.2 using the infrared plugin
  included in the neutron repo failed due to a missing 16.2 option. In
  order to support OVN migration to this version we need to include this
  option.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1912948/+subscriptions




[Yahoo-eng-team] [Bug 1886216] Re: keepalived-state-change does not format correctly the logs

2022-12-01 Thread Brian Haley
** Changed in: neutron
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1886216

Title:
  keepalived-state-change does not format correctly the logs

Status in neutron:
  Fix Released

Bug description:
  In versions prior to Train, the "keepalived-state-change" monitor does
  not format the log messages correctly. That happens when the
  "Daemon.run()" method executes "unwatch_log". After the privileges are
  dropped, we can enable the logging again.

  Without configuring the logging: http://paste.openstack.org/show/795536/
  Configuring the logging: http://paste.openstack.org/show/795537/
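  The direction of the fix can be sketched in plain Python logging terms.
  This is a minimal illustration only, not the neutron daemon code (the
  `Daemon.run()`/`unwatch_log` internals are assumed): after privileges are
  dropped, re-attach a handler with an explicit formatter so later messages
  come out formatted.

```python
# Sketch: re-enable formatted logging after a privilege drop.
import logging


def reenable_logging(logger_name, logfile):
    """Re-configure formatted logging; call after privileges are dropped."""
    logger = logging.getLogger(logger_name)
    for handler in list(logger.handlers):  # drop the unformatted handlers
        logger.removeHandler(handler)
    handler = logging.FileHandler(logfile)
    handler.setFormatter(logging.Formatter(
        "%(asctime)s %(levelname)s %(name)s %(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.DEBUG)
    return logger
```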

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1886216/+subscriptions




[Yahoo-eng-team] [Bug 1998539] [NEW] writing of sudoers is not idempotent

2022-12-01 Thread Mina Galić
Public bug reported:

after several (full) re-runs of cloud-init, my
/usr/local/etc/sudoers.d/90-cloud-init-users file looks like this:

# Created by cloud-init v. 22.3 on Wed, 05 Oct 2022 21:34:14 +

# User rules for freebsd
freebsd ALL=(ALL) NOPASSWD:ALL

# User rules for freebsd
freebsd ALL=(ALL) NOPASSWD:ALL

# User rules for freebsd
freebsd ALL=(ALL) NOPASSWD:ALL

# User rules for freebsd
freebsd ALL=(ALL) NOPASSWD:ALL

# User rules for freebsd
freebsd ALL=(ALL) NOPASSWD:ALL

# User rules for freebsd
freebsd ALL=(ALL) NOPASSWD:ALL

# User rules for freebsd
freebsd ALL=(ALL) NOPASSWD:ALL

# User rules for freebsd
freebsd ALL=(ALL) NOPASSWD:ALL

# User rules for freebsd
freebsd ALL=(ALL) NOPASSWD:ALL

# User rules for freebsd
freebsd ALL=(ALL) NOPASSWD:ALL


While this has no effect on sudo's functionality, it's also not deduplicated:

freebsd@fbsd14-amd64 ~> sudo -l
User freebsd may run the following commands on fbsd14-amd64:
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL
(ALL) NOPASSWD: ALL


given what we're trying to accomplish with writing sudoers rules, I think it 
would make sense to *always* rewrite the file, regardless of whether it exists 
or not.
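An alternative to always rewriting the file is to make the append
idempotent. A hedged sketch (this is not cloud-init's actual
implementation): append a user's rule block only when an identical rule
line is not already present, so repeated runs leave the file unchanged.

```python
# Sketch: idempotent sudoers rule writing.
from pathlib import Path


def ensure_sudoers_rule(path, user, rule):
    """Append '# User rules for <user>' plus the rule once; re-runs are no-ops."""
    p = Path(path)
    existing = p.read_text().splitlines() if p.exists() else []
    if rule in existing:
        return False  # rule already present: nothing to do
    with p.open("a") as f:
        f.write(f"# User rules for {user}\n{rule}\n\n")
    return True
```

A re-run then produces one rule block instead of ten. A real writer would
also set restrictive file permissions, which this sketch omits.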

** Affects: cloud-init
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1998539

Title:
  writing of sudoers is not idempotent

Status in cloud-init:
  New

Bug description:
  after several (full) re-runs of cloud-init, my
  /usr/local/etc/sudoers.d/90-cloud-init-users file looks like this:

  # Created by cloud-init v. 22.3 on Wed, 05 Oct 2022 21:34:14 +

  # User rules for freebsd
  freebsd ALL=(ALL) NOPASSWD:ALL

  # User rules for freebsd
  freebsd ALL=(ALL) NOPASSWD:ALL

  # User rules for freebsd
  freebsd ALL=(ALL) NOPASSWD:ALL

  # User rules for freebsd
  freebsd ALL=(ALL) NOPASSWD:ALL

  # User rules for freebsd
  freebsd ALL=(ALL) NOPASSWD:ALL

  # User rules for freebsd
  freebsd ALL=(ALL) NOPASSWD:ALL

  # User rules for freebsd
  freebsd ALL=(ALL) NOPASSWD:ALL

  # User rules for freebsd
  freebsd ALL=(ALL) NOPASSWD:ALL

  # User rules for freebsd
  freebsd ALL=(ALL) NOPASSWD:ALL

  # User rules for freebsd
  freebsd ALL=(ALL) NOPASSWD:ALL

  
  While this has no effect on sudo's functionality, it's also not deduplicated:

  freebsd@fbsd14-amd64 ~> sudo -l
  User freebsd may run the following commands on fbsd14-amd64:
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL
  (ALL) NOPASSWD: ALL

  
  given what we're trying to accomplish with writing sudoers rules, I think it 
would make sense to *always* rewrite the file, regardless of whether it exists 
or not.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1998539/+subscriptions




[Yahoo-eng-team] [Bug 1819446] Re: After the vm's port name is modified, the port status changes from down to active.

2022-12-01 Thread Brian Haley
I just tried to recreate this on the latest code in Antelope development
and can't reproduce the issue. I could not find an obvious change that
fixed it, but I'll close for now. Please re-open if you see it again.

** Changed in: neutron
   Status: Confirmed => Invalid

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1819446

Title:
  After the vm's port name is modified, the port status changes from
  down to active.

Status in neutron:
  Invalid

Bug description:
  Hello,

  I've faced a problem with ml2 plugin in Neutron (Queens).

  Steps are as follows:
  1. Stop VM A. The port status of VM A changes from active to down.

  2. Update the port name.

  3. The port status will change from down to active.

  Is this phenomenon correct?

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1819446/+subscriptions




[Yahoo-eng-team] [Bug 1998526] [NEW] cloud-init not restarting ssh service after writing sshd_config

2022-12-01 Thread Anh Vo (MSFT)
Public bug reported:

cloud-init 22.2 introduced a race condition with ssh.service, since it
added a systemctl call to check whether the service is stopped or not yet
started. If ssh.service starts at around the same time as the check,
cloud-init believes that the service is stopped and does not need to be
restarted, and proceeds to write sshd_config. The writing of sshd_config
might happen after ssh.service has started, especially if there is a
significant delay in the systemctl status call (the delay would have
given ssh.service plenty of time to start up).

I believe this is the commit that introduced the issue:
https://github.com/canonical/cloud-init/commit/5054ffe188

I've attached the cloud-init.log and the auth.log showing the time ssh
service started.

From cloud-init.log: the call to check the ssh service status happened at
22:44:43,630; when cloud-init wrote sshd_config at 22:44:51, the ssh
service had already started. There is a strange 8s delay from systemctl
that might be due to systemd or the condition of the VM. Regardless, the
race is definitely there.

2022-11-22 22:44:43,630 - subp.py[DEBUG]: Running command ['systemctl', 
'status', 'ssh'] with allowed return codes [0] (shell=False, capture=True)
2022-11-22 22:44:51,116 - cc_set_passwords.py[DEBUG]: Writing config 
'ssh_pwauth: True'. SSH service 'ssh' will not be restarted because it is 
stopped.
2022-11-22 22:44:51,116 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config 
(quiet=False)
2022-11-22 22:44:51,116 - util.py[DEBUG]: Read 3546 bytes from 
/etc/ssh/sshd_config
2022-11-22 22:44:51,116 - ssh_util.py[DEBUG]: line 55: option 
PasswordAuthentication updated no -> yes
2022-11-22 22:44:51,117 - util.py[DEBUG]: Writing to /etc/ssh/sshd_config - wb: 
[600] 3547 bytes

Here's what the customer gave us from their VM:

ssh.service - OpenBSD Secure Shell server
   Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: 
enabled)
   Active: active (running) since Tue 2022-11-22 22:44:43 UTC; 6 days ago 

The sshd_config file changed only a few seconds later.

-rw--- 1 root root 3547 2022-11-22 22:44:51.113697898 +
/etc/ssh/sshd_config
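One race-free alternative is to drop the check-then-skip pattern and
restart whenever the config content actually changed. A hedged sketch (not
cloud-init's code; `restart_cmd` is a hypothetical callback standing in
for something like `systemctl restart ssh`):

```python
# Sketch: write sshd_config and restart iff the content changed,
# closing the TOCTOU window between a status check and the write.
from pathlib import Path


def write_config_and_restart(path, new_content, restart_cmd):
    """Write the config file; restart the service iff its content changed."""
    p = Path(path)
    old = p.read_text() if p.exists() else None
    if old == new_content:
        return False  # nothing changed, no restart needed
    p.write_text(new_content)
    # Restarting unconditionally on change avoids depending on the service
    # state observed earlier, which may be stale by the time we write.
    restart_cmd()
    return True
```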

** Affects: cloud-init
 Importance: Undecided
 Status: New

** Attachment added: "ssh_race_logs.zip"
   
https://bugs.launchpad.net/bugs/1998526/+attachment/5633811/+files/ssh_race_logs.zip

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1998526

Title:
  cloud-init not restarting ssh service after writing sshd_config

Status in cloud-init:
  New

Bug description:
  cloud-init 22.2 introduced a race condition with ssh.service, since it
  added a systemctl call to check whether the service is stopped or not yet
  started. If ssh.service starts at around the same time as the check,
  cloud-init believes that the service is stopped and does not need to be
  restarted, and proceeds to write sshd_config. The writing of sshd_config
  might happen after ssh.service has started, especially if there is a
  significant delay in the systemctl status call (the delay would have
  given ssh.service plenty of time to start up).

  I believe this is the commit that introduced the issue:
  https://github.com/canonical/cloud-init/commit/5054ffe188

  I've attached the cloud-init.log and the auth.log showing the time ssh
  service started.

  From cloud-init.log: the call to check the ssh service status happened at
  22:44:43,630; when cloud-init wrote sshd_config at 22:44:51, the ssh
  service had already started. There is a strange 8s delay from systemctl
  that might be due to systemd or the condition of the VM. Regardless, the
  race is definitely there.

  2022-11-22 22:44:43,630 - subp.py[DEBUG]: Running command ['systemctl', 
'status', 'ssh'] with allowed return codes [0] (shell=False, capture=True)
  2022-11-22 22:44:51,116 - cc_set_passwords.py[DEBUG]: Writing config 
'ssh_pwauth: True'. SSH service 'ssh' will not be restarted because it is 
stopped.
  2022-11-22 22:44:51,116 - util.py[DEBUG]: Reading from /etc/ssh/sshd_config 
(quiet=False)
  2022-11-22 22:44:51,116 - util.py[DEBUG]: Read 3546 bytes from 
/etc/ssh/sshd_config
  2022-11-22 22:44:51,116 - ssh_util.py[DEBUG]: line 55: option 
PasswordAuthentication updated no -> yes
  2022-11-22 22:44:51,117 - util.py[DEBUG]: Writing to /etc/ssh/sshd_config - 
wb: [600] 3547 bytes

  Here's what the customer gave us from their VM:

  ssh.service - OpenBSD Secure Shell server
 Loaded: loaded (/lib/systemd/system/ssh.service; enabled; vendor preset: 
enabled)
 Active: active (running) since Tue 2022-11-22 22:44:43 UTC; 6 days ago 

  The sshd_config file changed only a few seconds later.

  -rw--- 1 root root 3547 2022-11-22 22:44:51.113697898 +
  /etc/ssh/sshd_config

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1998526/+subscriptions



[Yahoo-eng-team] [Bug 1970467] Re: nova blocks vdpa move operations that work correctly.

2022-12-01 Thread sean mooney
** Also affects: nova/zed
   Importance: Undecided
   Status: New

** Also affects: nova/xena
   Importance: Undecided
   Status: New

** Also affects: nova/wallaby
   Importance: Undecided
   Status: New

** Also affects: nova/yoga
   Importance: Undecided
   Status: New

** Changed in: nova/zed
   Status: New => Fix Released

** Changed in: nova/yoga
   Status: New => Fix Released

** Changed in: nova/xena
   Status: New => In Progress

** Changed in: nova/wallaby
   Status: New => In Progress

** Changed in: nova/wallaby
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

** Changed in: nova/xena
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

** Changed in: nova/yoga
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

** Changed in: nova/zed
 Assignee: (unassigned) => sean mooney (sean-k-mooney)

** Changed in: nova/wallaby
   Importance: Undecided => Medium

** Changed in: nova/yoga
   Importance: Undecided => Medium

** Changed in: nova/zed
   Importance: Undecided => Medium

** Changed in: nova/xena
   Importance: Undecided => Medium

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1970467

Title:
  nova blocks vdpa move operations that work correctly.

Status in OpenStack Compute (nova):
  Fix Released
Status in OpenStack Compute (nova) wallaby series:
  In Progress
Status in OpenStack Compute (nova) xena series:
  In Progress
Status in OpenStack Compute (nova) yoga series:
  Fix Released
Status in OpenStack Compute (nova) zed series:
  Fix Released

Bug description:
  During the implementation of the initial VDPA feature support I only
  had one server with VDPA-capable NICs.

  As a result, no move operations were actually tested, so out of an
  abundance of caution we chose to block them at the API level until we had
  a chance to test them. They have now been tested and work without code
  change once the API block is removed. During the Zed PTG we agreed to
  treat this as a bug and remove the API block.

  Once this is complete, cold migration, resize, evacuate and
  shelve/unshelve will be available when using interfaces with vnic-type
  vdpa.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1970467/+subscriptions




[Yahoo-eng-team] [Bug 1998104] Re: dynamic-routing: address_scope calculation error

2022-12-01 Thread OpenStack Infra
Reviewed:  
https://review.opendev.org/c/openstack/neutron-dynamic-routing/+/863708
Committed: 
https://opendev.org/openstack/neutron-dynamic-routing/commit/feff164b608d00bf8163ea259305c3a64e68c1da
Submitter: "Zuul (22348)"
Branch:master

commit feff164b608d00bf8163ea259305c3a64e68c1da
Author: ROBERTO BARTZEN ACOSTA 
Date:   Fri Nov 4 17:01:08 2022 -0300

Fix address_scope calculation

Fix in the iteration to obtain address_scope linked to a subnet.
A network can be linked to more than one subnet (ipv4 and ipv6),
but if one of them does not have an address_scope, a null object
element access failure occurs.

Closes-bug: #1998104
Change-Id: Ic6d48a86043aaf4b458bb2230883a355fc841ee9


** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1998104

Title:
  dynamic-routing: address_scope calculation error

Status in neutron:
  Fix Released

Bug description:
  Neutron dynamic routing needs to be fixed in the iteration to obtain
  address_scope linked to a provider network with multiple subnets. A
  network can be linked to more than one subnet (e.g. ipv4 and ipv6),
  but if one of them does not have an address_scope, a null object
  element access failure occurs.

  Fix proposed in the neutron-dynamic-routing project [1].

  [1] - https://review.opendev.org/c/openstack/neutron-dynamic-
  routing/+/863708

  
  Steps to reproduce:

  #1 - IPv6 address scope
  openstack address scope create --share --ip-version 6 bgp

  #2 - self-service subnet pool
  openstack subnet pool create --address-scope address-scope-ipv6 --share 
--default --pool-prefix 2001:db9:1234::/48 --default-prefix-length 64 
--max-prefix-length 64 default-pool-ipv6

  #3 - provider subnet pool
  openstack subnet pool create --address-scope address-scope-ipv6 --pool-prefix 
2001:db9:4321:42::/64 --default-prefix-length 64 public-pool-ipv6

  #4 - Provider network
  openstack network create provider --external --provider-physical-network \
provider --provider-network-type flat
  #5 - provider subnet
  openstack subnet create --ip-version 6 --subnet-pool public-pool-ipv6 
--network provider --ipv6-address-mode dhcpv6-stateful --ipv6-ra-mode 
dhcpv6-stateful provider1-v6
  openstack subnet create --ip-version 4 --network provider --dhcp --host-route 
 destination=200.201.0.0/24,gateway=200.201.0.1 --subnet-range 200.201.0.0/24 
provider1-v4

  #6 - self-service network
  openstack network create self-service

  #7 - self-service subnet
  openstack subnet create --ip-version 6 --subnet-pool default-pool-ipv6 
--network self-service --ipv6-address-mode dhcpv6-stateful --ipv6-ra-mode 
dhcpv6-stateful self-service-v6
  openstack subnet create --ip-version 4 --network self-service  --dhcp 
--host-route  destination=192.168.0.0/24,gateway=192.168.0.1 --subnet-range 
192.168.0.0/24 self-service-v4

  #8 - create router
  openstack router create router1

  #9 - add self-service subnet as an interface on the router
  openstack router add subnet router1 self-service-v4
  openstack router add subnet router1 self-service-v6

  #10 - Add the provider network as a gateway on each router.
  openstack router set --external-gateway provider router1

  #11 - create bgp speaker
  openstack bgp speaker create --ip-version 6 --local-as 65000 bgpspeaker
  openstack bgp speaker add network bgpspeaker provider

  #12 - create a vm on the self-service network
  openstack server create --image cirros --flavor 1vcpu --network=self-service 
--security-group cf2e7d53-0db7-4873-82ab-cf67eceda937 vm1

  
  # We can see the messages below in the neutron log:

  Nov 07 14:20:00 os-infra-1-neutron-server-container-819795c0 
neutron-server[3698]: 2022-11-07 14:20:00.472 3698 ERROR 
neutron_lib.callbacks.manager [req-24fe543c-7122-4869-b52d-21ecb782ea0e 
8c140d00a7754295beae4ac85c5beecc 115a2ce896ad4958a26e3a4d624902a5 - default 
default] Error during notification for 
neutron_dynamic_routing.services.bgp.bgp_plugin.BgpPlugin.port_callback-950729 
port, after_update: AttributeError: 'NoneType' object has no attribute 
'address_scope_id'

 2022-11-07 14:20:00.472 3698 ERROR neutron_lib.callbacks.manager Traceback 
(most recent call last):

 2022-11-07 14:20:00.472 3698 ERROR neutron_lib.callbacks.manager   File 
"/usr/lib/python3/dist-packages/neutron_lib/callbacks/manager.py", line 181, in 
_notify_loop

 2022-11-07 14:20:00.472 3698 ERROR neutron_lib.callbacks.manager 
callback(resource, event, trigger, payload=payload)

 2022-11-07 14:20:00

[Yahoo-eng-team] [Bug 1511134] Re: Batch DVR ARP updates

2022-12-01 Thread Lajos Katona
As I see from the history, we can close this as Won't Fix; if you think
otherwise, please reopen this bug.

** Changed in: neutron
   Status: New => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1511134

Title:
  Batch DVR ARP updates

Status in neutron:
  Won't Fix

Bug description:
  The L3 agent currently issues ARP updates one at a time while
  processing a DVR router. Each ARP update creates an external process
  which has to call the neutron-rootwrap helper while also "ip netns
  exec " -ing each time.

  The ip command contains a "-batch " option which would be
  able to batch all of the "ip neigh replace" commands into one external
  process per qrouter namespace. This would greatly reduce the amount of
  time it takes the L3 agent to update large numbers of ARP entries,
  particularly as the number of VMs in a deployment rises.

  The benefit of batching ip commands can be seen in this simple bash
  example:

  $ time for i in {0..50}; do sudo ip netns exec qrouter-
  bc38451e-0c2f-4ad2-b76b-daa84066fefb ip a > /dev/null; done

  real  0m2.437s
  user0m0.183s
  sys   0m0.359s
  $ for i in {0..50}; do echo a >> /tmp/ip_batch_test; done
  $ time sudo ip netns exec qrouter-bc38451e-0c2f-4ad2-b76b-daa84066fefb ip -b 
/tmp/ip_batch_test > /dev/null

  real  0m0.046s
  user0m0.003s
  sys   0m0.007s

  If just 50 arp updates are batched together, there is about a 50x
  speedup. Repeating this test with 500 commands showed a speedup of
  250x (disclaimer: this was a rudimentary test just to get a rough
  estimate of the performance benefit).

  Note: see comments #1-3 for less-artificial performance data.
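  The batching described above could be prepared along these lines (a
  hypothetical helper, not the L3 agent's code; the entry format mirrors
  `ip neigh replace <ip> lladdr <mac> dev <dev>`):

```python
# Sketch: build one batch file for `ip -batch` from many ARP updates,
# replacing one external process per entry with one per namespace.
def build_neigh_batch(entries):
    """entries: iterable of (ip, mac, device) -> text for `ip -b <file>`."""
    lines = [
        f"neigh replace {ip} lladdr {mac} dev {dev} nud permanent"
        for ip, mac, dev in entries
    ]
    return "\n".join(lines) + "\n"


batch = build_neigh_batch([
    ("10.0.0.5", "fa:16:3e:00:00:01", "qr-abc"),
    ("10.0.0.6", "fa:16:3e:00:00:02", "qr-abc"),
])
# One external process would then run, roughly:
#   ip netns exec qrouter-<id> ip -batch /tmp/neigh_batch
```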

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1511134/+subscriptions




[Yahoo-eng-team] [Bug 1509924] Re: Tempest needs to test DHCPv6 stateful

2022-12-01 Thread Lajos Katona
As I see from the history, we can close this as Won't Fix; if you think
otherwise, please reopen this bug.
As I see, we still have tests for SLAAC, but not for stateful.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1509924

Title:
  Tempest needs to test DHCPv6 stateful

Status in neutron:
  Won't Fix

Bug description:
  Currently there are no tests for DHCPv6 stateful IPv6 configurations,
  due to a bug in Cirros, which does not have support for DHCPv6

  https://bugs.launchpad.net/cirros/+bug/1487041

  Work needs to be done in Tempest to select an image that has DHCPv6
  stateful support.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1509924/+subscriptions




[Yahoo-eng-team] [Bug 1526587] Re: Neutron doesn't have a command to show the available IP addresses for one subnet

2022-12-01 Thread Lajos Katona
As I see from the history, we can close this as Won't Fix; if you think
otherwise, please reopen this bug.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1526587

Title:
  Neutron doesn't have a command to show the available IP addresses for
  one subnet

Status in neutron:
  Won't Fix
Status in python-neutronclient:
  Won't Fix
Status in python-openstackclient:
  In Progress

Bug description:
  Neutron doesn't have a command to show the available IP addresses for
  one subnet.

  We can get the allocated ip list with command:
  [root@cts-orch ~]# neutron port-list | grep `neutron subnet-show 110-OAM2 | 
awk '/ id / {print $4}'` | cut -d"|" -f5 | cut -d":" -f3 | sort
   "135.111.122.97"}
   "135.111.122.98"}

  But we don't have a command to show the available IPs for one subnet.
  I wrote a shell script to show the available IPs as below, but it
  would be helpful if we could provide such a neutron command.

  [root@cts-orch ~]# ./show_available_ip.sh 110-OAM2
  135.111.122.99
  135.111.122.100
  135.111.122.101
  135.111.122.102
  135.111.122.103
  135.111.122.104
  135.111.122.105
  135.111.122.106
  135.111.122.107
  135.111.122.108
  135.111.122.109
  135.111.122.110
  135.111.122.111
  135.111.122.112
  135.111.122.113
  135.111.122.114
  135.111.122.115
  135.111.122.116
  135.111.122.117
  135.111.122.118
  135.111.122.119
  135.111.122.120
  135.111.122.121
  135.111.122.122
  135.111.122.123
  135.111.122.124
  Total Count: 26
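The set difference the script above computes can be sketched in a few lines with the standard library; this is a minimal illustration, not the reporter's script, and the subnet and allocated addresses below are made-up documentation values:

```python
import ipaddress

def available_ips(cidr, allocated):
    """Return the host addresses in `cidr` that are not already allocated."""
    net = ipaddress.ip_network(cidr)
    used = {ipaddress.ip_address(a) for a in allocated}
    return [str(ip) for ip in net.hosts() if ip not in used]

# Placeholder subnet and allocations for illustration:
free = available_ips("192.0.2.0/28", ["192.0.2.1", "192.0.2.2"])
print(len(free))  # 12 of the 14 usable host addresses remain
```

In a real deployment the `allocated` list would come from the port list, as in the pipeline shown above, and network/broadcast addresses are already excluded by `hosts()`.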

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1526587/+subscriptions




[Yahoo-eng-team] [Bug 1505627] Re: [RFE] QoS Explicit Congestion Notification (ECN) Support

2022-12-01 Thread Lajos Katona
As I see from the history, we can close this as Won't Fix; if you think
otherwise, please reopen this RFE.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1505627

Title:
  [RFE] QoS Explicit Congestion Notification (ECN) Support

Status in neutron:
  Won't Fix

Bug description:
  [Existing problem]
  Network congestion is very common in large data centers, where multiple 
hosts generate huge amounts of traffic. Each host can use the ECN bits of the 
IP header TOS field to implement explicit congestion notification [1]_, but 
doing this per host is a redundant effort.
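The per-host mechanism referred to above can be shown in a short sketch: on Linux, a host marks its datagram traffic as ECN-capable by setting the two low-order TOS bits on the socket (this is illustrative standard-library code, not part of the proposal):

```python
import socket

# ECN codepoints occupy the two low-order bits of the IPv4 TOS byte
# (RFC 3168): 0b10 = ECT(0), 0b01 = ECT(1), 0b11 = CE (congestion seen).
ECT0 = 0x02

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, ECT0)
tos = s.getsockopt(socket.IPPROTO_IP, socket.IP_TOS)
print(tos & 0x03)  # the ECN bits of outgoing datagrams: ECT(0)
s.close()
```

The proposal's point is that repeating this in every guest is wasteful, and that marking could instead be applied centrally, per tenant.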

  [Proposal]
  This proposal is about applying ECN on behalf of each host. This makes 
the solution centralized and allows it to be applied per tenant. In addition, 
traffic classification for applying the ECN functionality can be achieved via 
specific filtering rules, if required. Almost all leading vendors support this 
option for better QoS [2]_.

  The existing QoS framework is limited to bandwidth rate limiting and
  could be extended to support explicit congestion notification (RFC
  3168 [3]_).

  [Benefits]
  - Enhancement to the existing QoS functionality.

  [What is the enhancement?]
  - Add ECN support to the QoS extension.
  - Add additional command lines for realizing ECN functionality.
  - Add OVS support.

  [Related information]
  [1] ECN Wiki
     http://en.wikipedia.org/wiki/Explicit_Congestion_Notification
  [2] QoS
     https://review.openstack.org/#/c/88599/
  [3] RFC 3168
     https://tools.ietf.org/html/rfc3168
  [4] Specification
     https://blueprints.launchpad.net/neutron/+spec/explicit-congestion-notification
  [5] Specification Discussion: https://etherpad.openstack.org/p/QoS_ECN
  [6] OpenVSwitch support for ECN:
     http://openvswitch.org/support/dist-docs/ovs-ofctl.8.txt
  [7] Etherpad Link: https://etherpad.openstack.org/p/QoS_ECN

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1505627/+subscriptions




[Yahoo-eng-team] [Bug 1506076] Re: Allow connection tracking to be disabled per-port

2022-12-01 Thread Lajos Katona
As I see from the history, we can close this as Won't Fix; if you think
otherwise, please reopen this bug report.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1506076

Title:
  Allow connection tracking to be disabled per-port

Status in neutron:
  Won't Fix

Bug description:
  This RFE is being raised in the context of this use case
  https://review.openstack.org/#/c/176301/ from the TelcoWG.

  OpenStack implements levels of per-VM security protection (security
  groups, anti-spoofing rules).  If you want to deploy a trusted VM
  which itself is providing network security functions, as with the
  above use case, then it is often necessary to disable some of the
  native OpenStack protection so as not to interfere with the protection
  offered by the VM or use excessive host resources.

  Neutron already allows you to disable security groups on a per-port
  basis.  However, the Linux kernel will still perform connection
  tracking on those ports.  With default Linux config, VMs will be
  severely scale limited without specific host configuration of
  connection tracking limits - for example, a Session Border Controller
  VM may be capable of handling millions of concurrent TCP connections,
  but a default host won't support anything like that.  This bug is
  therefore an RFE to request that disabling the security group
  function for a port also disables kernel connection tracking for IP
  addresses associated with that port.
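For context, the conntrack bypass this RFE asks Neutron to automate can be done by hand today with raw-table NOTRACK rules; a rough sketch, where 203.0.113.10 is a placeholder VM address and the exact rule placement depends on the deployment:

```shell
# Skip connection tracking for traffic to/from a trusted VM address
# (203.0.113.10 is a placeholder; run as root on the compute host).
iptables -t raw -A PREROUTING -d 203.0.113.10 -j CT --notrack
iptables -t raw -A OUTPUT     -s 203.0.113.10 -j CT --notrack

# The tunable that otherwise caps concurrent tracked connections:
sysctl net.netfilter.nf_conntrack_max
```

These are privileged host commands shown only to illustrate the manual workaround the RFE would replace.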

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1506076/+subscriptions




[Yahoo-eng-team] [Bug 1507499] Re: [RFE] Centralized Management System for testing the environment

2022-12-01 Thread Lajos Katona
As I see from the history, we can close this as Won't Fix; if you think
otherwise, please reopen this RFE.

** Changed in: neutron
   Status: Triaged => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1507499

Title:
  [RFE] Centralized Management System for testing the environment

Status in neutron:
  Won't Fix

Bug description:
  To enable operators to reduce manual work when experiencing a
  networking issue, and to quickly pinpoint the cause of a failure,
  neutron needs to provide real-time diagnostics of its resources. This
  way, the current need for manual checks, often requiring root access,
  would be gradually replaced by API queries. Providing diagnostics
  options in the neutron API would also open space for the development
  of specialized tools that solve particular types of issues, e.g. the
  inability to ping a VM’s interface.

  Note: The description of this RFE was changed to cover previous RFEs
  related to diagnostics (namely bug 1563538, bug 1537686, bug 1519537
  and the original of this bug).

  Problem Description
  ===

  One of common questions seen at ask.openstack.org and mailing lists is
  "Why cannot I ping my floating IP address?". Usually, there are common
  steps in the diagnostics required to answer the question involving
  determination of relevant namespaces, pinging the instance from those
  namespaces, etc. Currently, these steps need to be performed manually,
  often by crawling the relevant hosts and running tools that require root
  access.
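The first of the common steps above, finding the namespace to ping from, can be sketched as a small parser over the output of `ip netns list` on the network node (both UUIDs below are made-up placeholders):

```python
def find_namespaces(ip_netns_output, resource_id):
    """Pick the qrouter-/qdhcp- namespaces matching a Neutron resource
    ID out of `ip netns list` output."""
    names = [line.split()[0]
             for line in ip_netns_output.splitlines() if line.strip()]
    return [n for n in names
            if n.startswith(("qrouter-", "qdhcp-")) and resource_id in n]

# Sample listing with placeholder UUIDs:
sample = (
    "qrouter-8ad4c4e3-2a1c-4f0c-9476-1ac073b8b574\n"
    "qdhcp-5f5c78a4-0b44-4c4d-95a1-5b4c3f2d1e00 (id: 0)\n"
)
print(find_namespaces(sample, "8ad4c4e3-2a1c-4f0c-9476-1ac073b8b574"))
```

Once the namespace is known, the ping itself still needs root (`ip netns exec <ns> ping <address>`), which is exactly the kind of manual step this RFE wants behind an API.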

  Neutron currently provides data on how the resources *should* be
  configured. It however provides only a very little diagnostics
  information reflecting the *actual* resource state. Hence, if an
  issue occurs, the user is often left with few details of what works
  and what does not, and has to manually crawl the affected hosts to
  troubleshoot the issue.

  Proposed Change
  ===

  This RFE requests an extension of the current API that exposes
  diagnostics for neutron resources so that they are accessible via API
  calls, reducing the amount of manual work needed. It further describes
  additions to the Neutron CLI necessary to call the newly added API.

  Spec
  
  https://review.openstack.org/#/c/308973/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1507499/+subscriptions




[Yahoo-eng-team] [Bug 1498987] Re: [RFE] DHCP agent should provide ipv6 RAs for isolated networks with ipv6 subnets

2022-12-01 Thread Lajos Katona
As I see from the history, we can close this as Won't Fix; if you think
otherwise, please reopen this RFE.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1498987

Title:
  [RFE] DHCP agent should provide ipv6 RAs for isolated networks with
  ipv6 subnets

Status in neutron:
  Won't Fix

Bug description:
  Currently, if there is no router attached to a subnet, instances
  cannot walk through IPv6 address assignment because nothing on the
  network multicasts the RAs that would provide basic information about
  how IPv6 addressing is handled there. We could have the DHCP agent
  run radvd in that case; instances would then be able to receive IPv6
  addresses on isolated networks too.

  We could try to rely on https://tools.ietf.org/html/rfc4191.
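If implemented, the agent would effectively manage a per-network radvd configuration of roughly this shape, using standard radvd.conf syntax (a hand-written sketch; the interface name and prefix are placeholders, not anything Neutron generates today):

```
interface tap0
{
    AdvSendAdvert on;
    prefix 2001:db8:0:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```

With `AdvAutonomous on`, instances on the isolated network could complete SLAAC address assignment without any router attached.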

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1498987/+subscriptions




[Yahoo-eng-team] [Bug 1998474] [NEW] [UT] Random failure of "test_meter_manager_allocate_meter_id"

2022-12-01 Thread Rodolfo Alonso
Public bug reported:

Logs:
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_563/866307/1/check/openstack-tox-py38/56317f0/testr_results.html

Snippet: https://paste.opendev.org/show/b5iclT52tAaZ7ROyrmFz/

** Affects: neutron
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1998474

Title:
  [UT] Random failure of "test_meter_manager_allocate_meter_id"

Status in neutron:
  New

Bug description:
  Logs:
  https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_563/866307/1/check/openstack-tox-py38/56317f0/testr_results.html

  Snippet: https://paste.opendev.org/show/b5iclT52tAaZ7ROyrmFz/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1998474/+subscriptions

