[Yahoo-eng-team] [Bug 2063345] [NEW] Instance state is error when resizing or migrating

2024-04-24 Thread Khoi
Public bug reported:

Openstack Details:

Base OS - Ubuntu 20.04 LTS

HW - 3 Controller and multi computes

Openstack Version - Yoga(Kolla-Ansible)

Cinder Backend: Powerstore 5000T using ISCSI

Hello, I get the following error when I resize or migrate instances:

2024-04-12 09:49:35.284 7 WARNING os_brick.exception 
[req-ea4c2c71-d1f1-45ab-82b5-9b384bc93101 
5875e342bc6c39052c9b291df7da1a4f4960cb5578defbadb0b432f5b5dbd087 
a14e68c02f6344cb8acdd279dce5aab1 - default default] Flushing 
36000d3100578ec0005ba failed: 
oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while 
running command.
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager 
[req-ea4c2c71-d1f1-45ab-82b5-9b384bc93101 
5875e342bc6c39052c9b291df7da1a4f4960cb5578defbadb0b432f5b5dbd087 
a14e68c02f6344cb8acdd279dce5aab1 - default default] [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] Setting instance vm_state to ERROR: 
oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while 
running command.
Command: multipath -f 36000d3100578ec0005ba
Exit code: 1
Stdout: ''
Stderr: 'Apr 12 09:48:55 | 36000d3100578ec0005ba: map in use\n'
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] Traceback (most recent call last):
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/nova/compute/manager.py", line 
10415, in _error_out_instance_on_exception
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] yield
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/nova/compute/manager.py", line 
5712, in _resize_instance
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] disk_info = 
self.driver.migrate_disk_and_power_off(
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/nova/virt/libvirt/driver.py", 
line 11252, in migrate_disk_and_power_off
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] self._disconnect_volume(context, 
connection_info, instance)
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/nova/virt/libvirt/driver.py", 
line 1971, in _disconnect_volume
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] vol_driver.disconnect_volume(
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/nova/virt/libvirt/volume/iscsi.py",
 line 74, in disconnect_volume
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] self.connector.disconnect_volume(
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/os_brick/utils.py", line 169, 
in trace_logging_wrapper
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] return f(*args, **kwargs)
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/oslo_concurrency/lockutils.py",
 line 391, in inner
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] return f(*args, **kwargs)
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/os_brick/utils.py", line 382, 
in change_encrypted
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] res = func(**call_args)
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/os_brick/initiator/connectors/iscsi.py",
 line 855, in disconnect_volume
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] return 
self._cleanup_connection(connection_properties, force=force,
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] File 
"/var/lib/kolla/venv/lib/python3.8/site-packages/os_brick/initiator/connectors/iscsi.py",
 line 904, in _cleanup_connection
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [instance: 
17a71500-ebb8-42cf-82b8-0d380929d380] multipath_name = 
self._linuxscsi.remove_connection(
2024-04-12 09:49:35.309 7 ERROR nova.compute.manager [in
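
The "map in use" error above means device-mapper refused to flush the multipath 
map because something still holds it open. A minimal diagnostic sketch to check 
that from the compute node (illustrative only; the WWID comes from the log 
above, and the helper and its parsing are assumptions, not part of nova or 
os-brick):

    # illustrative only: report whether a multipath map is still open
    import subprocess

    WWID = "36000d3100578ec0005ba"  # taken from the log above

    def run(cmd):
        return subprocess.run(cmd, capture_output=True, text=True).stdout

    # show the map topology and its path devices
    print(run(["multipath", "-ll", WWID]))

    # an "Open count:" greater than 0 in the dmsetup output means the map is
    # still in use, so "multipath -f" fails exactly as in the traceback
    print(run(["dmsetup", "info", WWID]))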

[Yahoo-eng-team] [Bug 2063342] [NEW] slow db due to huge instance_actions tables

2024-04-24 Thread Max
Public bug reported:

Description
===
In a large environment the instance_actions and instance_actions_events tables 
for instances that have not been deleted can grow without bound.
For example, when doing many migrations/live-migrations the events add up: 
every migration produces one action with ~10 events.

Huge tables in combination with the api autojoins can produce slow
queries.

Keeping migration events for one month is usually enough.

The patch in the attachment proposes a clean_events option for the nova-
manage command.
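
A minimal sketch of the kind of cleanup such a clean_events option could 
perform (illustrative only, not the attached patch; the connection URL, the 
30-day retention window and the reliance on the standard created_at/action_id 
columns are assumptions):

    # illustrative only: purge instance action events older than a cutoff
    from datetime import datetime, timedelta, timezone
    from sqlalchemy import create_engine, text

    engine = create_engine("mysql+pymysql://nova:secret@dbhost/nova")  # hypothetical URL
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)  # keep one month of events

    with engine.begin() as conn:
        # delete events first; they reference instance_actions.id via action_id
        conn.execute(
            text("DELETE FROM instance_actions_events WHERE created_at < :cutoff"),
            {"cutoff": cutoff},
        )
        # then drop actions that no longer have any remaining events
        conn.execute(
            text("DELETE FROM instance_actions WHERE created_at < :cutoff "
                 "AND id NOT IN (SELECT action_id FROM instance_actions_events)"),
            {"cutoff": cutoff},
        )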


Steps to reproduce
==
Have many VMs and instance actions e.g. live-migrations 


Expected result
===
fast api & db queries

Actual result
=
slow api & overloaded db due to many entries in instance_actions and
instance_actions_events - queries normally hit max_statement_time 

Environment
===
nova zed
Regularly updating all hypervisors -> which means migrating VMs away 
-> instance-actions/-events tables grow

** Affects: nova
 Importance: Undecided
 Status: New

** Patch added: "0001-feat-nova-manage-db-instace-events-cleanup.patch"
   
https://bugs.launchpad.net/bugs/2063342/+attachment/5770061/+files/0001-feat-nova-manage-db-instace-events-cleanup.patch

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2063342

Title:
  slow db due to huge instance_actions tables

Status in OpenStack Compute (nova):
  New

Bug description:
  Description
  ===
  In a large environment the instance_actions and instance_actions_events 
tables for instances that have not been deleted can grow without bound.
  For example, when doing many migrations/live-migrations the events add up: 
every migration produces one action with ~10 events.

  Huge tables in combination with the api autojoins can produce slow
  queries.

  Keeping migration events for one month is usually enough.

  The patch in the attachment proposes a clean_events option for the
  nova-manage command.

  
  Steps to reproduce
  ==
  Have many VMs and instance actions e.g. live-migrations 

  
  Expected result
  ===
  fast api & db queries

  Actual result
  =
  slow api & overloaded db due to many entries in instance_actions and
  instance_actions_events - queries normally hit max_statement_time 

  Environment
  ===
  nova zed
  Regularly updating all hypervisors -> which means migrating VMs away 
  -> instance-actions/-events tables grow

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/2063342/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1798475] Re: Fullstack test test_ha_router_restart_agents_no_packet_lost failing

2024-04-24 Thread Lajos Katona
I am closing this for now; the test
test_ha_router_restart_agents_no_packet_lost is still marked as
unstable. Feel free to reopen it.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1798475

Title:
  Fullstack test test_ha_router_restart_agents_no_packet_lost failing

Status in neutron:
  Won't Fix

Bug description:
  Found at least 4 times recently:

  
http://logs.openstack.org/97/602497/5/gate/neutron-fullstack/b8ba2f9/logs/testr_results.html.gz
  
http://logs.openstack.org/90/610190/2/gate/neutron-fullstack/1f633ed/logs/testr_results.html.gz
  
http://logs.openstack.org/52/608052/1/gate/neutron-fullstack/6d36706/logs/testr_results.html.gz
  
http://logs.openstack.org/48/609748/1/gate/neutron-fullstack/f74a133/logs/testr_results.html.gz

  
  It looks like sometimes during an L3 agent restart some packet loss is 
observed, and that causes the failure. We need to investigate that.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1798475/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1775220] Re: Unit test neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase. test_get_objects_queries_constant fails often

2024-04-24 Thread Lajos Katona
The test 
(neutron.tests.unit.objects.test_base.BaseDbObjectTestCase.test_get_objects_queries_constant)
 is still unstable, but my query hasn't found any failures of the test.
I am closing this now, but feel free to reopen it if you encounter this issue.

** Changed in: neutron
   Status: Confirmed => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1775220

Title:
  Unit test
  neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase.
  test_get_objects_queries_constant fails often

Status in neutron:
  Won't Fix

Bug description:
  For some time now we have had a fairly frequent issue with the unit test
  neutron.tests.unit.objects.test_ports.PortBindingLevelDbObjectTestCase
  .test_get_objects_queries_constant

  It happens also for periodic jobs. Examples of failures from last
  week:

  
http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-
  tox-py27-with-oslo-master/031dc64/testr_results.html.gz

  
http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-
  tox-py35-with-neutron-lib-master/4f4b599/testr_results.html.gz

  
http://logs.openstack.org/periodic/git.openstack.org/openstack/neutron/master/openstack-
  tox-py35-with-oslo-master/348faa8/testr_results.html.gz

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1775220/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1774463] Re: RFE: Add support for IPv6 on DVR Routers for the Fast-path exit

2024-04-24 Thread Lajos Katona
I am closing this bug now due to long inactivity; please open it again if you
wish to work on it or if you see the issue in your environment.

** Changed in: neutron
   Status: In Progress => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1774463

Title:
  RFE: Add support for IPv6 on DVR Routers for the Fast-path exit

Status in neutron:
  Won't Fix

Bug description:
  This RFE is to add support for IPv6 on DVR routers for the Fast-Path-Exit.
  Today DVR supports Fast-Path-Exit through the FIP namespace, but the FIP 
namespace does not support IPv6 addresses for the link-local address, and we 
also don't have any RA proxy enabled in the FIP namespace.
  So this RFE should address those issues.

  1. Update the link-local address for 'rfp' and 'fpr' ports to support both 
IPv4 and IPv6.
  2. Enable the RA proxy in the FIP namespace and also assign an IPv6 address 
to the FIP gateway port.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1774463/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1744402] Re: fullstack security groups test fails because ncat process don't starts

2024-04-24 Thread Lajos Katona
Since https://review.opendev.org/c/openstack/neutron/+/830374, the fullstack
unstable decorator has been removed from test_security_groups
(that is in ~unmaintained/yoga at least, since the tag was removed).

** Changed in: neutron
   Status: Incomplete => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1744402

Title:
  fullstack security groups test fails because ncat process don't starts

Status in neutron:
  Fix Released

Bug description:
  Sometimes the fullstack test
  neutron.tests.fullstack.test_securitygroup.TestSecurityGroupsSameNetwork
  fails because the "ncat" process doesn't start properly:

  Captured traceback:
  ~~~
  Traceback (most recent call last):
File "neutron/tests/base.py", line 132, in func
  return f(self, *args, **kwargs)
File "neutron/tests/fullstack/test_securitygroup.py", line 163, in 
test_securitygroup
  net_helpers.NetcatTester.TCP)
File "neutron/tests/fullstack/test_securitygroup.py", line 68, in 
assert_connection
  self.assertTrue(netcat.test_connectivity())
File "neutron/tests/common/net_helpers.py", line 509, in 
test_connectivity
  self.client_process.writeline(testing_string)
File "neutron/tests/common/net_helpers.py", line 459, in client_process
  self.establish_connection()
File "neutron/tests/common/net_helpers.py", line 489, in 
establish_connection
  address=self.address)
File "neutron/tests/common/net_helpers.py", line 537, in 
_spawn_nc_in_namespace
  proc = RootHelperProcess(cmd, namespace=namespace)
File "neutron/tests/common/net_helpers.py", line 288, in __init__
  self._wait_for_child_process()
File "neutron/tests/common/net_helpers.py", line 321, in 
_wait_for_child_process
  "in %d seconds" % (self.cmd, timeout)))
File "neutron/common/utils.py", line 649, in wait_until_true
  raise exception
  RuntimeError: Process ['ncat', u'20.0.0.5', '', '-w', '20'] hasn't 
been spawned in 20 seconds
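
  The RuntimeError above comes from the polling helper timing out while waiting
  for the spawned ncat child process to appear. A minimal, self-contained sketch
  of that polling pattern (illustrative only; the function name mirrors the
  traceback but this is not the neutron code):

      # illustrative only: poll a condition and fail loudly on timeout
      import time

      def wait_until_true(predicate, timeout=20, sleep=0.5, exception=None):
          """Call predicate repeatedly until it returns True or timeout expires."""
          deadline = time.monotonic() + timeout
          while not predicate():
              if time.monotonic() > deadline:
                  raise exception or RuntimeError(
                      "condition not met in %d seconds" % timeout)
              time.sleep(sleep)

      # the fullstack helper passes a predicate that looks for the ncat child
      # process spawned through the root helper; when ncat never starts, the
      # predicate stays False and the resulting RuntimeError is what the test
      # reports above
      wait_until_true(lambda: True)  # trivially true; the real predicate checks the child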

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1744402/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1675910] Re: segment event transaction semantics are wrong

2024-04-24 Thread Lajos Katona
I am closing this now: as I understand it, more things work if we keep
_delete_segments_for_network in PRECOMMIT_DELETE (the revert mentioned
by Ihar lists some, see:
https://review.opendev.org/c/openstack/neutron/+/475955 )

** Changed in: neutron
   Status: Confirmed => Invalid

** Changed in: neutron
   Status: Invalid => Won't Fix

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1675910

Title:
  segment event transaction semantics are wrong

Status in neutron:
  Won't Fix

Bug description:
  _delete_segments_for_network is currently being called inside of a
  transaction, which results in all of the BEFORE/PRECOMMIT/AFTER events
  for the segments themselves being inside of a transaction. This makes
  them all effectively PRECOMMIT in the database lifecycle which
  violates the semantics we've assigned to them.
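
  A minimal sketch of why firing AFTER_* notifications while the transaction is
  still open makes them behave like PRECOMMIT ones (simplified; this is not the
  neutron callback registry, just the ordering problem):

      # illustrative only: callback placement relative to a DB transaction
      import contextlib

      @contextlib.contextmanager
      def transaction(state):
          state["in_txn"] = True
          try:
              yield
              state["committed"] = True   # commit happens when the block exits
          finally:
              state["in_txn"] = False

      def notify(event, state):
          print(event, "- committed:", state.get("committed", False))

      state = {}
      with transaction(state):
          notify("BEFORE_DELETE", state)     # fine: before the DB work
          notify("PRECOMMIT_DELETE", state)  # fine: meant to run inside the txn
          notify("AFTER_DELETE", state)      # wrong: the commit is not visible yet
      notify("AFTER_DELETE", state)          # correct placement: after the commit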

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1675910/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp


[Yahoo-eng-team] [Bug 1542491] Re: Scheduler update_aggregates race causes incorrect aggregate information

2024-04-24 Thread sean mooney
** Changed in: nova
   Status: Confirmed => Opinion

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1542491

Title:
  Scheduler update_aggregates race causes incorrect aggregate
  information

Status in OpenStack Compute (nova):
  Opinion
Status in Ubuntu:
  Invalid

Bug description:
  It appears that if nova-api receives simultaneous requests to add a
  server to a host aggregate, then a race occurs that can lead to nova-
  scheduler having incorrect aggregate information in memory.

  One observed effect of this is that sometimes nova-scheduler will
  think a smaller number of hosts are a member of the aggregate than is
  in the nova database and will filter out a host that should not be
  filtered.

  Restarting nova-scheduler fixes the issue, as it reloads the aggregate
  information on startup.
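
  A minimal sketch of the kind of read-then-notify race that can leave the
  scheduler's in-memory view smaller than the database (illustrative only; the
  names and the sleep are assumptions, not the nova code):

      # illustrative only: two concurrent "add host" requests clobber each
      # other's view, so the scheduler keeps only the last snapshot it saw
      import threading
      import time

      db_hosts = set()          # authoritative view (the nova database)
      scheduler_hosts = set()   # scheduler's in-memory copy

      def api_add_host(host):
          snapshot = set(db_hosts)   # each request reads the aggregate first
          time.sleep(0.01)           # widen the race window
          snapshot.add(host)
          db_hosts.update(snapshot)          # the database ends up with every host
          scheduler_hosts.clear()
          scheduler_hosts.update(snapshot)   # the scheduler keeps only this snapshot

      threads = [threading.Thread(target=api_add_host, args=(h,))
                 for h in ("oshv0", "oshv1", "oshv2")]
      for t in threads:
          t.start()
      for t in threads:
          t.join()
      print("db:", sorted(db_hosts), "scheduler:", sorted(scheduler_hosts))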

  Nova package versions: 1:2015.1.2-0ubuntu2~cloud0

  Reproduce steps:

  Create a new os-aggregate and then populate an os-aggregate with
  simultaneous API POSTs, note timestamps:

  2016-02-04 20:17:08.538 13648 INFO nova.osapi_compute.wsgi.server 
[req-d07a006e-134a-46d8-9815-6becec5b185c 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] 10.120.13.3 "POST 
/v2.1/326d453c2bd440b4a7160489b632d0a8/os-aggregates HTTP/1.1" status: 200 len: 
439 time: 0.1865470
  2016-02-04 20:17:09.204 13648 INFO nova.osapi_compute.wsgi.server 
[req-a0402297-9337-46d6-96d2-066e230e45e1 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] 10.120.13.2 "POST 
/v2.1/326d453c2bd440b4a7160489b632d0a8/os-aggregates/1/action HTTP/1.1" status: 
200 len: 506 time: 0.2995598
  2016-02-04 20:17:09.243 13648 INFO nova.osapi_compute.wsgi.server 
[req-0f543525-c34e-418a-91a9-894d714ee95b 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] 10.120.13.2 "POST 
/v2.1/326d453c2bd440b4a7160489b632d0a8/os-aggregates/1/action HTTP/1.1" status: 
200 len: 519 time: 0.3140590
  2016-02-04 20:17:09.273 13649 INFO nova.osapi_compute.wsgi.server 
[req-2f8d80b0-726f-4126-a8ab-a2eae3f1a385 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] 10.120.13.2 "POST 
/v2.1/326d453c2bd440b4a7160489b632d0a8/os-aggregates/1/action HTTP/1.1" status: 
200 len: 506 time: 0.3759601
  2016-02-04 20:17:09.275 13649 INFO nova.osapi_compute.wsgi.server 
[req-80ab6c86-e521-4bf0-ab67-4de9d0eccdd3 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] 10.120.13.1 "POST 
/v2.1/326d453c2bd440b4a7160489b632d0a8/os-aggregates/1/action HTTP/1.1" status: 
200 len: 506 time: 0.3433032

  Schedule a VM

  Expected Result:
  nova-scheduler Availability Zone filter returns all members of the aggregate

  Actual Result:
  nova-scheduler believes there is only one hypervisor in the aggregate. The 
number will vary as it is a race:

  2016-02-05 07:48:04.411 13600 DEBUG nova.filters 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Starting with 4 host(s) 
get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:70
  2016-02-05 07:48:04.411 13600 DEBUG nova.filters 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Filter RetryFilter returned 4 host(s) 
get_filtered_objects /usr/lib/python2.7/dist-packages/nova/filters.py:84
  2016-02-05 07:48:04.412 13600 DEBUG 
nova.scheduler.filters.availability_zone_filter 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Availability Zone 'temp' requested. 
(oshv0, oshv0) ram:122691 disk:13404160 io_ops:0 instances:0 has AZs: nova 
host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/availability_zone_filter.py:62
  2016-02-05 07:48:04.412 13600 DEBUG 
nova.scheduler.filters.availability_zone_filter 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Availability Zone 'temp' requested. 
(oshv2, oshv2) ram:122691 disk:13403136 io_ops:0 instances:0 has AZs: nova 
host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/availability_zone_filter.py:62
  2016-02-05 07:48:04.413 13600 DEBUG 
nova.scheduler.filters.availability_zone_filter 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Availability Zone 'temp' requested. 
(oshv1, oshv1) ram:122691 disk:13404160 io_ops:0 instances:0 has AZs: nova 
host_passes 
/usr/lib/python2.7/dist-packages/nova/scheduler/filters/availability_zone_filter.py:62
  2016-02-05 07:48:04.413 13600 DEBUG nova.filters 
[req-c24338b5-a3b8-4864-8140-04ea6fbcf68f 41812fc01c6549ac8ed15c6dab05c670 
326d453c2bd440b4a7160489b632d0a8 - - -] Filter AvailabilityZoneFilter returned 
1 host(s) get_filtered_objects 
/usr/lib/python2.7/dist-pack

[Yahoo-eng-team] [Bug 2063321] [NEW] CADF initiator name / username field is inconsistent

2024-04-24 Thread Jake Yip
Public bug reported:

The CADF notifications generated by keystone and keystonemiddleware are
inconsistent. Specifically, the field for the initiator's username is
`initiator.username` in keystone and `initiator.name` in
keystonemiddleware.

It would be good for both keystone and keystonemiddleware to use the
same field, so we can grep for the relevant data consistently.

More information:

In Change I833e6e0d7792acf49f816050ad7a63e8ea4f702f [1], the username of the
initiator was added to the `initiator.username` field. However, this is
inconsistent with keystonemiddleware, which calls it
`initiator.name` [2]. It is also different from the spec, which states
it should be `initiator:name` [3].

[1] https://review.opendev.org/c/openstack/keystone/+/699013

[2]
https://opendev.org/openstack/keystonemiddleware/src/branch/stable/2023.2/keystonemiddleware/audit/_api.py#L290

[3]
https://www.dmtf.org/sites/default/files/standards/documents/DSP2038_1.1.0.pdf
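
Until the field names are aligned, a minimal sketch of a consumer-side helper
that reads the initiator's username from either field (illustrative only; the
function name and the event dict shape are assumptions):

    # illustrative only: normalize the initiator username across keystone and
    # keystonemiddleware CADF events
    def initiator_username(event):
        initiator = event.get("initiator", {})
        # keystone emits initiator.username, keystonemiddleware emits initiator.name
        return initiator.get("username") or initiator.get("name")

    keystone_event = {"initiator": {"username": "alice"}}
    middleware_event = {"initiator": {"name": "alice"}}
    assert initiator_username(keystone_event) == initiator_username(middleware_event)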

** Affects: keystone
 Importance: Undecided
 Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/2063321

Title:
  CADF initiator name / username field is inconsistent

Status in OpenStack Identity (keystone):
  New

Bug description:
  The CADF notifications generated by keystone and keystonemiddleware are
  inconsistent. Specifically, the field for the initiator's username is
  `initiator.username` in keystone and `initiator.name` in
  keystonemiddleware.

  It would be good for both keystone and keystonemiddleware to use the
  same field, so we can grep for the relevant data consistently.

  More information:

  In Change I833e6e0d7792acf49f816050ad7a63e8ea4f702f [1], the username of
  the initiator was added to the `initiator.username` field. However,
  this is inconsistent with keystonemiddleware, which calls it
  `initiator.name` [2]. It is also different from the spec, which states
  it should be `initiator:name` [3].

  [1] https://review.opendev.org/c/openstack/keystone/+/699013

  [2]
  
https://opendev.org/openstack/keystonemiddleware/src/branch/stable/2023.2/keystonemiddleware/audit/_api.py#L290

  [3]
  https://www.dmtf.org/sites/default/files/standards/documents/DSP2038_1.1.0.pdf

To manage notifications about this bug go to:
https://bugs.launchpad.net/keystone/+bug/2063321/+subscriptions


-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp