[Yahoo-eng-team] [Bug 2026284] Re: virtio-net-tx-queue-size reflects in nova conf but not for the vm even after a hard reboot
This does not appear to be a charm issue; it looks like a potential nova issue. I can confirm that setting rx_queue_size and tx_queue_size results in the charm updating the nova.conf file, but the hard-rebooted guest only picks up the rx_queue_size, not the tx_queue_size.

** Also affects: nova
   Importance: Undecided
       Status: New

** Changed in: nova
       Status: New => Incomplete

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2026284

Title:
  virtio-net-tx-queue-size reflects in nova conf but not for the vm
  even after a hard reboot

Status in OpenStack Nova Compute Charm:
  New
Status in OpenStack Compute (nova):
  Incomplete

Bug description:
  After modifying the nova compute config options,
  - virtio-net-rx-queue-size=512
  - virtio-net-tx-queue-size=512

  I hard rebooted my VM and spawned a new VM, and what I see (on both
  of them) is:

  - virsh xml
  ```
  # virsh dumpxml 2 | grep -i queue
  ```
  - nova.conf
  ```
  # grep -i queue /etc/nova/nova.conf
  tx_queue_size = 512
  rx_queue_size = 512
  ```
  - inside the vm
  ```
  root@jammy-135110:~# ethtool -g ens2
  Ring parameters for ens2:
  Pre-set maximums:
  RX: 512
  RX Mini: n/a
  RX Jumbo: n/a
  TX: 256
  Current hardware settings:
  RX: 512
  RX Mini: n/a
  RX Jumbo: n/a
  TX: 256
  ```

  The RX config gets propagated, but the TX config does not.
  Please let me know if any more information is needed.
--
env:
  - focal ussuri
  - nova-compute:
      charm: nova-compute
      channel: ussuri/stable
      revision: 669
  - this is a freshly deployed openstack on vms (not on baremetal)
  - libvirt: 6.0.0-0ubuntu8.16
  - nova-compute-libvirt: 21.2.4-0ubuntu2.5
  - qemu: 4.2-3ubuntu6.27

To manage notifications about this bug go to:
https://bugs.launchpad.net/charm-nova-compute/+bug/2026284/+subscriptions

--
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
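[Editorial note on the report above: when the queue sizes are applied, they show up as `rx_queue_size`/`tx_queue_size` attributes on the interface's `<driver>` element in the libvirt domain XML, which is why the empty `virsh dumpxml | grep -i queue` output is significant. A minimal sketch of checking a domain XML for those attributes follows; the sample XML is a hypothetical example of a correctly configured interface, not output from the reporter's system.]

```python
# Sketch: inspect a libvirt domain XML for virtio queue-size settings.
# The sample XML below is hypothetical and shows what a correctly
# configured interface would look like; in the bug report above,
# `virsh dumpxml` printed no queue attributes at all.
import xml.etree.ElementTree as ET

SAMPLE_DOMAIN_XML = """
<domain>
  <devices>
    <interface type='bridge'>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='512' tx_queue_size='512'/>
    </interface>
  </devices>
</domain>
"""

def queue_sizes(domain_xml):
    """Return a (rx, tx) tuple per interface; None when an attribute is unset."""
    root = ET.fromstring(domain_xml)
    sizes = []
    for iface in root.iter('interface'):
        driver = iface.find('driver')
        rx = driver.get('rx_queue_size') if driver is not None else None
        tx = driver.get('tx_queue_size') if driver is not None else None
        sizes.append((rx, tx))
    return sizes

print(queue_sizes(SAMPLE_DOMAIN_XML))  # [('512', '512')]
```

On a real host the input would come from `virsh dumpxml <domain>`; a `(None, None)` entry for an interface matches the symptom described in this bug.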
[Yahoo-eng-team] [Bug 2026547] [NEW] [RBAC] Remove "AddressScope" shared field
Public bug reported:

Remove the "AddressScope" shared field. An address scope is shared via
RBAC; this field is no longer needed.

** Affects: neutron
     Importance: Low
     Assignee: Rodolfo Alonso (rodolfo-alonso-hernandez)
         Status: New

** Changed in: neutron
     Assignee: (unassigned) => Rodolfo Alonso (rodolfo-alonso-hernandez)

** Changed in: neutron
   Importance: Undecided => Low

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2026547

Title:
  [RBAC] Remove "AddressScope" shared field

Status in neutron:
  New

Bug description:
  Remove the "AddressScope" shared field. An address scope is shared
  via RBAC; this field is no longer needed.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2026547/+subscriptions
[Yahoo-eng-team] [Bug 2026489] [NEW] [RFE][quota] Implement "router_route" quota using the Neutron quota engine, replacing "max_routes" config option
Public bug reported:

Replace the "max_routes" static configuration option with a new
"router_routes" quota, defined per project, using the Neutron quota
engine.

** Affects: neutron
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2026489

Title:
  [RFE][quota] Implement "router_route" quota using the Neutron quota
  engine, replacing "max_routes" config option

Status in neutron:
  New

Bug description:
  Replace the "max_routes" static configuration option with a new
  "router_routes" quota, defined per project, using the Neutron quota
  engine.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2026489/+subscriptions
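[Editorial note on the RFE above: the practical difference is that a static config option enforces one limit for every project, while a quota-engine resource can be set per project and overridden at runtime. A conceptual sketch of that difference follows; this is not Neutron code, and all names and values are hypothetical.]

```python
# Conceptual sketch (not Neutron code): a static option vs. a
# per-project quota. All names and values are hypothetical.

MAX_ROUTES = 30  # today: one static config option applied to everyone

# proposed: per-project limits managed by the quota engine,
# falling back to a default when no override exists
quotas = {"project-a": 50, "project-b": 10}

def routes_allowed(project_id, requested, default=MAX_ROUTES):
    """Per-project check replacing the single MAX_ROUTES comparison."""
    limit = quotas.get(project_id, default)
    return requested <= limit

print(routes_allowed("project-a", 40))  # True: project-a's quota is 50
print(routes_allowed("project-b", 40))  # False: project-b's quota is 10
print(routes_allowed("project-c", 20))  # True: falls back to the default
```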
[Yahoo-eng-team] [Bug 2025813] Re: test_rebuild_volume_backed_server failing 100% on nova-lvm job
This is a bug in devstack-plugin-ceph-multinode-tempest-py3; we need to backport https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/882987

** Also affects: devstack-plugin-ceph
   Importance: Undecided
       Status: New

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/2025813

Title:
  test_rebuild_volume_backed_server failing 100% on nova-lvm job

Status in devstack-plugin-ceph:
  New
Status in OpenStack Compute (nova):
  In Progress
Status in OpenStack Compute (nova) antelope series:
  In Progress
Status in OpenStack Compute (nova) yoga series:
  Triaged
Status in OpenStack Compute (nova) zed series:
  Triaged

Bug description:
  After the tempest patch [1] was merged, the nova-lvm job started to
  fail with the following error in test_rebuild_volume_backed_server:

    Traceback (most recent call last):
      File "/opt/stack/tempest/tempest/common/utils/__init__.py", line 70, in wrapper
        return f(*func_args, **func_kwargs)
      File "/opt/stack/tempest/tempest/api/compute/servers/test_server_actions.py", line 868, in test_rebuild_volume_backed_server
        self.get_server_ip(server, validation_resources),
      File "/opt/stack/tempest/tempest/api/compute/base.py", line 519, in get_server_ip
        return compute.get_server_ip(
      File "/opt/stack/tempest/tempest/common/compute.py", line 76, in get_server_ip
        raise lib_exc.InvalidParam(invalid_param=msg)
    tempest.lib.exceptions.InvalidParam: Invalid Parameter passed: When validation.connect_method equals floating, validation_resources cannot be None

  As discussed on IRC with Sean [2], SSH validation is now mandatory,
  but it is disabled in the job config [3].

  [1] https://review.opendev.org/c/openstack/tempest/+/831018
  [2] https://meetings.opendev.org/irclogs/%23openstack-nova/%23openstack-nova.2023-07-04.log.html#t2023-07-04T15:33:38
  [3] https://opendev.org/openstack/nova/src/commit/4b454febf73cdd7b5be0a2dad272c1d7685fac9e/.zuul.yaml#L266-L267

To manage notifications about this bug go to:
https://bugs.launchpad.net/devstack-plugin-ceph/+bug/2025813/+subscriptions
[Yahoo-eng-team] [Bug 2026361] [NEW] API traceback when creating token with body from v2 api on the v3 endpoint
Public bug reported:

When requesting a Keystone token, a user may make a mistake and send
the API v2 body to an API v3 endpoint. This results in a traceback that
shows the user's username and password in the logs.

Keystone logs:

  ERROR keystone.server.flask.application During handling of the above exception, another exception occurred:
  ERROR keystone.server.flask.application
  ERROR keystone.server.flask.application Traceback (most recent call last):
  ERROR keystone.server.flask.application   File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 1823, in full_dispatch_request
  ERROR keystone.server.flask.application     rv = self.dispatch_request()
  ERROR keystone.server.flask.application   File "/usr/local/lib/python3.10/dist-packages/flask/app.py", line 1799, in dispatch_request
  ERROR keystone.server.flask.application     return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  ERROR keystone.server.flask.application   File "/usr/local/lib/python3.10/dist-packages/flask_restful/__init__.py", line 467, in wrapper
  ERROR keystone.server.flask.application     resp = resource(*args, **kwargs)
  ERROR keystone.server.flask.application   File "/usr/local/lib/python3.10/dist-packages/flask/views.py", line 107, in view
  ERROR keystone.server.flask.application     return current_app.ensure_sync(self.dispatch_request)(**kwargs)
  ERROR keystone.server.flask.application   File "/usr/local/lib/python3.10/dist-packages/flask_restful/__init__.py", line 582, in dispatch_request
  ERROR keystone.server.flask.application     resp = meth(*args, **kwargs)
  ERROR keystone.server.flask.application   File "/opt/stack/keystone/keystone/server/flask/common.py", line 1064, in wrapper
  ERROR keystone.server.flask.application     return f(*args, **kwargs)
  ERROR keystone.server.flask.application   File "/opt/stack/keystone/keystone/api/auth.py", line 314, in post
  ERROR keystone.server.flask.application     auth_schema.validate_issue_token_auth(auth_data)
  ERROR keystone.server.flask.application   File "/opt/stack/keystone/keystone/auth/schema.py", line 113, in validate_issue_token_auth
  ERROR keystone.server.flask.application     validation.lazy_validate(token_issue, auth)
  ERROR keystone.server.flask.application   File "/opt/stack/keystone/keystone/common/validation/__init__.py", line 30, in lazy_validate
  ERROR keystone.server.flask.application     schema_validator.validate(resource_to_validate)
  ERROR keystone.server.flask.application   File "/opt/stack/keystone/keystone/common/validation/validators.py", line 89, in validate
  ERROR keystone.server.flask.application     raise exception.SchemaValidationError(detail=detail)
  ERROR keystone.server.flask.application keystone.exception.SchemaValidationError: 'identity' is a required property
  Jul 07 09:35:00 devstack devstack@keystone.service[60249]: ERROR keystone.server.flask.application On instance:
  Jul 07 09:35:00 devstack devstack@keystone.service[60249]: ERROR keystone.server.flask.application {'passwordCredentials': {'password': 'password', 'username': 'admin'}}
  Jul 07 09:35:00 devstack devstack@keystone.service[60249]: ERROR keystone.server.flask.application
  Jul 07 09:35:00 devstack devstack@keystone.service[60249]: [pid: 60249|app: 0|req: 125/978] 57.128.26.217 () {58 vars in 979 bytes} [Fri Jul 7 09:35:00 2023] POST /identity/v3/auth/tokens => generated 3467 bytes in 14 msecs (HTTP/1.1 400) 5 headers in 187 bytes (1 switches on core 0)

Steps to reproduce:

  REQ: stack@devstack:~/devstack$ curl -i http://57.128.26.217/identity/v3/auth/tokens -X POST -H "Content-Type: application/json" -H "User-Agent: python-keystoneclient" -d'{"auth":{"passwordCredentials":{"username": "admin", "password": "password"}}}'

  HTTP/1.1 400 BAD REQUEST
  Date: Fri, 07 Jul 2023 09:35:00 GMT
  Server: Apache/2.4.52 (Ubuntu)
  Content-Type: application/json
  Content-Length: 3467
  Vary: X-Auth-Token
  x-openstack-request-id: req-39da835d-6c25-4dfc-9fbc-8326311c44bf
  Connection: close

  {"error":{"code":400,"message":"'identity' is a required property\n\nFailed
  validating 'required' in schema:\n
  {'properties': {'identity': {'properties': {'methods': {'items': {'type': 'string'}, 'type': 'array'},
  'password': {'properties': {'user': {'properties': {'domain': {'properties': {'id': {'type': 'string'}, 'name': {'type': 'string'}}, 'type': 'object'},
  'id': {'type': 'string'}, 'name': {'type': 'string'}, 'password': {'type': 'string'}}, 'type': 'object'}}, 'type': 'object'},
  'token': {'properties': {'id': {'type': 'string'}}, 'required': ['id'], 'type': 'object'}},
  'required': ['methods'], 'type': 'object'},
  'scope': {'properties': {'OS-TRUST:trust': {'properties': {'id': {'type': 'string'}}, 'type': 'object'},
  'domain': {'properties': {'id': {'type': 'string'}, 'name': {'type': 'string'}}, 'type': 'object'},
  'project': {'properties': {'domain': {'properties': {'id':
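[Editorial note on the report above: the rejected request used the v2-style `passwordCredentials` body; the v3 schema instead requires a top-level `identity` object under `auth`. A minimal sketch of building the v3 password-auth body follows, reusing the placeholder credentials from the report.]

```python
# Sketch: the v2-style body in the report lacks the top-level 'identity'
# key the v3 schema requires. A v3 password-auth request body for
# POST /v3/auth/tokens looks like this (credentials are the placeholder
# values from the report above).
import json

def v3_password_auth_body(username, password, user_domain_id="default"):
    """Build a Keystone v3 password-authentication request body."""
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"id": user_domain_id},
                        "password": password,
                    }
                },
            }
        }
    }

body = v3_password_auth_body("admin", "password")
assert "identity" in body["auth"]  # the property the v3 schema demands
print(json.dumps(body, indent=2))
```

Sending this body with the same curl command would avoid the SchemaValidationError; the bug itself is that the invalid v2-style body gets echoed, password included, into the Keystone log.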
[Yahoo-eng-team] [Bug 2025740] Re: [FT] ``BaseOVSTestCase`` minimum bandwidth tests cannot be executed in parallel
** Changed in: neutron
       Status: In Progress => Fix Released

--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/2025740

Title:
  [FT] ``BaseOVSTestCase`` minimum bandwidth tests cannot be executed
  in parallel

Status in neutron:
  Fix Released

Bug description:
  There is a small chance that the minimum bandwidth tests interfere
  with each other, as seen in some CI errors [1][2]. In this case, the
  "test_update_minimum_bandwidth_queue" test read the OVS database
  modified by the "test_update_minimum_bandwidth_queue_no_qos_no_queue"
  test.

  [1] https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_709/886992/1/check/neutron-functional-with-uwsgi/70996ce/testr_results.html
  [2] https://paste.opendev.org/show/brqSrOgUepF3bLYYaNmS/

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/2025740/+subscriptions