[Yahoo-eng-team] [Bug 1956788] Re: system_cfg not read on Oracle datasource
CPC has worked on removing /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg from Oracle images; this is complete starting with images with build >= 20221123.

I just used our daily images to create 3 custom images with:
- Jammy: 20221214
- Focal: 20221214
- Bionic: 20221205

and launched J/F/B bare-metal instances. I can confirm this doesn't cause any issues and networking works well on all 3 releases: https://pastebin.ubuntu.com/p/nX7dM7mQSQ/

I believe the next step is to proceed with the change proposed by this LP. As a reminder, as part of a post-install script we'll need to remove /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg if it exists, so that we don't break networking for existing instances when they upgrade cloud-init.

Marking back as confirmed, since we removed /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg from the images, which was what prevented our work here. (Marking as Confirmed rather than Triaged, as I can't mark as Triaged.)

** Changed in: cloud-init
   Status: Invalid => Confirmed

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1956788

Title:
  system_cfg not read on Oracle datasource

Status in cloud-init:
  Confirmed

Bug description:
  In https://github.com/canonical/cloud-init/commit/2c52e6e88b19f5db8d55eb7280ee27703e05d75f, the order of reading network config was changed for Oracle because initramfs config needs to take lower precedence than the datasource. However, this also bumped system_cfg to a lower precedence than ds, which means that any network configuration specified in /etc/cloud will not be applied. system_cfg should instead be moved above ds so that network configuration in /etc/cloud takes precedence.
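Since the precedence ordering is the crux of this report, here is a standalone sketch of how such source ordering plays out. The names are modeled on cloud-init's NetworkConfigSource, but this is an illustration, not the actual cloud-init implementation:

```python
# Standalone sketch of network-config source precedence; names are modeled
# on cloud-init's NetworkConfigSource but this is NOT cloud-init code.
from enum import Enum

class NetworkConfigSource(Enum):
    CMDLINE = "cmdline"
    INITRAMFS = "initramfs"
    SYSTEM_CFG = "system_cfg"   # network config under /etc/cloud
    DS = "ds"                   # config supplied by the datasource
    FALLBACK = "fallback"

def pick_network_config(order, available):
    """Return (source, config) from the first source in `order` that has one."""
    for source in order:
        cfg = available.get(source)
        if cfg is not None:
            return source, cfg
    return NetworkConfigSource.FALLBACK, None

# The ordering the bug complains about: ds sits above system_cfg, so
# /etc/cloud network config is ignored whenever the datasource has its own.
buggy_order = [NetworkConfigSource.CMDLINE, NetworkConfigSource.DS,
               NetworkConfigSource.SYSTEM_CFG, NetworkConfigSource.INITRAMFS]
# The ordering the report asks for: system_cfg above ds, initramfs still last.
proposed_order = [NetworkConfigSource.CMDLINE, NetworkConfigSource.SYSTEM_CFG,
                  NetworkConfigSource.DS, NetworkConfigSource.INITRAMFS]

available = {NetworkConfigSource.SYSTEM_CFG: {"version": 2, "ethernets": {}},
             NetworkConfigSource.DS: {"version": 1, "config": []}}

src, _ = pick_network_config(buggy_order, available)
print(src)   # ds wins: the /etc/cloud config is never applied
src, _ = pick_network_config(proposed_order, available)
print(src)   # system_cfg wins, which is what the report requests
```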
To manage notifications about this bug go to: https://bugs.launchpad.net/cloud-init/+bug/1956788/+subscriptions -- Mailing list: https://launchpad.net/~yahoo-eng-team Post to : yahoo-eng-team@lists.launchpad.net Unsubscribe : https://launchpad.net/~yahoo-eng-team More help : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1990854] Re: oslo_limit section not clear
Reviewed: https://review.opendev.org/c/openstack/glance/+/859359
Committed: https://opendev.org/openstack/glance/commit/1409cc94b77be1d771801ad783c9690b3b2fca20
Submitter: "Zuul (22348)"
Branch: master

commit 1409cc94b77be1d771801ad783c9690b3b2fca20
Author: Cyril Roelandt
Date: Tue Sep 27 02:46:24 2022 +0200

    Quota configuration: improve example oslo_limit section

    This patch:
    - uses "glance" instead of "MY_SERVICE";
    - uses the already existing public glance endpoint id rather than "ENDPOINT_ID";
    - uses the already existing "GLANCE_PASS" rather than introducing "MY_PASSWORD".

    Closes-Bug: #1990854
    Change-Id: I8f5214b879818ec5f1a62d369274ad0d67396b9b

** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1990854

Title:
  oslo_limit section not clear

Status in Glance:
  Fix Released

Bug description:
  In https://docs.openstack.org/glance/yoga/install/install-ubuntu.html it is not clear whether oslo_limit is required, and the example does not explain concepts such as MY_SERVICE and ENDPOINT_ID adequately. I have read the oslo.limits docs and I am still no wiser.

  This bug tracker is for errors with the documentation; use the following as a template and remove or add fields as you see fit. Convert [ ] into [x] to check boxes:

  - [ ] This doc is inaccurate in this way: __
  - [x] This is a doc addition request.
  - [ ] I have a fix to the document that I can paste below including example: input and output.
  If you have a troubleshooting or support issue, use the following resources:

  - The mailing list: https://lists.openstack.org
  - IRC: 'openstack' channel on OFTC

  ---
  Release: 24.1.1.dev4 on 2021-06-03 12:23:49
  SHA: f3ae67fa1925375fef6af44e73db201db4fe359c
  Source: https://opendev.org/openstack/glance/src/doc/source/install/install-ubuntu.rst
  URL: https://docs.openstack.org/glance/yoga/install/install-ubuntu.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1990854/+subscriptions
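For anyone landing here with the same confusion, this is roughly the shape of the improved example the fix documents (a sketch based on the glance install-guide pattern; the endpoint id and password below are placeholders you must substitute, and the exact option set can vary by release):

```ini
[oslo_limit]
auth_url = http://controller:5000
auth_type = password
user_domain_id = default
username = glance
system_scope = all
password = GLANCE_PASS
# Placeholder: the id of the already-registered public glance endpoint,
# e.g. from `openstack endpoint list --service glance`
endpoint_id = 340be3625e9b4239a6415d034e98aace
region_name = RegionOne
```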
[Yahoo-eng-team] [Bug 1999813] [NEW] [ovn-octavia-provider] when a HM is created/deleted the listener remains in PENDING_UPDATE
Public bug reported:

When a HM is created/deleted on a pool of a fully populated LB, the provisioning status of the listener that owns the pool is set to PENDING_UPDATE. The operation itself completes correctly, but the listener remains stuck in that status.

** Affects: neutron
   Importance: Undecided
   Assignee: Fernando Royo (froyoredhat)
   Status: New

** Tags: ovn-octavia-provider

** Changed in: neutron
   Assignee: (unassigned) => Fernando Royo (froyoredhat)

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999813

Title:
  [ovn-octavia-provider] when a HM is created/deleted the listener remains in PENDING_UPDATE

Status in neutron:
  New

Bug description:
  When a HM is created/deleted on a pool of a fully populated LB, the provisioning status of the listener that owns the pool is set to PENDING_UPDATE. The operation itself completes correctly, but the listener remains stuck in that status.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999813/+subscriptions
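For context on where a fix would likely land: provider drivers report operation results back to Octavia as a status payload with one entry per affected object. The sketch below is a standalone illustration of that payload shape (modeled on the octavia-lib status dict, with made-up ids); the symptom described above is consistent with the HM create/delete path omitting the "listeners" entry, which is what lets the listener leave PENDING_UPDATE:

```python
# Standalone sketch of an Octavia provider-driver status update; the dict
# shape is modeled on octavia-lib's update_loadbalancer_status payload and
# all ids here are made up for illustration.
def hm_status_update(hm_id, pool_id, listener_id, lb_id):
    return {
        "healthmonitors": [{"id": hm_id, "provisioning_status": "ACTIVE"}],
        "pools": [{"id": pool_id, "provisioning_status": "ACTIVE"}],
        # Without this entry, the listener would stay in PENDING_UPDATE:
        "listeners": [{"id": listener_id, "provisioning_status": "ACTIVE"}],
        "loadbalancers": [{"id": lb_id, "provisioning_status": "ACTIVE"}],
    }

update = hm_status_update("hm-1", "pool-1", "listener-1", "lb-1")
for kind, entries in update.items():
    print(kind, entries[0]["provisioning_status"])
```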
[Yahoo-eng-team] [Bug 1999814] [NEW] Allow for specifying common baseline CPU model with disabled feature
Public bug reported:

Hello,

This is very similar to pad.lv/1852437 (and the related blueprint at https://blueprints.launchpad.net/nova/+spec/allow-disabling-cpu-flags), but there is a very different and important nuance.

A customer I'm working with has two classes of blades that they're trying to use. Their existing ones are Cascade Lake-based; they are presently using the Cascadelake-Server-noTSX CPU model via libvirt.cpu_model in nova.conf. Their new blades are Ice Lake-based, which is a newer processor that would typically also be able to run with the Cascade Lake feature set - except that these Ice Lake processors lack the MPX feature defined in the Cascadelake-Server-noTSX model.

The result of this is evident when I try to start nova on the new blades with the Ice Lake CPUs. Even if I specify the following in my nova.conf:

  [libvirt]
  cpu_mode = custom
  cpu_model = Cascadelake-Server-noTSX
  cpu_model_extra_flags = -mpx

that is not enough to allow Nova to start; it fails in the libvirt driver in the _check_cpu_compatibility function:

  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service Traceback (most recent call last):
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 771, in _check_cpu_compatibility
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service     self._compare_cpu(cpu, self._get_cpu_info(), None)
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 8817, in _compare_cpu
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service     raise exception.InvalidCPUInfo(reason=m % {'ret': ret, 'u': u})
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service nova.exception.InvalidCPUInfo: Unacceptable CPU info: CPU doesn't have compatibility.
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service 0
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service Refer to http://libvirt.org/html/libvirt-libvirt-host.html#virCPUCompareResult
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service During handling of the above exception, another exception occurred:
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service Traceback (most recent call last):
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service   File "/usr/lib/python3/dist-packages/oslo_service/service.py", line 810, in run_service
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service     service.start()
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service   File "/usr/lib/python3/dist-packages/nova/service.py", line 173, in start
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service     self.manager.init_host()
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 1404, in init_host
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service     self.driver.init_host(host=self.host)
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 743, in init_host
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service     self._check_cpu_compatibility()
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 777, in _check_cpu_compatibility
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service     raise exception.InvalidCPUInfo(msg)
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service nova.exception.InvalidCPUInfo: Configured CPU model: Cascadelake-Server-noTSX is not compatible with host CPU. Please correct your config and try again. Unacceptable CPU info: CPU doesn't have compatibility.
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service 0
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service Refer to http://libvirt.org/html/libvirt-libvirt-host.html#virCPUCompareResult
  2022-12-15 17:20:59.562 1836708 ERROR oslo_service.service

If I make a custom libvirt CPU map file which removes the "" feature and specify that as the cpu_model instead, I am able to make Nova start - so it does indeed seem to specifically be that single feature which is blocking me. However, editing the libvirt CPU mapping files is probably not the right way to fix this - hence why I'm filing this bug, for discussion of how to support cases like this. Currently the only "proper" way I'm aware of to work around this right now is to fall back to a Broadwell-based configuration which lacks the "mpx" feature to use as a common
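The underlying check is essentially set arithmetic over CPU feature flags. Here is a standalone sketch (with made-up, abbreviated feature lists, not the real libvirt CPU maps) of what the reporter is asking "cpu_model_extra_flags = -mpx" to do before the host comparison:

```python
# Toy model of baseline-CPU compatibility checking. Feature sets below are
# illustrative only; real CPU models carry far more flags.
CASCADELAKE_NOTSX = {"avx512f", "avx512cd", "mpx", "xsave", "aes"}
ICELAKE_HOST = {"avx512f", "avx512cd", "xsave", "aes", "avx512vbmi"}  # no mpx

def compatible(model_features, host_features, disabled_flags=()):
    """A model fits a host if every required feature (after removing
    explicitly disabled flags) is present on the host."""
    required = model_features - set(disabled_flags)
    return required <= host_features

print(compatible(CASCADELAKE_NOTSX, ICELAKE_HOST))           # False: host lacks mpx
print(compatible(CASCADELAKE_NOTSX, ICELAKE_HOST, ["mpx"]))  # True once mpx is dropped
```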
[Yahoo-eng-team] [Bug 1999803] [NEW] Libvirt fails to start VM with virtio related "unsupported configuration"
Public bug reported:

Libvirt fails to start any VM on a fresh OpenStack Zed install on Rocky 9, also freshly installed. The logs show the line below:

  [root@cs2 ~]# openstack server delete test; openstack server create --flavor "1CPU-512RAM-0Root-0Pad-0Swap-1RXTX" --image "cirros 0.4.0" --nic net-id=internal --security-group bf35c9bc-84c8-4f43-b1d2-a3fb04231245 --key-name nicolasm-gate test
  [root@cs2 ~]# tail -f /var/log/nova/nova.log
  libvirt.libvirtError: unsupported configuration: domain configuration does not support video model 'virtio'

I attached the full log, which also includes the machine configuration nova generated to start the VM.

I tried installing the packages "qemu-kvm-device-display-virtio-vga-7.0.0-13.el9.x86_64", "qemu-kvm-device-display-virtio-gpu-7.0.0-13.el9.x86_64", "qemu-kvm-device-display-virtio-gpu-gl-7.0.0-13.el9.x86_64" and "qemu-kvm-device-display-virtio-vga-gl-7.0.0-13.el9.x86_64"; nova/libvirt still fail with the same error, even after restarting the nova-compute and libvirtd services, or even the whole computer.

Trying to start a VM with virtio video directly on the host via virt-manager yields the same error:

  Unable to complete install: 'unsupported configuration: USB redirection is not supported by this version of QEMU'
  Traceback (most recent call last):
    File "/usr/share/virt-manager/virtManager/asyncjob.py", line 89, in cb_wrapper
      callback(asyncjob, *args, **kwargs)
    File "/usr/share/virt-manager/virtManager/create.py", line 2552, in _do_async_install
      guest.start_install(meter=meter)
    File "/usr/share/virt-manager/virtinst/guest.py", line 495, in start_install
      doboot, transient)
    File "/usr/share/virt-manager/virtinst/guest.py", line 431, in _create_guest
      domain = self.conn.createXML(install_xml or final_xml, 0)
    File "/usr/lib64/python2.7/site-packages/libvirt.py", line 3725, in createXML
      if ret is None: raise libvirtError('virDomainCreateXML() failed', conn=self)
  libvirtError: unsupported configuration: USB redirection is not supported by this version of QEMU

hinting that the problem is probably not on nova's side. Finally, qemu itself seems able to spawn a VM with a virtio VGA device:

  [root@cs2 ~]# /usr/libexec/qemu-kvm -vga virtio
  qemu-kvm: Machine type 'pc-i440fx-rhel7.6.0' is deprecated: machine types for previous major releases are deprecated
  qemu-kvm: warning: CPU model qemu64-x86_64-cpu is deprecated -- use at least 'Nehalem' / 'Opteron_G4', or 'host' / 'max'
  VNC server running on ::1:5900
  ^Cqemu-kvm: terminating on signal 2
  [root@cs2 ~]#

So the issue seems to be on the libvirt side. I encountered the same issue on another install on Rocky 9, but I do not remember how I solved it. Since the problem has happened twice independently, I suppose you might have encountered it too. Are you aware of a solution?

** Affects: nova
   Importance: Undecided
   Status: New

** Attachment added: "nova.log"
   https://bugs.launchpad.net/bugs/1999803/+attachment/5635754/+files/nova.log

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1999803

Title:
  Libvirt fails to start VM with virtio related "unsupported configuration"

Status in OpenStack Compute (nova):
  New

Bug description:
  Libvirt fails to start any VM on a fresh OpenStack Zed install on Rocky 9, also freshly installed. The logs show the line below:

  [root@cs2 ~]# openstack server delete test; openstack server create --flavor "1CPU-512RAM-0Root-0Pad-0Swap-1RXTX" --image "cirros 0.4.0" --nic net-id=internal --security-group bf35c9bc-84c8-4f43-b1d2-a3fb04231245 --key-name nicolasm-gate test
  [root@cs2 ~]# tail -f /var/log/nova/nova.log
  libvirt.libvirtError: unsupported configuration: domain configuration does not support video model 'virtio'

  I attached the full log, which also includes the machine configuration nova generated to start the VM.

  I tried installing the packages "qemu-kvm-device-display-virtio-vga-7.0.0-13.el9.x86_64", "qemu-kvm-device-display-virtio-gpu-7.0.0-13.el9.x86_64", "qemu-kvm-device-display-virtio-gpu-gl-7.0.0-13.el9.x86_64" and "qemu-kvm-device-display-virtio-vga-gl-7.0.0-13.el9.x86_64"; nova/libvirt still fail with the same error, even after restarting the nova-compute and libvirtd services, or even the whole computer.

  Trying to start a VM with virtio video directly on the host via virt-manager yields the same error:

  Unable to complete install: 'unsupported configuration: USB redirection is not supported by this version of QEMU'
  Traceback (most recent call last):
    File "/usr/share/virt-manager/virtManager/asyncjob.py", line 89, in cb_wrapper
      callback(asyncjob, *args, **kwargs)
    File "/usr/share/virt-manager/virtManager/create.py", line 2552, in _do_async_install
      guest.start_install(meter=meter)
    File "/usr/share/virt-manager/virtinst/guest.py", line
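One way to confirm which video models the host's libvirt/QEMU combination actually advertises is to inspect "virsh domcapabilities" output. The sketch below parses the relevant <video> enum from a made-up sample fragment (not captured from the reporter's machine); an output list without 'virtio' would be consistent with the error above:

```python
# Parse the supported video models out of a libvirt domcapabilities XML
# document. The XML below is an invented sample of the relevant fragment,
# not output from the machine in this report.
import xml.etree.ElementTree as ET

sample = """
<domainCapabilities>
  <devices>
    <video supported='yes'>
      <enum name='modelType'>
        <value>vga</value>
        <value>cirrus</value>
        <value>qxl</value>
      </enum>
    </video>
  </devices>
</domainCapabilities>
"""

def supported_video_models(domcaps_xml):
    root = ET.fromstring(domcaps_xml)
    enum = root.find("./devices/video/enum[@name='modelType']")
    return [v.text for v in enum.findall("value")] if enum is not None else []

models = supported_video_models(sample)
print(models)              # ['vga', 'cirrus', 'qxl']
print("virtio" in models)  # False: matches the "does not support video model" error
```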
[Yahoo-eng-team] [Bug 1999800] [NEW] tempest.api.image.v2.test_images.ImageLocationsTest.test_set_location intermittently fails with The Store URI was malformed
Public bug reported:

Example run with failure: https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_67c/854119/12/gate/nova-ceph-multistore/67cb433/testr_results.html

Traceback (most recent call last):
  File "/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 844, in test_set_location
    self._check_set_location()
  File "/opt/stack/tempest/tempest/api/image/v2/test_images.py", line 820, in _check_set_location
    self.client.update_image(image['id'], [
  File "/opt/stack/tempest/tempest/lib/services/image/v2/images_client.py", line 40, in update_image
    resp, body = self.patch('images/%s' % image_id, data, headers)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 346, in patch
    return self.request('PATCH', url, extra_headers, headers, body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 720, in request
    self._error_checker(resp, resp_body)
  File "/opt/stack/tempest/tempest/lib/common/rest_client.py", line 831, in _error_checker
    raise exceptions.BadRequest(resp_body, resp=resp)
tempest.lib.exceptions.BadRequest: Bad request
Details: b'400 Bad Request\n\nThe Store URI was malformed.\n\n '

Dec 15 12:37:12.908234 ubuntu-focal-ovh-bhs1-0032504354 glance-api[109302]: DEBUG oslo_policy.policy [None req-10d1acb6-05d9-4b82-9f8b-15e91d78d7ff tempest-ImageLocationsTest-114614806 tempest-ImageLocationsTest-114614806-project] enforce: rule="context_is_admin" creds={"domain_id": null, "is_admin_project": true, "project_domain_id": "default", "project_id": "858e3d7134924eab94d5ed2b2be4e845", "roles": ["member", "reader"], "service_project_domain_id": null, "service_project_id": null, "service_roles": [], "service_user_domain_id": null, "service_user_id": null, "system_scope": null, "user_domain_id": "default", "user_id": "e5f710a5113b4259b676fe1d7a4d88c0"} target={"auth_token": "***", "domain": null, "global_request_id": null, "is_admin": false, "is_admin_project": true, "project":
"858e3d7134924eab94d5ed2b2be4e845", "project_domain": "default", "project_id": "858e3d7134924eab94d5ed2b2be4e845", "read_only": false, "request_id": "req-10d1acb6-05d9-4b82-9f8b-15e91d78d7ff", "resource_uuid": null, "roles": ["member", "reader"], "service_catalog": [{"endpoints": [{"publicURL": "https://158.69.74.2/volume/v3/858e3d7134924eab94d5ed2b2be4e845;, "region": "RegionOne"}], "name": "cinder", "type": "block-storage"}, {"endpoints": [{"publicURL": "https://158.69.74.2/image;, "region": "RegionOne"}], "name": "glance", "type": "image"}, {"endpoints": [{"publicURL": "https://158.69.74.2/volume/v3/858e3d7134924eab94d5ed2b2be4e845;, "region": "RegionOne"}], "name": "cinderv3", "type": "volumev3"}, {"endpoints": [{"publicURL": "https://158.69.74.2:9696/networking;, "region": "RegionOne"}], "name": "neutron", "type": "network"}, {"endpoints": [{"publicURL": "https://158.69.74.2/compute/v2/858e3d7134924eab94d5ed2b2be4e845;, "region": "RegionOne"}], "name": "nova_legacy", "type": "compute_legacy"}, {"endpoints": [{"publicURL": "https://158.69.74.2/identity;, "region": "RegionOne"}], "name": "keystone", "type": "identity"}, {"endpoints": [{"publicURL": "https://158.69.74.2/compute/v2.1;, "region": "RegionOne"}], "name": "nova", "type": "compute"}, {"endpoints": [{"publicURL": "https://158.69.74.2/placement;, "region": "RegionOne"}], "name": "placement", "type": "placement"}], "show_deleted": false, "system_scope": null, "user": "e5f710a5113b4259b676fe1d7a4d88c0", "user_domain": "default", "user_identity": "e5f710a5113b4259b676fe1d7a4d88c0 858e3d7134924eab94d5ed2b2be4e845 - default default"} {{(pid=109302) enforce /usr/local/lib/python3.8/dist- packages/oslo_policy/policy.py:1036}} Dec 15 12:37:12.937378 ubuntu-focal-ovh-bhs1-0032504354 glance- api[109302]: DEBUG oslo_policy.policy [None req-10d1acb6-05d9-4b82-9f8b-15e91d78d7ff tempest- ImageLocationsTest-114614806 tempest- ImageLocationsTest-114614806-project] enforce: rule="set_image_location" creds={"domain_id": 
null, "is_admin_project": true, "project_domain_id": "default", "project_id": "858e3d7134924eab94d5ed2b2be4e845", "roles": ["member", "reader"], "service_project_domain_id": null, "service_project_id": null, "service_roles": [], "service_user_domain_id": null, "service_user_id": null, "system_scope": null, "user_domain_id": "default", "user_id": "e5f710a5113b4259b676fe1d7a4d88c0"} target={"checksum": null, "container_format": "bare", "created_at": "2022-12-15T12:37:13.00", "disk_format": "raw", "extra_properties": {}, "image_id": "67fb8cb9-03be-4974-aeef-08e12bbe3001", "member": null, "min_disk": 0, "min_ram": 0, "name": null, "os_hash_algo": null, "os_hash_value": null, "os_hidden": false, "owner": "858e3d7134924eab94d5ed2b2be4e845", "project_id": "858e3d7134924eab94d5ed2b2be4e845", "protected": false, "size": null, "status": "queued", "tags": [], "updated_at": "2022-12-15T12:37:13.00", "virtual_size": null, "visibility": "shared"} {{(pid=109302) enforce
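For readers unfamiliar with the failing call: the Images v2 API takes location updates as a JSON-patch document, so the test is sending something shaped like the sketch below (the store URL here is a placeholder, not the URI from the failed run):

```python
# Shape of the JSON-patch body used to add an image location via the
# Images v2 PATCH API. The rbd-style URL is a placeholder for illustration.
import json

def add_location_patch(url, metadata=None):
    return [{"op": "add",
             "path": "/locations/-",
             "value": {"url": url, "metadata": metadata or {}}}]

body = json.dumps(add_location_patch("rbd://cluster-id/pool/image-id/snap"))
print(body)
```

A "Store URI was malformed" 400 means glance_store rejected the "url" value when parsing it, which is why an intermittently wrong or truncated URL in the test setup produces this failure.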
[Yahoo-eng-team] [Bug 1647491] Re: Missing documentation for glance-manage db_purge command
Reviewed: https://review.opendev.org/c/openstack/glance/+/427634
Committed: https://opendev.org/openstack/glance/commit/b17f770b5fee3d52cece8451b7adb3f944270404
Submitter: "Zuul (22348)"
Branch: master

commit b17f770b5fee3d52cece8451b7adb3f944270404
Author: Danny Al-Gaaf
Date: Wed Feb 1 11:48:30 2017 +0100

    Adds purge command to glancemanage man page

    Closes-Bug: #1647491
    Change-Id: I92ec228ebe9ac8eadb56dfe152535d3d6eedd62f
    Signed-off-by: Danny Al-Gaaf

** Changed in: glance
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to Glance.
https://bugs.launchpad.net/bugs/1647491

Title:
  Missing documentation for glance-manage db_purge command

Status in Glance:
  Fix Released

Bug description:
  glance-manage db purge is an advanced operator command for purging deleted records from the database [1]. Documentation for the purpose and usage of this command should be added here [2].

  [1] https://github.com/openstack/glance/blob/master/glance/cmd/manage.py#L146
  [2] http://docs.openstack.org/developer/glance/man/glancemanage.html

To manage notifications about this bug go to:
https://bugs.launchpad.net/glance/+bug/1647491/+subscriptions
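As a rough illustration of what the now-documented command does, here is a toy sqlite3 model of "purge soft-deleted rows older than a cutoff, bounded by a row count". This is a sketch of the concept only, not glance's actual implementation or schema:

```python
# Toy model of a db purge: hard-delete rows that were already soft-deleted
# (deleted=1) before a cutoff date, limited to max_rows per invocation.
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE images (id TEXT, deleted INTEGER, deleted_at TEXT)")
now = datetime(2022, 12, 16)
conn.executemany("INSERT INTO images VALUES (?, ?, ?)", [
    ("old-deleted", 1, (now - timedelta(days=60)).isoformat()),
    ("new-deleted", 1, (now - timedelta(days=5)).isoformat()),
    ("active", 0, None),
])

def purge(conn, age_in_days, max_rows, now):
    cutoff = (now - timedelta(days=age_in_days)).isoformat()
    cur = conn.execute(
        "DELETE FROM images WHERE rowid IN ("
        "  SELECT rowid FROM images"
        "  WHERE deleted = 1 AND deleted_at < ? LIMIT ?)",
        (cutoff, max_rows))
    return cur.rowcount

print(purge(conn, 30, 100, now))  # 1: only the 60-day-old soft-deleted row goes
print([r[0] for r in conn.execute("SELECT id FROM images")])
```

Active rows and recently soft-deleted rows survive; only records soft-deleted before the age cutoff are removed, up to max_rows at a time.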
[Yahoo-eng-team] [Bug 1999705] Re: neutron-ovs-grenade-dvr-multinode job failing in check queue
Reviewed: https://review.opendev.org/c/openstack/neutron/+/867764
Committed: https://opendev.org/openstack/neutron/commit/7c449f1833a94a544cf0b9cfcf63f3e7e46fae26
Submitter: "Zuul (22348)"
Branch: master

commit 7c449f1833a94a544cf0b9cfcf63f3e7e46fae26
Author: Sławek Kapłoński
Date: Thu Dec 15 09:08:15 2022 +0100

    Enable ML2/OVS backend in the -ovs- grenade jobs explicitly

    As with [1] all grenade jobs by default are using ML2/OVN backend now, we need to explicitly configure ML2/OVS backend to be used in all ovs related grenade jobs. This patch is doing exactly that.

    [1] https://review.opendev.org/c/openstack/grenade/+/862475

    Closes-Bug: #1999705
    Change-Id: I0770ab54064c49c86b7fefcc0e7eba35c6282ccc

** Changed in: neutron
   Status: In Progress => Fix Released

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999705

Title:
  neutron-ovs-grenade-dvr-multinode job failing in check queue

Status in neutron:
  Fix Released

Bug description:
  The neutron-ovs-grenade-dvr-multinode job started failing in the check queue sometime today. Typical trace coming from the compute1 node:

  [...]
  + inc/python:setup_package:445 : safe_chown -R stack /opt/stack/old/neutron/neutron.egg-info
  + functions-common:safe_chown:2340 : _safe_permission_operation chown -R stack /opt/stack/old/neutron/neutron.egg-info
  + functions-common:_safe_permission_operation:2188 : sudo chown -R stack /opt/stack/old/neutron/neutron.egg-info
  + inc/python:_setup_package_with_constraints_edit:390 : use_library_from_git /opt/stack/old/neutron
  + inc/python:use_library_from_git:246 : local name=/opt/stack/old/neutron
  + inc/python:use_library_from_git:247 : local enabled=1
  + inc/python:use_library_from_git:248 : [[ cinder,devstack,glance,grenade,keystone,neutron,nova,placement,requirements,swift,tempest = \A\L\L ]]
  + inc/python:use_library_from_git:248 : [[ ,cinder,devstack,glance,grenade,keystone,neutron,nova,placement,requirements,swift,tempest, =~ ,/opt/stack/old/neutron, ]]
  + inc/python:use_library_from_git:249 : return 1
  + lib/neutron-legacy:install_mutnauq:480 : [[ ovn == \o\v\n ]]
  + lib/neutron-legacy:install_mutnauq:481 : install_ovn
  + lib/neutron_plugins/ovn_agent:install_ovn:363 : echo 'Installing OVN and dependent packages'
  Installing OVN and dependent packages
  + lib/neutron_plugins/ovn_agent:install_ovn:366 : ovn_sanity_check
  + lib/neutron_plugins/ovn_agent:ovn_sanity_check:350 : is_service_enabled q-agt neutron-agt
  + functions-common:is_service_enabled:2095 : return 0
  + lib/neutron_plugins/ovn_agent:ovn_sanity_check:351 : die 351 'The q-agt/neutron-agt service must be disabled with OVN.'
  + functions-common:die:264 : local exitcode=0
  [Call Trace]
  ./stack.sh:926:stack_install_service
  /opt/stack/old/devstack/lib/stack:32:install_neutron
  /opt/stack/old/devstack/lib/neutron:704:install_mutnauq
  /opt/stack/old/devstack/lib/neutron-legacy:481:install_ovn
  /opt/stack/old/devstack/lib/neutron_plugins/ovn_agent:366:ovn_sanity_check
  /opt/stack/old/devstack/lib/neutron_plugins/ovn_agent:351:die
  [ERROR] /opt/stack/old/devstack/lib/neutron_plugins/ovn_agent:351 The q-agt/neutron-agt service must be disabled with OVN.
  Error on exit

  Here's one example: https://zuul.opendev.org/t/openstack/build/a511fc140ad444bb8df77c670815663f

  Looking through open changes, this is the oldest one that seems to fail: https://review.opendev.org/c/openstack/neutron/+/866635 (started at 2022-12-14 06:37:25)

  I am guessing this is the culprit: https://review.opendev.org/c/openstack/grenade/+/862475

  I'll create a revert of that since it's EOD here.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999705/+subscriptions
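The merged fix changes the Zuul job definitions; for a local devstack reproduction, forcing the ML2/OVS backend looks roughly like the local.conf fragment below. This is a hedged sketch using common devstack service names and variables, not the actual job change:

```ini
[[local|localrc]]
Q_AGENT=openvswitch
Q_ML2_PLUGIN_MECHANISM_DRIVERS=openvswitch
Q_ML2_TENANT_NETWORK_TYPE=vxlan
enable_service q-agt q-dhcp q-l3 q-meta
disable_service ovn-controller ovn-northd q-ovn-metadata-agent
```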
[Yahoo-eng-team] [Bug 1988803] Re: With image-volume cache enabled backend, volume creation for a smaller volume than image-volume cache has the same size of image-volume cache
** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1988803

Title:
  With image-volume cache enabled backend, volume creation for a smaller volume than image-volume cache has the same size of image-volume cache

Status in Cinder:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  The issue was initially reported at https://bugzilla.redhat.com/show_bug.cgi?id=2100697.

  In the current implementation, cinder doesn't have a check on the volume size when the image-volume cache is enabled. Because of that, a created volume whose requested size is smaller than the image cache volume ends up with the same size as the image cache volume.

  Reproducing steps:
  1. Enable the volume image cache feature in cinder.
  2. Upload a cirros image to glance.
  3. Create a volume, Volume A (e.g. 5GB), as the first volume from the image.
  4. Then create a smaller volume, Volume B (e.g. 1GB), from the image.
  5. Check the actual volume image size on the storage backend side.

  Current behavior: Volume A and Volume B have the same size (e.g. 5GB). However, cinder shows Volume A as 5GB and Volume B as 1GB.

  Expected behavior: Volume A and Volume B have different sizes based on their requested sizes (Volume A is 5GB, Volume B is 1GB).

To manage notifications about this bug go to:
https://bugs.launchpad.net/cinder/+bug/1988803/+subscriptions
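A standalone sketch of the behavior being reported (a toy model, not cinder code): the clone inherits the cache entry's size and is only ever extended for a larger request, never checked against a smaller one:

```python
# Toy model of cloning a volume from an image-volume cache entry.
def size_after_clone_from_cache(requested_gb, cache_entry_gb):
    """Return the size the user actually gets under the buggy behavior."""
    # The clone inherits the cache entry's size...
    actual = cache_entry_gb
    # ...and is extended only when the request is *larger* than the entry.
    if requested_gb > cache_entry_gb:
        actual = requested_gb
    return actual

print(size_after_clone_from_cache(1, 5))  # 5: user asked for 1GB, got 5GB
print(size_after_clone_from_cache(8, 5))  # 8: the extend path works fine
```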
[Yahoo-eng-team] [Bug 1999781] [NEW] How is the cloud_init_modules stage executed?
Public bug reported:

Sorry, I've been looking at the code and documentation of cloud-init recently and have a question: how is the cloud_init_modules stage executed? Which command does it correspond to: "cloud-init modules --mode init" or "cloud-init init"?

Also, are there any modules that must be defined in a certain stage (cloud_init_modules, cloud_config_modules, or cloud_final_modules)? E.g. the "ca-certs" module in the cloud_init_modules stage: https://github.com/canonical/cloud-init/blob/main/config/cloud.cfg.tmpl#L107 Can we put it in cloud_config_modules or even cloud_final_modules? If so, what's the impact? If not, why? I did not find instructions for this in the documentation.

** Affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1999781

Title:
  How is the cloud_init_modules stage executed?

Status in cloud-init:
  New

Bug description:
  Sorry, I've been looking at the code and documentation of cloud-init recently and have a question: how is the cloud_init_modules stage executed? Which command does it correspond to: "cloud-init modules --mode init" or "cloud-init init"?

  Also, are there any modules that must be defined in a certain stage (cloud_init_modules, cloud_config_modules, or cloud_final_modules)? E.g. the "ca-certs" module in the cloud_init_modules stage: https://github.com/canonical/cloud-init/blob/main/config/cloud.cfg.tmpl#L107 Can we put it in cloud_config_modules or even cloud_final_modules? If so, what's the impact? If not, why? I did not find instructions for this in the documentation.

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1999781/+subscriptions
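For reference, on systemd distributions the boot stages map onto four units; the ExecStart lines below are what a stock cloud-init install typically uses, though paths and exact flags can vary by version and distro, so treat this as a sketch rather than authoritative. The network stage ("cloud-init init") is the one that runs cloud_init_modules:

```ini
# cloud-init-local.service  (local stage)
ExecStart=/usr/bin/cloud-init init --local
# cloud-init.service        (network stage; runs cloud_init_modules)
ExecStart=/usr/bin/cloud-init init
# cloud-config.service      (runs cloud_config_modules)
ExecStart=/usr/bin/cloud-init modules --mode=config
# cloud-final.service       (runs cloud_final_modules)
ExecStart=/usr/bin/cloud-init modules --mode=final
```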
[Yahoo-eng-team] [Bug 1999774] [NEW] SDK: Neutron stadiums use python bindings from python-neutronclient which will be deprecated
Public bug reported:

As we discussed during the Antelope PTG (see [1]), the python binding code in python-neutronclient will be deprecated and the bindings from OpenstackSDK could be used instead. Neutronclient has printed a warning about the future deprecation since [2].

This bug is to track the efforts in Neutron stadium projects to change their code to use the bindings from openstacksdk.

[1]: https://etherpad.opendev.org/p/neutron-antelope-ptg#L163
[2]: https://review.opendev.org/c/openstack/python-neutronclient/+/862371?forceReload=true

** Affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1999774

Title:
  SDK: Neutron stadiums use python bindings from python-neutronclient which will be deprecated

Status in neutron:
  New

Bug description:
  As we discussed during the Antelope PTG (see [1]), the python binding code in python-neutronclient will be deprecated and the bindings from OpenstackSDK could be used instead. Neutronclient has printed a warning about the future deprecation since [2].

  This bug is to track the efforts in Neutron stadium projects to change their code to use the bindings from openstacksdk.

  [1]: https://etherpad.opendev.org/p/neutron-antelope-ptg#L163
  [2]: https://review.opendev.org/c/openstack/python-neutronclient/+/862371?forceReload=true

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1999774/+subscriptions
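As an illustration of what such a migration looks like in practice, a hedged before/after sketch: the neutronclient method names are the familiar ones, and the SDK side uses openstacksdk's network proxy (e.g. conn.network.networks()). The calls are shown as comments so nothing here requires either library to be installed:

```python
# Hedged migration sketch: python-neutronclient bindings vs openstacksdk.
#
#   # before (python-neutronclient bindings, to be deprecated)
#   from neutronclient.v2_0 import client
#   neutron = client.Client(session=sess)
#   nets = neutron.list_networks()["networks"]
#
#   # after (openstacksdk)
#   import openstack
#   conn = openstack.connect(cloud="envvars")
#   nets = list(conn.network.networks())

# Rough call-for-call mapping for a few common network operations:
NEUTRONCLIENT_TO_SDK = {
    "list_networks": "conn.network.networks()",
    "create_network": "conn.network.create_network(...)",
    "show_network": "conn.network.get_network(id)",
    "delete_network": "conn.network.delete_network(id)",
}
for old, new in sorted(NEUTRONCLIENT_TO_SDK.items()):
    print(f"{old:16s} -> {new}")
```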
[Yahoo-eng-team] [Bug 1999762] [NEW] Login failure after installing cloud-init and reboot on an ARM machine
Public bug reported:

As mentioned in the title, I installed cloud-init on an ARM machine with a newly installed OS, then rebooted. After that, the account can no longer log in (I used the root account). I also tested on an x86 machine, where the results were normal.

This problem seems to be related to the following commit: https://github.com/canonical/cloud-init/commit/7f85a3a5b4586ac7f21309aac4edc39e6ffea9ef

** Affects: cloud-init
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo! Engineering Team, which is subscribed to cloud-init.
https://bugs.launchpad.net/bugs/1999762

Title:
  Login failure after installing cloud-init and reboot on an ARM machine

Status in cloud-init:
  New

Bug description:
  As mentioned in the title, I installed cloud-init on an ARM machine with a newly installed OS, then rebooted. After that, the account can no longer log in (I used the root account). I also tested on an x86 machine, where the results were normal.

  This problem seems to be related to the following commit: https://github.com/canonical/cloud-init/commit/7f85a3a5b4586ac7f21309aac4edc39e6ffea9ef

To manage notifications about this bug go to:
https://bugs.launchpad.net/cloud-init/+bug/1999762/+subscriptions