[Yahoo-eng-team] [Bug 1855080] Re: Credentials API allows listing and retrieving of all users credentials
OSSA Report: https://review.opendev.org/#/c/698045/

** Changed in: ossa
   Status: Confirmed => Fix Released

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Identity (keystone).
https://bugs.launchpad.net/bugs/1855080

Title:
  Credentials API allows listing and retrieving of all users credentials

Status in OpenStack Identity (keystone):
  Fix Released
Status in OpenStack Security Advisory:
  Fix Released
Status in keystone package in Ubuntu:
  New

Bug description:
  Tested against Stein and Train.

  # User creating a credential, i.e. TOTP or similar
  $ OS_CLOUD=1 openstack token issue
  | project_id | c3caf1b55bb84b78a795fd81838e5160 |
  | user_id    | 9971b0f13d2d4a578212d028a53c3209 |

  $ OS_CLOUD=1 openstack credential create --type test 9971b0f13d2d4a578212d028a53c3209 test-data
  $ OS_CLOUD=1 openstack credential list
  +----------------------------------+------+----------------------------------+-----------+------------+
  | ID                               | Type | User ID                          | Data      | Project ID |
  +----------------------------------+------+----------------------------------+-----------+------------+
  | 0a3a2d3b7dad4886b0bbf61b6cd7d2b0 | test | 9971b0f13d2d4a578212d028a53c3209 | test-data | None       |
  +----------------------------------+------+----------------------------------+-----------+------------+

  # Different user but same project
  $ OS_CLOUD=2 openstack token issue
  | project_id | c3caf1b55bb84b78a795fd81838e5160 |
  | user_id    | 6b28a0b073fc4ac7843f33190ebc5c3c |

  $ OS_CLOUD=2 openstack credential list
  +----------------------------------+------+----------------------------------+-----------+------------+
  | ID                               | Type | User ID                          | Data      | Project ID |
  +----------------------------------+------+----------------------------------+-----------+------------+
  | 0a3a2d3b7dad4886b0bbf61b6cd7d2b0 | test | 9971b0f13d2d4a578212d028a53c3209 | test-data | None       |
  +----------------------------------+------+----------------------------------+-----------+------------+

  # Different user and different project
  $ OS_CLOUD=3 openstack token issue
  | project_id | d43f20ae5a7e4f36b701710277384401 |
  | user_id    | 2e48f1a7d1474391a826a2b9700e5949 |

  $ OS_CLOUD=3 openstack credential list
  +----------------------------------+------+----------------------------------+-----------+------------+
  | ID                               | Type | User ID                          | Data      | Project ID |
  +----------------------------------+------+----------------------------------+-----------+------------+
  | 0a3a2d3b7dad4886b0bbf61b6cd7d2b0 | test | 9971b0f13d2d4a578212d028a53c3209 | test-data | None       |
  +----------------------------------+------+----------------------------------+-----------+------------+

  As shown, anyone who is authenticated can retrieve any credentials,
  including their 'secret'.
  This is a rather severe information disclosure vulnerability and
  completely defeats the purpose of TOTP or MFA, as these credentials
  are not kept secure or private whatsoever. If auth rules are
  configured to allow login with only 'totp', it would be extremely easy
  to assume a different user's identity.

  A CVE should be issued for this. I can take care of that paperwork.

  Versions affected and tested: Train/Ubuntu:

  $ dpkg -l | grep keystone
  ii  keystone                    2:16.0.0-0ubuntu1~cloud0  all  OpenStack identity service - Daemons
  ii  keystone-common             2:16.0.0-0ubuntu1~cloud0  all  OpenStack identity service - Common files
  ii  python-keystoneauth1        3.13.1-0ubuntu1~cloud0    all  authentication library for OpenStack Identity - Python 2.7
  ii  python-keystoneclient       1:3.19.0-0ubuntu1~cloud0  all  client library for the OpenStack Keystone API - Python 2.x
  ii  python-keystonemiddleware   6.0.0-0ubuntu1~cloud0     all  Middleware for OpenStack Identity (Keystone) - Python 2.x
  ii  python3-keystone            2:16.0.0-0ubuntu1~cloud0  all  OpenStack identity service - Python 3 library
  ii  python3-keystoneauth1       3.17.1-0ubuntu1~cloud0    all  authentication library for OpenStack Identity - Python 3.x
  ii  python3-keystoneclient      1:3.21.0-0ubuntu1~cloud0  all  client library for the OpenStack Keystone API - Python 3.x
  ii  python3-keystonemiddleware  7.0.1-0ubuntu1~cloud0     all
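  The fixed behavior can be illustrated with a minimal sketch (this is
  not keystone's actual code; the function and field names are
  hypothetical): listing credentials must be scoped to the calling user
  unless the caller holds an admin role, instead of returning every
  user's credential blob as shown in the transcript above.

  ```python
  # Illustrative sketch of the intended scoping, not keystone's code.
  def list_credentials(all_credentials, caller_user_id, is_admin=False):
      """Return only the credentials visible to the caller.

      Before the fix, any authenticated user received the full list;
      the fix restricts non-admin callers to their own credentials.
      """
      if is_admin:
          return list(all_credentials)
      return [c for c in all_credentials if c["user_id"] == caller_user_id]


  creds = [
      {"id": "0a3a2d3b7dad4886b0bbf61b6cd7d2b0",
       "user_id": "9971b0f13d2d4a578212d028a53c3209",
       "type": "test", "blob": "test-data"},
  ]

  # The second user from the report must no longer see the first
  # user's credential or its secret blob.
  visible = list_credentials(creds, "6b28a0b073fc4ac7843f33190ebc5c3c")
  ```

  With this scoping, the OS_CLOUD=2 and OS_CLOUD=3 listings in the
  reproduction above would come back empty rather than exposing the
  first user's 'test-data' secret.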
[Yahoo-eng-team] [Bug 1858091] Re: Nova compute api v2.1/servers in train
Well, the API is versioned to keep compatibility constraints. This is
not a Kolla-specific issue; notifying Nova.

** Also affects: nova
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1858091

Title:
  Nova compute api v2.1/servers in train

Status in kolla-ansible:
  New
Status in OpenStack Compute (nova):
  New

Bug description:
  **Environment**:
  * OS (e.g. from /etc/os-release): Ubuntu
  * Kernel (e.g. `uname -a`): Linux host 4.15.0-55-generic #60-Ubuntu SMP Tue Jul 2 18:22:20 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  * Docker version if applicable (e.g. `docker version`): 19.03.2
  * Kolla-Ansible version (e.g. `git head or tag or stable branch` or pip package version if using release): 9.0.0
  * Docker image Install type (source/binary): source
  * Docker image distribution: train
  * Are you using official images from Docker Hub or self built? official
  * If self built - Kolla version and environment used to build:
  * Share your inventory file, globals.yml and other configuration files if relevant

  I have recently updated kolla-ansible (to 9.0.0) and the OpenStack
  images (to train). I was using the Rancher node driver to provision
  OpenStack instances and use them to deploy a k8s cluster. With Stein
  everything was working smoothly. However, after I updated to Train,
  Rancher started getting 400-403 error codes:

  ```
  Error creating machine: Error in driver during machine creation: Expected HTTP response code [200] when accessing [POST http://10.0.225.254:8774/v2.1/os-keypairs], but got 403 instead

  or

  Error creating machine: Error in driver during machine creation: Expected HTTP response code [200] when accessing [POST http://10.0.225.254:8774/v2.1/servers], but got 400 instead
  ```

  So I am wondering whether anything changed in the Nova compute APIs in
  the Train version, and what can be done to fix this issue?
  I have reported this bug on the Rancher GitHub as well:
  https://github.com/rancher/rancher/issues/24813, because I am not sure
  whether it is fully an OpenStack-version-related issue.

  Regards

To manage notifications about this bug go to:
https://bugs.launchpad.net/kolla-ansible/+bug/1858091/+subscriptions

-- 
Mailing list: https://launchpad.net/~yahoo-eng-team
Post to     : yahoo-eng-team@lists.launchpad.net
Unsubscribe : https://launchpad.net/~yahoo-eng-team
More help   : https://help.launchpad.net/ListHelp
[Yahoo-eng-team] [Bug 1858102] [NEW] multiqueue vhostuser interface should not have kernel limits applied
Public bug reported:

When multiqueue support was introduced for vhost-user, it reused the
same code to calculate the queue count as kernel vhost. In
https://bugs.launchpad.net/nova/+bug/1570631 and commit
https://review.opendev.org/#/c/332660/, a bug was fixed by making the
assumption that the kernel version should also dictate the maximum
number of queues on the tap interface when setting
hw:vif_multiqueue_enabled=True. That change did not take into account
that this was shared code, and it incorrectly applied the hard-coded
kernel limits to vhost-user interfaces, not just tap devices.

** Affects: nova
   Importance: Low
   Assignee: sean mooney (sean-k-mooney)
   Status: Triaged

** Affects: nova/queens
   Importance: Low
   Status: Triaged

** Affects: nova/rocky
   Importance: Low
   Status: Triaged

** Affects: nova/stein
   Importance: Low
   Status: Triaged

** Affects: nova/train
   Importance: Low
   Status: Triaged

** Affects: nova/ussuri
   Importance: Low
   Assignee: sean mooney (sean-k-mooney)
   Status: Triaged

** Tags: dpdk libvirt

** Also affects: nova/train
   Importance: Undecided
   Status: New

** Also affects: nova/queens
   Importance: Undecided
   Status: New

** Also affects: nova/ussuri
   Importance: Low
   Assignee: sean mooney (sean-k-mooney)
   Status: Triaged

** Also affects: nova/rocky
   Importance: Undecided
   Status: New

** Also affects: nova/stein
   Importance: Undecided
   Status: New

** Changed in: nova/stein
   Importance: Undecided => Low

** Changed in: nova/stein
   Status: New => Triaged

** Changed in: nova/rocky
   Status: New => Triaged

** Changed in: nova/rocky
   Importance: Undecided => Low

** Changed in: nova/queens
   Status: New => Triaged

** Changed in: nova/queens
   Importance: Undecided => Low

** Changed in: nova/train
   Status: New => Triaged

** Changed in: nova/train
   Importance: Undecided => Low

** Bug watch added: Red Hat Bugzilla #1714075
   https://bugzilla.redhat.com/show_bug.cgi?id=1714075

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1858102

Title:
  multiqueue vhostuser interface should not have kernel limits applied

Status in OpenStack Compute (nova):
  Triaged
Status in OpenStack Compute (nova) queens series:
  Triaged
Status in OpenStack Compute (nova) rocky series:
  Triaged
Status in OpenStack Compute (nova) stein series:
  Triaged
Status in OpenStack Compute (nova) train series:
  Triaged
Status in OpenStack Compute (nova) ussuri series:
  Triaged

Bug description:
  When multiqueue support was introduced for vhost-user, it reused the
  same code to calculate the queue count as kernel vhost. In
  https://bugs.launchpad.net/nova/+bug/1570631 and commit
  https://review.opendev.org/#/c/332660/, a bug was fixed by making the
  assumption that the kernel version should also dictate the maximum
  number of queues on the tap interface when setting
  hw:vif_multiqueue_enabled=True. That change did not take into account
  that this was shared code, and it incorrectly applied the hard-coded
  kernel limits to vhost-user interfaces, not just tap devices.

To manage notifications about this bug go to:
https://bugs.launchpad.net/nova/+bug/1858102/+subscriptions
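The shape of the shared-code bug can be sketched as follows (a hedged
illustration only: the function name, signature, and the specific limit
values are hypothetical, not nova's actual identifiers). A tap device is
serviced by the kernel's tap driver, whose per-device queue limit
depends on the kernel version, whereas a vhost-user port is serviced
entirely in userspace (e.g. by a DPDK application), so no kernel-derived
cap should apply to it:

```python
# Illustrative sketch of the queue-count capping, not nova's code.
def max_queues(vif_type, kernel_version, requested_queues):
    """Cap the multiqueue count for a guest interface.

    The bug described above was applying the tap-driver cap
    unconditionally; the fix is to skip it for vhost-user.
    """
    if vif_type != "tap":
        # vhost-user: handled in userspace, no kernel-imposed limit.
        return requested_queues
    # Hypothetical tap limits by kernel version (real values come from
    # the kernel's tap driver, not from these constants).
    kernel_limit = 8 if kernel_version < (4, 0) else 256
    return min(requested_queues, kernel_limit)
```

Under this sketch, a 16-queue vhost-user port on a 3.x kernel keeps all
16 queues, while a tap device on the same kernel is capped, which is the
distinction the shared code failed to make.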
[Yahoo-eng-team] [Bug 1858086] [NEW] qrouter's local link route cannot be restored
Public bug reported:

When a virtual router connects a subnet of 192.168.1.0/24, the qrouter
namespace generates a local link route like '192.168.1.0/24 dev
qr-0f8e7575-3b proto kernel scope link src 192.168.1.1 metric 100'. If
we add a self-defined route '192.168.1.0/24, nexthop 192.168.1.100' to
the qrouter from the dashboard, this route will replace the prior local
link route. If we then remove this self-defined route from the
dashboard, it disappears as expected, but the local link route is not
restored, in which case all the VMs in 192.168.1.0/24 are no longer
reachable.

** Affects: neutron
   Importance: Undecided
   Status: New

-- 
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to neutron.
https://bugs.launchpad.net/bugs/1858086

Title:
  qrouter's local link route cannot be restored

Status in neutron:
  New

Bug description:
  When a virtual router connects a subnet of 192.168.1.0/24, the qrouter
  namespace generates a local link route like '192.168.1.0/24 dev
  qr-0f8e7575-3b proto kernel scope link src 192.168.1.1 metric 100'. If
  we add a self-defined route '192.168.1.0/24, nexthop 192.168.1.100' to
  the qrouter from the dashboard, this route will replace the prior
  local link route. If we then remove this self-defined route from the
  dashboard, it disappears as expected, but the local link route is not
  restored, in which case all the VMs in 192.168.1.0/24 are no longer
  reachable.

To manage notifications about this bug go to:
https://bugs.launchpad.net/neutron/+bug/1858086/+subscriptions
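The failure mode can be modeled with a small sketch (illustrative only,
not neutron code; the RouteTable class is hypothetical). A route-replace
on the same destination CIDR overwrites the kernel-generated connected
route, and a later delete simply removes the entry, with nothing putting
the connected route back:

```python
# Minimal model of a routing table keyed by destination CIDR,
# illustrating why deleting the user route leaves the subnet routeless.
class RouteTable:
    def __init__(self):
        self.routes = {}  # cidr -> route description

    def add_connected(self, cidr, dev, src):
        # Route the kernel creates when the qr- interface gets its IP.
        self.routes[cidr] = {"dev": dev, "src": src, "proto": "kernel"}

    def replace(self, cidr, nexthop):
        # 'ip route replace' semantics: overwrites the existing entry.
        self.routes[cidr] = {"via": nexthop, "proto": "static"}

    def delete(self, cidr):
        # Deleting just removes the entry; nothing re-creates the
        # connected route, so 192.168.1.0/24 becomes unreachable.
        self.routes.pop(cidr, None)


table = RouteTable()
table.add_connected("192.168.1.0/24", "qr-0f8e7575-3b", "192.168.1.1")
table.replace("192.168.1.0/24", "192.168.1.100")  # user-defined route added
table.delete("192.168.1.0/24")                    # route removed via dashboard
```

After this sequence the table holds no route at all for 192.168.1.0/24,
which matches the reported symptom: the agent would need to re-add the
connected route (or avoid replacing it in the first place) when the
self-defined route is removed.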