[Yahoo-eng-team] [Bug 1828937] Re: Getting allocation candidates is slow with "placement microversion < 1.29" from rocky release

2019-05-14 Thread Tetsuro Nakamura
** Also affects: nova/rocky Importance: Undecided Status: New ** No longer affects: nova ** Description changed: Description === - In rocky cycle, 'GET /allocation_candidates' started to be aware of nested providers from microversion 1.29. + In rocky cycle, 'GET

[Yahoo-eng-team] [Bug 1828937] [NEW] Getting allocation candidates is slow with "placement microversion < 1.29" from rocky release

2019-05-14 Thread Tetsuro Nakamura
Public bug reported: Description === In rocky cycle, 'GET /allocation_candidates' started to be aware of nested providers from microversion 1.29. From microversion 1.29, it can join allocations from resource providers in the same tree. To keep the behavior of microversion before
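Since the entry concerns how the 1.29 boundary changes candidate generation, here is a minimal sketch exercising both sides of it; the endpoint URL and token are placeholders, not values from the report:

```python
# Hedged sketch: compare allocation candidates on either side of
# placement microversion 1.29. Endpoint and token are placeholders.
import requests

PLACEMENT = "http://placement.example.com"  # placeholder endpoint
TOKEN = "..."                               # placeholder keystone token

def get_candidates(microversion):
    resp = requests.get(
        f"{PLACEMENT}/allocation_candidates",
        headers={"X-Auth-Token": TOKEN,
                 "OpenStack-API-Version": f"placement {microversion}"},
        params={"resources": "VCPU:1,MEMORY_MB:512"},
    )
    resp.raise_for_status()
    return resp.json()["allocation_requests"]

# <= 1.28 keeps the pre-nested behavior; >= 1.29 may combine resources
# from providers in the same tree. The report is about the former path
# being slow from rocky on.
legacy = get_candidates("1.28")
nested_aware = get_candidates("1.29")
```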

[Yahoo-eng-team] [Bug 1817458] [NEW] duplicate allocation candidates with granular request

2019-02-24 Thread Tetsuro Nakamura
Public bug reported: Description === When we request a shared resource granularly, we can get duplicate allocation candidates for the same resource provider. How to reproduce 1. Set up 1-1. Set up two compute nodes (cn1, cn2 with VCPU resources) 1-2. Set up one
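For context, a granular request (numbered parameter suffixes, placement microversion >= 1.25) looks roughly like this; the endpoint, token, and amounts are placeholders rather than the reporter's values:

```python
# Hedged sketch of a granular allocation-candidates request.
import requests

resp = requests.get(
    "http://placement.example.com/allocation_candidates",  # placeholder
    headers={"X-Auth-Token": "...",  # placeholder token
             "OpenStack-API-Version": "placement 1.25"},
    params={
        "resources": "VCPU:1",       # unnumbered group (compute node)
        "resources1": "DISK_GB:10",  # numbered group, may be satisfied by a sharing provider
    },
)
# The reported symptom: the same provider combination appears more than
# once in the returned allocation_requests.
candidates = resp.json()["allocation_requests"]
```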

[Yahoo-eng-team] [Bug 1812829] [NEW] `placement-status upgrade check` fails

2019-01-22 Thread Tetsuro Nakamura
oslo_db/sqlalchemy/enginefacade.py", line 532, in _setup_for_connection "No sql_connection parameter is established") oslo_db.exception.CantStartEngineError: No sql_connection parameter is established ** Affects: nova Importance: High Assignee: Tetsuro Nakamura (tetsuro0
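The traceback indicates the database connection option was never loaded. A minimal configuration sketch, assuming the standard `[placement_database]`/`connection` option (the URL below is a placeholder):

```ini
# placement.conf (sketch); without a connection URL here, enginefacade
# raises CantStartEngineError and `placement-status upgrade check` fails.
[placement_database]
connection = mysql+pymysql://placement:secret@controller/placement
```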

[Yahoo-eng-team] [Bug 1803925] Re: There is no interface for operators to migrate *all* the existing compute resource providers to be ready for nested providers

2019-01-21 Thread Tetsuro Nakamura
Abandoned https://review.openstack.org/#/c/619126/ in favor of https://review.openstack.org/#/c/624943/, which is now committed. ** Changed in: nova Status: New => Won't Fix ** Changed in: nova Status: Won't Fix => Confirmed ** Changed in: nova Status: Confirmed => In

[Yahoo-eng-team] [Bug 1803925] [NEW] There is no interface for operators to migrate *all* the existing compute resource providers to be ready for nested providers

2018-11-18 Thread Tetsuro Nakamura
Public bug reported: When the nested resource provider feature was added in Rocky, the root_provider_uuid column, which should hold a non-None value, was added to the resource provider DB. For existing resource providers created before Queens, we have an online data migration:
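Conceptually the migration just points each pre-existing root at itself. A hedged SQLAlchemy sketch of that idea (table and column names follow my reading of the nova schema, which uses an integer `root_provider_id`; this is illustrative, not the actual migration code):

```python
# Illustrative batched backfill: providers created before the
# nested-provider schema have a NULL root_provider_id, and each root
# should point at itself.
import sqlalchemy as sa

rps = sa.table("resource_providers",
               sa.column("id"), sa.column("root_provider_id"))

def backfill_root_provider_ids(engine, batch_size=50):
    with engine.begin() as conn:
        ids = conn.execute(
            sa.select(rps.c.id)
            .where(rps.c.root_provider_id.is_(None))
            .limit(batch_size)
        ).scalars().all()
        if ids:
            conn.execute(
                rps.update()
                .where(rps.c.id.in_(ids))
                .values(root_provider_id=rps.c.id)
            )
    return len(ids)  # 0 means the backfill is complete
```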

[Yahoo-eng-team] [Bug 1792503] [NEW] allocation candidates "?member_of=" doesn't work with nested providers

2018-09-13 Thread Tetsuro Nakamura
Public bug reported: "GET /allocation_candidates" now supports "member_of" parameter. With nested providers present, this should work with the following constraints. - (a) With "member_of" qparam, aggregates on the root should span on the whole tree

[Yahoo-eng-team] [Bug 1785382] [NEW] GET /resource_providers/{uuid}/allocations doesn't get all the allocations

2018-08-04 Thread Tetsuro Nakamura
Public bug reported: Description === GET /resource_providers/{uuid}/allocations doesn't get all the allocations Reproduce = 1. Set up 1 resource provider with some inventories 2. A user (userA) in a project (projectX) makes 1 consumer (Consumer1) allocate on the rp 3. The same
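The check is straightforward to sketch; the provider UUID, endpoint, and token are placeholders:

```python
# Hedged sketch: list every allocation against one provider. The
# response maps consumer UUIDs to their resources, and per the report
# some consumers (from other users/projects) are missing from it.
import requests

RP_UUID = "11111111-1111-1111-1111-111111111111"  # placeholder provider
resp = requests.get(
    f"http://placement.example.com/resource_providers/{RP_UUID}/allocations",
    headers={"X-Auth-Token": "..."},  # placeholder token
)
# Expected shape: {"allocations": {<consumer_uuid>: {"resources": {...}}, ...},
#                  "resource_provider_generation": <int>}
print(resp.json()["allocations"])
```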

[Yahoo-eng-team] [Bug 1779818] [NEW] child's root provider is not updated.

2018-07-03 Thread Tetsuro Nakamura
Public bug reported: Description === You can update a resource provider (old root RP)'s parent RP from None to a specific existing RP (original root RP). But if the resource provider (old root RP) has a child RP, the child RP's root RP is not updated automatically to the new root RP.
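A sketch of the re-parenting update described above (UUIDs are placeholders; `parent_provider_uuid` is writable from microversion 1.14):

```python
# Hedged sketch: move the old root under a new parent. Per the report,
# the old root's own root_provider_uuid is updated, but its child's is not.
import requests

OLD_ROOT = "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa"  # placeholder old root
NEW_ROOT = "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb"  # placeholder new parent
resp = requests.put(
    f"http://placement.example.com/resource_providers/{OLD_ROOT}",
    headers={"X-Auth-Token": "...",  # placeholder token
             "OpenStack-API-Version": "placement 1.14"},
    json={"name": "old-root", "parent_provider_uuid": NEW_ROOT},
)
```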

[Yahoo-eng-team] [Bug 1769853] Re: Local disk without enough capacity appears in allocation candidates

2018-06-11 Thread Tetsuro Nakamura
This was novel when I reported it, but the bug has since been fixed during the granular-candidates work. Linked the related patches manually just now. Unit test submitted in https://review.openstack.org/#/c/566842/ Fixed in https://review.openstack.org/#/c/517757/ ** Changed in: nova Status:

[Yahoo-eng-team] [Bug 1771707] [NEW] allocation candidates with nested providers have inappropriate candidates when traits specified

2018-05-16 Thread Tetsuro Nakamura
Public bug reported: * We are setting up two compute nodes with NUMA node & PF nested providers.   Only one PF on cn1 has the HW_NIC_OFFLOAD_GENEVE trait.    compute node (cn1) [CPU:16, MEMORY_MB:32768]  /+++\
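The query that exposes this can be sketched as follows (endpoint, token, and amounts are placeholders; `required` is available from microversion 1.17):

```python
# Hedged sketch: request a VF plus the GENEVE offload trait. Only the
# tree whose PF actually carries HW_NIC_OFFLOAD_GENEVE should yield
# candidates; the report is about extra, inappropriate ones appearing.
import requests

resp = requests.get(
    "http://placement.example.com/allocation_candidates",  # placeholder
    headers={"X-Auth-Token": "...",  # placeholder token
             "OpenStack-API-Version": "placement 1.17"},
    params={"resources": "VCPU:1,SRIOV_NET_VF:1",
            "required": "HW_NIC_OFFLOAD_GENEVE"},
)
```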

[Yahoo-eng-team] [Bug 1769854] [NEW] Local disk without enough capacity appears in allocation candidates

2018-05-08 Thread Tetsuro Nakamura
Public bug reported: How to reproduce - In placement, 1. Set up a compute node resource provider with inventories of - 24 VCPU - 2048 MEMORY_MB - 1600 DISK_GB 2. Set up a shared storage resource provider with the "MISC_SHARES_VIA_AGGREGATE" trait with - 2000

[Yahoo-eng-team] [Bug 1769853] [NEW] Local disk without enough capacity appears in allocation candidates

2018-05-08 Thread Tetsuro Nakamura
Public bug reported: How to reproduce - In placement, 1. Set up a compute node resource provider with inventories of - 24 VCPU - 2048 MEMORY_MB - 1600 DISK_GB 2. Set up a shared storage resource provider with the "MISC_SHARES_VIA_AGGREGATE" trait with - 2000
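The request step of the reproduce can be sketched like this, assuming the ask for DISK_GB exceeds the compute node's local 1600 but fits the shared 2000 (endpoint, token, and exact amounts are placeholders):

```python
# Hedged sketch: with 1600 local DISK_GB and 2000 shared DISK_GB, a
# request for 1800 should only be satisfiable via the sharing provider,
# yet per the report a local-disk candidate is returned as well.
import requests

resp = requests.get(
    "http://placement.example.com/allocation_candidates",  # placeholder
    headers={"X-Auth-Token": "...",  # placeholder token
             "OpenStack-API-Version": "placement 1.17"},
    params={"resources": "VCPU:1,MEMORY_MB:64,DISK_GB:1800"},
)
```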

[Yahoo-eng-team] [Bug 1763907] [NEW] allocation candidates member_of gets all the shared providers

2018-04-14 Thread Tetsuro Nakamura
Public bug reported: When the `member_of` parameter is present, only non-shared providers in the specified aggregate are picked, but a non-shared provider brings in shared providers from outside the specified aggregate. For example, with the following setup, ```    CN1 (VCPU)

[Yahoo-eng-team] [Bug 1760276] [NEW] "provider_summaries" doesn't include resources that are not requested

2018-03-31 Thread Tetsuro Nakamura
Public bug reported: Description === In the ``GET /allocation_candidates`` API, ``provider_summaries`` should show all the inventories for all the resource classes in all the resource providers. However, ``provider_summaries`` doesn't contain resources that aren't requested. Steps to
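A sketch of the observation; with only VCPU requested, the summaries come back missing the providers' other inventoried classes (endpoint and token are placeholders):

```python
# Hedged sketch: provider_summaries should enumerate every resource
# class each provider has inventory for, not only the requested VCPU.
import requests

resp = requests.get(
    "http://placement.example.com/allocation_candidates",  # placeholder
    headers={"X-Auth-Token": "...",  # placeholder token
             "OpenStack-API-Version": "placement 1.14"},
    params={"resources": "VCPU:1"},
)
for rp_uuid, summary in resp.json()["provider_summaries"].items():
    # Expected keys: VCPU, MEMORY_MB, DISK_GB, ...; observed: VCPU only.
    print(rp_uuid, sorted(summary["resources"]))
```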

[Yahoo-eng-team] [Bug 1750701] [NEW] NUMATopologyFilter doesn't exclude Hyper-V when cpu pinning specified.

2018-02-20 Thread Tetsuro Nakamura
Public bug reported: Description === As described in [1], the Hyper-V driver supports NUMA placement policies. But it doesn't support the CPU pinning policy [2]. So the host should be excluded by NUMATopologyFilter if the end user tries to build a VM with a CPU pinning policy. [1]

[Yahoo-eng-team] [Bug 1746674] [NEW] isolated cpu thread policy doesn't work with multi numa node

2018-01-31 Thread Tetsuro Nakamura
Public bug reported: Description === As described in test_multi_nodes_isolate() in https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/test_hardware.py#L3006-L3024, the numa_fit_instance_to_host() function returns None for cpuset_reserved for cells with id > 1. -
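A flavor that exercises this path (and the emulator-thread interaction in the next entry) might look like the sketch below; it assumes an already-configured keystoneauth1 session `sess`, and the flavor sizes are arbitrary:

```python
# Hedged sketch: two NUMA nodes with the isolate thread policy, the
# combination for which numa_fit_instance_to_host() loses
# cpuset_reserved on cells with id > 1 per the report.
from novaclient import client

nova = client.Client("2.60", session=sess)  # sess: assumed keystone session
flavor = nova.flavors.create(name="numa-isolate", ram=2048, vcpus=4, disk=10)
flavor.set_keys({
    "hw:numa_nodes": "2",
    "hw:cpu_policy": "dedicated",
    "hw:cpu_thread_policy": "isolate",
    "hw:emulator_threads_policy": "isolate",  # what cpuset_reserved serves
})
```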

[Yahoo-eng-team] [Bug 1746393] [NEW] 'cpu_thread_policy' impacts on emulator threads

2018-01-30 Thread Tetsuro Nakamura
Public bug reported: In bug 1744965 (https://bugs.launchpad.net/nova/+bug/1744965), it is reported that the way emulator_threads_policy allocates the extra CPU resource for the emulator is not optimal. This report shows the bug also persists when `cpu_thread_policy=isolate`. The instance I use for

[Yahoo-eng-team] [Bug 1737449] [NEW] [libvirt] virt_type=qemu doesn't support NUMA related features

2017-12-10 Thread Tetsuro Nakamura
Public bug reported: With the change of https://review.openstack.org/#/c/465160, NUMA-related features like CPU pinning, hugepages, and realtime are now explicitly disabled when using the libvirt driver with `virt_type=qemu`, and compute hosts with the libvirt/qemu driver are filtered out with
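For reference, a configuration sketch of the setting the change keys on; `kvm` stands in for whatever accelerated virt_type a deployment actually supports:

```ini
# nova.conf (sketch): with virt_type=qemu (or xen, per the next entry),
# the libvirt driver now reports no NUMA support, so hosts are filtered
# out for instances requesting CPU pinning, hugepages, or realtime.
[libvirt]
virt_type = kvm
```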

[Yahoo-eng-team] [Bug 1737450] [NEW] [libvirt] virt_type=xen doesn't support NUMA related features

2017-12-10 Thread Tetsuro Nakamura
Public bug reported: With the change of https://review.openstack.org/#/c/465160, NUMA-related features like CPU pinning, hugepages, and realtime are now explicitly disabled when using the libvirt driver with `virt_type=xen`, and compute hosts with the libvirt/xen driver are filtered out with