Public bug reported:
Description
===
When we request a shared resource granularly, we can get duplicate
allocation candidates for the same resource provider.
How to reproduce
1. Set up
1-1. Set up two compute nodes (cn1, cn2 with VCPU resources)
1-2. Set up one shar
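Although the excerpt above is cut off, the reproducer boils down to a granular allocation-candidates query. A minimal sketch, assuming a placement endpoint, token, and resource amounts (all hypothetical here); granular request groups need microversion 1.25 or later:
```python
# Granular query: each numbered group (resources1, resources2, ...) must be
# satisfied separately, which is where the duplicate candidates show up.
import requests

PLACEMENT = "http://placement.example/placement"  # hypothetical endpoint
HEADERS = {"X-Auth-Token": "TOKEN",               # hypothetical token
           "OpenStack-API-Version": "placement 1.25"}

resp = requests.get(PLACEMENT + "/allocation_candidates", headers=HEADERS,
                    params={"resources1": "VCPU:1",
                            "resources2": "DISK_GB:10"})
for req in resp.json()["allocation_requests"]:
    print(req["allocations"])  # duplicated candidates repeat the same RP here
```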
Public bug reported:
Description
===
In the Rocky cycle, 'GET /allocation_candidates' started to be aware of nested
providers from microversion 1.29.
From microversion 1.29, it can join allocations from resource providers in the
same tree.
To keep the behavior of microversion before 1.2
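A sketch of the behavior difference, assuming a hypothetical placement endpoint and token: the same query issued just below and at microversion 1.29 can return different candidate sets, since 1.29 may combine providers from the same tree.
```python
import requests

def candidates(version):
    # Ask for the same resources at a given placement microversion.
    resp = requests.get(
        "http://placement.example/placement/allocation_candidates",
        headers={"X-Auth-Token": "TOKEN",
                 "OpenStack-API-Version": "placement " + version},
        params={"resources": "VCPU:1,MEMORY_MB:256"})
    return resp.json()["allocation_requests"]

# From 1.29 on, nested providers in one tree may serve a single request.
print(len(candidates("1.28")), len(candidates("1.29")))
```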
** Also affects: nova/rocky
Importance: Undecided
Status: New
** No longer affects: nova
** Description changed:
Description
===
- In rocky cycle, 'GET /allocation_candidates' started to be aware of nested
providers from microversion 1.29.
+ In rocky cycle, 'GET /allocat
Public bug reported:
In bug 1744965 (https://bugs.launchpad.net/nova/+bug/1744965), it is
reported that the way emulator_threads_policy allocates the extra CPU
resource for the emulator is not optimal.
This report shows that the bug also remains when `cpu_thread_policy=isolate` is set.
The instance I use for tes
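For context, the flavor extra specs involved, shown as a plain dict for illustration (the exact flavor used is cut off above):
```python
# hw:emulator_threads_policy=isolate claims an extra pCPU for emulator
# threads; the report is that the suboptimal allocation also occurs when
# hw:cpu_thread_policy=isolate is set.
extra_specs = {
    "hw:cpu_policy": "dedicated",
    "hw:cpu_thread_policy": "isolate",
    "hw:emulator_threads_policy": "isolate",
}
```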
Public bug reported:
Description
===
As described in test_multi_nodes_isolate() in
https://github.com/openstack/nova/blob/master/nova/tests/unit/virt/test_hardware.py#L3006-L3024,
the numa_fit_instance_to_host() function returns None for cpuset_reserved for
cells with id > 1.
-
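An illustration of the observation (hypothetical values, not nova code):
```python
# cpuset_reserved carries the host CPUs reserved for emulator threads per
# fitted instance cell. The linked test sees it populated for the first
# cell but None for cells with id > 1.
cpuset_reserved_by_cell = {
    1: {6},    # cell id 1: a reserved pCPU is returned
    2: None,   # cell id > 1: unexpectedly None
}
```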
Public bug reported:
Description
===
As described in [1], the Hyper-V driver supports NUMA placement policies.
But it doesn't support the CPU pinning policy [2].
So the host should be excluded by NUMATopologyFilter if the end user tries to
build a VM with the CPU pinning policy.
[1]
https://docs.open
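The flavor extra spec in question, shown as a plain dict for illustration:
```python
# Requests dedicated (pinned) CPUs; since the Hyper-V driver does not
# support the pinning policy, NUMATopologyFilter should exclude such hosts.
extra_specs = {"hw:cpu_policy": "dedicated"}
```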
Public bug reported:
Description
===
In the ``GET /allocation_candidates`` API, ``provider_summaries`` should show all
the inventories for all the resource classes in all the resource providers.
However, ``provider_summaries`` doesn't contain resources that aren't requested.
Steps to repro
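A sketch of the observation, assuming a hypothetical placement endpoint and token: request only VCPU and inspect provider_summaries.
```python
import requests

resp = requests.get(
    "http://placement.example/placement/allocation_candidates",
    headers={"X-Auth-Token": "TOKEN",
             "OpenStack-API-Version": "placement 1.17"},
    params={"resources": "VCPU:1"})
# Each summary should list every resource class the provider has inventory
# for, but unrequested classes (e.g. DISK_GB) are missing.
for uuid, summary in resp.json()["provider_summaries"].items():
    print(uuid, sorted(summary["resources"]))
```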
Public bug reported:
When the `member_of` parameter is present, only non-sharing providers in
the specified aggregate are picked, but a non-sharing provider can bring in
sharing providers from outside the specified aggregate.
For example, with the following setup,
```
CN1 (VCPU)
```
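A sketch of the query in question, assuming a hypothetical placement endpoint, token, and aggregate UUID; member_of is available from microversion 1.21:
```python
import requests

# member_of should restrict candidates to providers in AGG1, but sharing
# providers from outside AGG1 can still be pulled in via the non-sharing
# providers that do match.
resp = requests.get(
    "http://placement.example/placement/allocation_candidates",
    headers={"X-Auth-Token": "TOKEN",
             "OpenStack-API-Version": "placement 1.21"},
    params={"resources": "VCPU:1,DISK_GB:10",
            "member_of": "AGG1_UUID"})  # hypothetical aggregate UUID
```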
Public bug reported:
How to reproduce
-
In placement,
1. Set up a compute node resource provider with inventories of
- 24 VCPU
- 2048 MEMORY_MB
- 1600 DISK_GB
2. Set up a shared storage resource provider with the "MISC_SHARES_VIA_AGGREGATE"
trait, with
- 2000 DISK
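A sketch of this setup through the placement API, assuming a hypothetical endpoint, token, and UUIDs (real calls need proper UUID strings); both providers would then be joined into a common aggregate via PUT /resource_providers/{uuid}/aggregates:
```python
import requests

BASE = "http://placement.example/placement"
H = {"X-Auth-Token": "TOKEN", "OpenStack-API-Version": "placement 1.19"}

# 1. The compute node provider and its inventories.
requests.post(BASE + "/resource_providers", headers=H,
              json={"name": "cn", "uuid": "CN_UUID"})
requests.put(BASE + "/resource_providers/CN_UUID/inventories", headers=H,
             json={"resource_provider_generation": 0,
                   "inventories": {"VCPU": {"total": 24},
                                   "MEMORY_MB": {"total": 2048},
                                   "DISK_GB": {"total": 1600}}})

# 2. The sharing provider, marked with the MISC_SHARES_VIA_AGGREGATE trait.
requests.post(BASE + "/resource_providers", headers=H,
              json={"name": "ss", "uuid": "SS_UUID"})
requests.put(BASE + "/resource_providers/SS_UUID/traits", headers=H,
             json={"resource_provider_generation": 0,
                   "traits": ["MISC_SHARES_VIA_AGGREGATE"]})
```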
Public bug reported:
* We set up two compute nodes with NUMA node and PF nested providers,
and only one PF on cn1 has the HW_NIC_OFFLOAD_GENEVE trait.
compute node (cn1)
[CPU:16, MEMORY_MB:32768]
(provider tree diagram truncated in this excerpt)
Public bug reported:
Some candidates are missing when multiple sharing providers have multiple
shared resources.
Description
===
There can be legitimately distinct allocation requests with the same
combination of providers.
But placement filters them out if the combination of the providers
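An illustration of two such candidates (hypothetical UUIDs): both use the same pair of sharing providers, yet they are legitimately distinct allocation requests, and collapsing on the provider combination loses one of them.
```python
candidate_a = {"SS1_UUID": {"DISK_GB": 10},
               "SS2_UUID": {"IPV4_ADDRESS": 1}}
candidate_b = {"SS1_UUID": {"IPV4_ADDRESS": 1},
               "SS2_UUID": {"DISK_GB": 10}}
```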
This was novel when I reported it, but the bug has since been fixed during
the granular-candidates work.
I linked the related patches manually just now.
Unit test submitted in https://review.openstack.org/#/c/566842/
Fixed in https://review.openstack.org/#/c/517757/
** Changed in: nova
Status: Confi
Public bug reported:
Description
===
You can update a resource provider's (old root RP's) parent RP from None to a
specific existing RP (an original root RP).
But if the resource provider (old root RP) has a child RP, the child RP's root
RP is not updated automatically to the new root RP.
Repr
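A sketch of the re-parenting call, assuming a hypothetical placement endpoint, token, and UUIDs; updating parent_provider_uuid needs microversion 1.14 or later:
```python
import requests

# After this, every descendant of OLD_ROOT_UUID should get NEW_ROOT_UUID as
# its root, but the report is that child providers keep the stale root.
requests.put(
    "http://placement.example/placement/resource_providers/OLD_ROOT_UUID",
    headers={"X-Auth-Token": "TOKEN",
             "OpenStack-API-Version": "placement 1.14"},
    json={"name": "old-root",  # hypothetical provider name
          "parent_provider_uuid": "NEW_ROOT_UUID"})
```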
Public bug reported:
Description
===
GET /resource_providers/{uuid}/allocations doesn't get all the
allocations
Reproduce
=
1. Set up 1 resource provider with some inventories
2. A user (userA) in a project (projectX) makes 1 consumer (Consumer1) allocate
on the RP
3. The same use
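A sketch of the listing call, assuming a hypothetical placement endpoint, token, and provider UUID:
```python
import requests

resp = requests.get(
    "http://placement.example/placement/resource_providers/RP_UUID/allocations",
    headers={"X-Auth-Token": "TOKEN"})
# The response is keyed by consumer UUID; the report is that some consumers'
# allocations against this provider do not appear here.
print(resp.json()["allocations"])
```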
Public bug reported:
"GET /allocation_candidates" now supports "member_of" parameter.
With nested providers present, this should work with the following constraints.
-
(a) With "member_of" qparam, aggregates on the root should span on the whole
tree
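A sketch of constraint (a), assuming a hypothetical placement endpoint, token, and the UUID of an aggregate associated with a root provider:
```python
import requests

# With the aggregate on the root, candidates from any provider in that
# root's tree should be eligible, not just from the root itself.
resp = requests.get(
    "http://placement.example/placement/allocation_candidates",
    headers={"X-Auth-Token": "TOKEN",
             "OpenStack-API-Version": "placement 1.29"},
    params={"resources": "VCPU:1",
            "member_of": "AGG_ON_ROOT_UUID"})  # hypothetical aggregate UUID
```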
Public bug reported:
When the nested resource provider feature was added in Rocky, the
root_provider_uuid column, which should be a non-None value, was created in
the resource provider DB. For existing resource providers created before
Queens, we have an online data migration:
https://review.openstack.org/#/
Abandoned https://review.openstack.org/#/c/619126/ in favor of
https://review.openstack.org/#/c/624943/, which is now committed.
** Changed in: nova
Status: New => Won't Fix
** Changed in: nova
Status: Won't Fix => Confirmed
** Changed in: nova
Status: Confirmed => In Progre
File ".../oslo_db/sqlalchemy/enginefacade.py", line 532, in _setup_for_connection
    "No sql_connection parameter is established")
oslo_db.exception.CantStartEngineError: No sql_connection parameter is
established
** Affects: nova
Importance: High
Assignee: Tetsuro Nakamura (tetsuro0
Public bug reported:
With the change of https://review.openstack.org/#/c/465160,
NUMA-related features like CPU pinning, hugepages, and realtime are now
explicitly disabled when using the libvirt driver with `virt_type=xen`,
and compute hosts with the libvirt/xen driver are filtered out by
NUMATopolo
Public bug reported:
With the change of https://review.openstack.org/#/c/465160,
NUMA-related features like CPU pinning, hugepages, and realtime are now
explicitly disabled when using the libvirt driver with `virt_type=qemu`,
and compute hosts with the libvirt/qemu driver are filtered out by
NUMATopo
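For reference, the configuration involved, as a minimal nova.conf fragment:
```
[libvirt]
virt_type = qemu
```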