attributes: numCpuThreads,
cpuMhz, and numCpuCores. Probably with some "what features are turned
on" magic for extra accuracy.
The correct math is being researched; I'll hang it on this bug when it
is figured out.
** Affects: nova
Importance: Low
Assignee: Chris Dent (cdent)
Status:
** Changed in: nova
Status: Incomplete => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1837995
Title:
"Unexpected API Error" when use "openstack usage
The [placement] section is still necessary for nova.conf to configure
how nova talks to the placement service. As it says in there you are
configuring "access to the Placement service".
So I'm pretty sure this is not a bug, and will mark it as such. However,
if I've missed your point, please
** Changed in: nova
Status: In Progress => Won't Fix
--
https://bugs.launchpad.net/bugs/1782690
Title:
Consider forbidden traits in early exit of
** Changed in: nova/pike
Status: Fix Committed => Fix Released
** Changed in: nova
Status: Confirmed => Fix Committed
--
Since there's been a lot of churn since this conversation started, and
because we're over on storyboard now, and other issues have taken
greater priority, I'm going to mark this as a wontfix. Not because we
won't fix the conceptual situation, but because the concept isn't really
caught by the bug.
Public bug reported:
It's hard to find a single place in the nova docs where the impact of
availability zones (including default) on the capabilities of live or cold
migrations is clear.
In https://docs.openstack.org/nova/latest/user/aggregates.html
#availability-zones-azs is probably a good
Public bug reported:
When running 'nova-manage simple_cell_setup ...', if hosts are already
mapped, the _map_cell_and_hosts method prints a message of 'All hosts are
already mapped to cell(s), exiting.' and then proceeds to map instances.
It does not, in fact, exit.
This isn't the end of the world, but
After speaking with coreycb we're going to drop nova/placement from this
as it no longer quite fits and the existing snap related bugs are
sufficient to provide an aide-mémoire.
** Changed in: nova
Status: In Progress => Won't Fix
--
Public bug reported:
See: http://logs.openstack.org/98/538498/22/gate/nova-tox-functional-
py35/7673d3e/testr_results.html.gz
The failing tests there are failing because the Database fixture from
placement is used directly, and configuration opts are not being
registered properly. This was an
** Changed in: nova/rocky
Status: In Progress => Won't Fix
** Changed in: nova
Status: In Progress => Fix Committed
--
** Changed in: nova
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1809401
Title:
os-resource-classes: Could not satisfy
Public bug reported:
See: http://logs.openstack.org/89/639889/3/check/placement-
perfload/e56f0a0/logs/placement-api.log (or any other recent perfload
run) where there are multiple errors when trying to create aggregates.
Various bits of work have been done to try to fix that up, but
apparently
/607508/14/check/placement-gabbi-
tempest/f7c3eca/controller/logs/screen-placement-
api.txt.gz#_Feb_25_22_37_31_423135 for an example.
This can be fixed by casting the 'used' value to an int when creating a
Usage.
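A minimal sketch of that fix, with a stand-in for the actual Usage object (the function and field names here are illustrative, not placement's real code):

```python
from decimal import Decimal


def make_usage(resource_class, used):
    # SUM() results can come back from the database as Decimal, which
    # then misbehaves downstream; cast to int when building the usage.
    return {"resource_class": resource_class, "usage": int(used)}


usage = make_usage("VCPU", Decimal("4"))
```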
** Affects: nova
Importance: High
Assignee: Chris Dent (cdent
Public bug reported:
When listing allocations by resource provider or consumer uuid, the
updated_at and created_at fields in the database are not loaded into the
object, so they default to now when their times are used to generate
last-modified headers in http responses.
This isn't a huge problem
Public bug reported:
'test_name_with_non_printable_characters' in the 'test_flavors' unit
tests checks to see that a non-printable character cannot be allowed in
a flavor name. This fails in python 3.7.
The reason it fails is because in Python 3.7 the 'unicodedata' package
was updated [1] to
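A minimal sketch of the kind of check involved (not nova's actual test code): printability is decided by the Unicode data shipped with the interpreter, which is what shifted in 3.7:

```python
def is_acceptable_flavor_name(name):
    # Reject names containing non-printable characters. Which characters
    # count as printable depends on the Unicode tables bundled with the
    # Python version, so a character an older test relied on being
    # non-printable may no longer be classified that way.
    return all(ch.isprintable() for ch in name)


assert is_acceptable_flavor_name("m1.small")
assert not is_acceptable_flavor_name("bad\x00name")  # NUL is never printable
```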
Public bug reported:
Generators used in multi cell list handling raise StopIteration, which
is not something python 3.7 likes. Efforts to add python 3.7 testing to
nova [1] revealed this (and similar for neighboring tests):
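The Python 3.7 behavior in question is PEP 479: a StopIteration that escapes a generator body is converted to RuntimeError rather than quietly ending iteration. A minimal illustration (not nova's actual code):

```python
def first_item(iterator):
    # If 'iterator' is exhausted, next() raises StopIteration inside the
    # generator body; under PEP 479 (mandatory in Python 3.7) that is
    # converted to RuntimeError instead of silently ending the generator.
    yield next(iterator)


gen = first_item(iter([]))
try:
    next(gen)
    outcome = "no error"
except RuntimeError:
    outcome = "RuntimeError"  # Python 3.7+ surfaces the leak loudly
```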
Public bug reported:
In the extracted placement, the coverage tests run both unit and
functional tests. This is because so much of the functionality is tested
by gabbi. In those runs we can see that a few methods in
placement/objects/resource_provider.py are not reached. They are:
def
Public bug reported:
The ServerGroup functional tests do not adequately manage the _SUPPORTS*
globals used at
https://github.com/openstack/nova/blob/1a1ea8e2aa66a2654e6cc141c735e47bbd8c4fef/nova/scheduler/utils.py#L805
leading to tests that sometimes fail.
The easy fix is to reset the
Public bug reported:
It's possible for the _ensure_aggregate code in
objects/resource_provider.py to, under unusual circumstances, reach a
maximum recursion error, because it calls itself when there is a
DBDuplicateEntry error.
http://logs.openstack.org/84/602484/30/check/placement-
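The general fix pattern, sketched (not the actual placement code; the names are stand-ins): replace unbounded self-recursion on a duplicate-entry race with a bounded retry loop.

```python
class DBDuplicateEntry(Exception):
    """Stand-in for oslo.db's duplicate-entry exception."""


def ensure_aggregate(create, lookup, max_retries=5):
    # Bounded loop: on a duplicate-entry race, fall back to looking the
    # existing row up; never recurse without a limit.
    for _ in range(max_retries):
        try:
            return create()
        except DBDuplicateEntry:
            existing = lookup()
            if existing is not None:
                return existing
    raise RuntimeError("could not ensure aggregate after %d tries" % max_retries)
```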
Public bug reported:
The check for double words in test_hacking is failing in python 3.6.7
(released in ubuntu 18.04 within the last few days) and in new versions
of 3.7.x. This is because of this change to python:
https://bugs.python.org/issue33899 .
This is causing failures in python 36
Public bug reported:
oslo_config allows configuration to come from the process environment
(e.g., OS_PLACEMENT_DATABASE__CONNECTION). This was developed to allow
services that use oslo_config to be hosted within immutable containers.
However, as currently written, the placement-wsgi app (in both
Public bug reported:
I'm using a recent devstack in late October 2018 with no special
keystone configuration; it is running under uwsgi and apache2.
If I make a request of the service to a bogus URL:
curl -v http://localhost/identity/v3/narf
> GET /identity/v3/narf HTTP/1.1
> Host: localhost
This is differently out of date now that we have an extracted placement.
** Changed in: nova
Status: In Progress => Invalid
--
Public bug reported:
The nova-api-wsgi script provides a quick and easy way to run the nova-
api from the command line. In at least python3.6 it fails with:
ERROR nova Traceback (most recent call last):
ERROR nova File "/usr/local/bin/nova-api-wsgi", line 50, in
ERROR nova
catenating the response
header with other values, such as base URLs.
** Affects: nova
Importance: Undecided
Assignee: Chris Dent (cdent)
Status: In Progress
** Tags: api
--
Public bug reported:
Not sure if this is in itself a bug, but instead indicates that there
are issues with aggregate handling that may show up in the real world
under some conditions.
The placecat tooling at https://github.com/cdent/placecat has a suite of
automated tests that run every now and
long term global conf leakage that
we may wish to consider fixing, but the short term fix is to register
the opts in the gabbi fixture. Patch forthcoming.
** Affects: nova
Importance: Medium
Assignee: Chris Dent (cdent)
Status: In Progress
** Tags: placement testing
--
Public bug reported:
NOTE: This may be just a postgresql problem, not sure.
When doing some further experiments with load testing placement, my
resource provider create script, which uses asyncio, was able to cause
several 500 errors from the placement service of the following form:
```
Public bug reported:
With the advent of placement, the FilterScheduler no longer provides
granular information about which class of resource (disk, VCPU, RAM) is
not available in sufficient quantities to allow a host to be found.
This is because placement is now making those choices and does not
Public bug reported:
When oslo policy checks were added to placement, fixtures and functional
tests were updated to hide warnings related to scope checks that cannot
(yet) work in the way placement is managing policy.
Those same warnings happen with every request on an actually running
service.
Public bug reported:
Using today's master, there is a big performance degradation in GET
/allocation_candidates when there is a large number of resource
providers (in my tests 1000, each with the same inventory as described
in [1]). 17s when querying all three resource classes with
Public bug reported:
Both the placement and nova apis allow oslo_middleware.cors in their
WSGI middleware stacks.
Placement has some gabbi functional tests which test that the middleware
is present and does the right thing when using the middleware's own
configuration defaults. Both when it is
Public bug reported:
When running the nova functional tests under python 3.6 the
nova.tests.functional.api.openstack.placement.db.test_allocation_candidates.AllocationCandidatesTestCase.test_all_sharing_providers.*
tests (there are 3) all fail because incorrect results are produced on
the call to
While the behavior here is as described (you can't move a resource
provider between cells), that's how things are designed. You no longer
get the DB error, instead the 409 happens.
So I think this is invalid, working as designed.
That the design is imperfect is a different problem...
**
Public bug reported:
The /reshaper API is willing to accept an empty dictionary for the
inventories attribute of a resource provider. This is intended to mean
"clear all the inventory".
However, the backend transformer code is not prepared to handle this:
File
Public bug reported:
In microversion 1.12 of placement, a schema for allocations was
introduced that required the allocations, project_id and user_id fields.
This schema is used for subsequent microversions, copied and manipulated
as required.
However, it has a flaw. It does not set
As the main issue here has been sort of addressed and there are many
other related issues, it's better to close this and deal with the more
granular issues as they come up.
** Changed in: nova
Status: Confirmed => Fix Released
--
Marking as fix released. If it becomes an issue we can consider
backporting it.
** Changed in: nova
Status: Confirmed => Fix Released
--
Gonna kill this one. We seem to have reached the consensus that
overhead is something an operator may manage however they like; it is
not something we will generically manage.
In the future it might make sense for the virt drivers to handle
overhead via reserved when they are working with
** Changed in: nova
Status: Confirmed => Won't Fix
--
https://bugs.launchpad.net/bugs/1708958
Title:
disabling a compute service does not disable the
Public bug reported:
This is using microversion 1.28 of the placement API. I will start the
process of finding when this went wrong after submitting this bug. I'm
guessing at the start of POST to /allocations, but we'll see.
When a POST to /allocations contains multiple consumers each writing
Public bug reported:
If we write some allocations with PUT /allocations/{uuid} at modern
microversions, a consumer record is created for {uuid} and a generation
is created for that consumer. Each subsequent attempt to PUT
/allocations/{uuid} must include a matching consumer generation.
If the
: Chris Dent (cdent)
Status: New
** Tags: placement
--
https://bugs.launchpad.net/bugs/1778576
Title:
making new allocations for one consumer against
Public bug reported:
When using consumer generations to create new allocations, the value of
the generation is expected to be None on the python side, and 'null' in
JSON. The error response sent over the API says "expected None but got
1" which doesn't help much since the api is in JSON.
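The mismatch in miniature: on the Python side the expected generation is None, but over the wire the same value is spelled null, so an error message that says "expected None" is confusing to a client working purely in JSON.

```python
import json

# Python's None and JSON's null are the same value in different clothes.
assert json.dumps(None) == "null"
assert json.loads("null") is None
```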
**
placement only has one version.
This is easily fixable and easily backportable, so I'll get on that.
This is causing problems for at least mnaser when trying to write his
own client code.
** Affects: nova
Importance: Medium
Assignee: Chris Dent (cdent)
Status: Triaged
** Tags
he caching or syncing a bit differently (see
https://review.openstack.org/#/c/553857/ for an example).
** Affects: nova
Importance: Medium
Assignee: Chris Dent (cdent)
Status: Triaged
** Tags: placement
--
Public bug reported:
Modern webob has improved its management of accept headers to be more in
alignment with the HTTP RFCs (see bug
https://bugs.launchpad.net/nova/+bug/1765748 ), deprecating their old
handling:
DeprecationWarning: The behavior of AcceptValidHeader.best_match is
currently being
nova
Importance: Undecided
Assignee: Chris Dent (cdent)
Status: In Progress
** Tags: placement
--
https://bugs.launchpad.net/bugs/1771384
Title:
Public bug reported:
In stress testing of a nova+placement scenario where there is only one
nova-compute process (and thus only one resource provider) but more than
one thread worth of nova-scheduler it is fairly easy to trigger the
"Failed scheduler client operation claim_resources: out of
perimental job
for checking with postgres every now and again.
[1] https://hub.docker.com/r/cdent/placedock/
** Affects: nova
Importance: Low
Assignee: Chris Dent (cdent)
Status: In Progress
** Tags: placement
--
this piece of code is very very rarely called and we don't have tests
that (can) cover it.
I just happened to notice while doing a readthrough.
** Affects: nova
Importance: Medium
Assignee: Chris Dent (cdent)
Status: In Progress
** Tags: placement
--
t it wrong).
** Affects: nova
Importance: Medium
Assignee: Chris Dent (cdent)
Status: Triaged
** Tags: placement testing
--
https://bugs.lau
Public bug reported:
It is possible to create two different resource providers (and probably
other entities) with the same UUID by creating one with '-' and the
other without. This is because both the json schema and ovo validate
UUIDs via the same route (different code but same concept): with
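The ambiguity in miniature: Python's uuid module treats the dashed and undashed spellings as the same UUID, but naive string comparison (or string-keyed storage) treats them as distinct (the uuid here is a made-up example):

```python
import uuid

dashed = "5b2f1e08-1c8e-4a52-9f40-111111111111"
undashed = dashed.replace("-", "")

# The uuid module considers the two spellings the same UUID value...
assert uuid.UUID(dashed) == uuid.UUID(undashed)
# ...but as plain strings (e.g. database keys) they are distinct, which
# is how two "different" providers can end up sharing one UUID.
assert dashed != undashed
```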
Public bug reported:
Most requests to the placement service eventually reach the database
code in the resource_provider.py file and as a result eventually run
_ensure_trait_sync to make sure that this process has synced the os-
traits library to its traits database table.
While there is a flag
** Changed in: nova
Status: Confirmed => Invalid
--
https://bugs.launchpad.net/bugs/1749797
Title:
placement returns 503 when keystone is down
This is actually done now, but my use of partial-bug on both changes
made the automation not happen.
** Changed in: nova
Status: In Progress => Fix Released
--
cts: nova
Importance: Undecided
Assignee: Chris Dent (cdent)
Status: In Progress
** Tags: low-hanging-fruit testing
--
** Also affects: keystonemiddleware
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1749797
Title:
placement returns 503 when
I disagree with Sylvain on this one so going to re-open, but it is low-
ish priority because the impact isn't significant: if max_unit is
greater than reserved and allocation_ratio is 1 requesting a single
max_unit resource will fail in an expected way that does not involve
max_unit:
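A worked example of that arithmetic, using placement's usual capacity formula (the specific numbers here are made up for illustration):

```python
# capacity = (total - reserved) * allocation_ratio
total, reserved, allocation_ratio = 10, 2, 1.0
max_unit, used = 10, 0

capacity = (total - reserved) * allocation_ratio  # 8.0

requested = max_unit           # 10 passes the max_unit check...
assert requested <= max_unit
assert used + requested > capacity  # ...but fails on capacity, as expected
```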
It is, yes. I think the duplicate was created during one of those times
when launchpad was doing timeouts and I didn't notice that I created it
twice.
** Changed in: nova
Status: Triaged => Fix Released
--
The strings in the log don't seem to be showing up in recent logstash,
so I'm going to mark this one dead, so it's not lingering.
** Changed in: nova
Status: Triaged => Invalid
--
I've added api-sig to this because the fact that this issue has shown up
in the wild should be good motivation for us to make a guideline about
how to address it. The code for adding the requisite headers to
placement may be a useful starting point:
https://review.openstack.org/#/c/521640/
**
Public bug reported:
In nova/tests/unit/scheduler/client/test_report.py there are several
tests which confirm the URLs that get passed to the placement service.
These create query strings by using code like:
expected_url = '/allocation_candidates?%s' % parse.urlencode(
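A sketch of the pitfall such tests can hit, assuming the issue is that urlencoding a dict makes the expected URL depend on dict iteration order (the parameters here are made up); sorting the items first makes the query string deterministic:

```python
from urllib import parse

params = {"resources": "VCPU:1,MEMORY_MB:512", "limit": "10"}

# sorted() pins the parameter order regardless of dict iteration order.
query = parse.urlencode(sorted(params.items()))
assert query == "limit=10&resources=VCPU%3A1%2CMEMORY_MB%3A512"
```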
Public bug reported:
Very rarely (so rarely in fact that it only seems to happen when test
order is much different from the norm) some unit tests which encounter
the resource_class_cache can fail as follows:
http://logs.openstack.org/49/540049/2/check/openstack-tox-
Apparently this was mostly expected, but not fully documented, so going
to invalidate the bug.
https://review.openstack.org/#/c/535642/
** Changed in: nova
Status: New => Invalid
--
Public bug reported:
The code for /allocation_candidates is set up to be able to process a
'required' parameter alongside the 'resources' parameter. This results in a
collection of RequestGroups which are used by the AllocationCandidates code in
nova/objects/resource_provider.py
But we can't
that this is not the only case of inadvertent imports, but is one
of the main vectors. Others will be identified and additional bugs
created.
** Affects: nova
Importance: Low
Assignee: Chris Dent (cdent)
Status: Triaged
** Tags: placement
--
Public bug reported:
When the placement service is supposed to restart in grenade (pike to
master) it doesn't actually restart:
http://logs.openstack.org/93/385693/84/check/legacy-grenade-dsvm-
neutron-multinode-live-
migration/9fa93e0/logs/grenade.sh.txt.gz#_2017-12-05_00_08_01_111
This leads
Public bug reported:
Nova master, late November.
When a resource provider fails to create after a POST
/resource_providers for some reason, the error message identifies the
provider by uuid. However, the uuid may not have been supplied by the
client; it may be generated server side. So the name
file
handling as used in wsgi.py, but it's not clear if the arg will get
passed through. Further experimentation required.
** Affects: nova
Importance: Undecided
Assignee: Chris Dent (cdent)
Status: New
** Tags: placement
--
because it provides control and possibility to
test exactly this problem.
** Affects: nova
Importance: Medium
Assignee: Chris Dent (cdent)
Status: Triaged
** Tags: placement testing
--
Public bug reported:
In the placement service (as of microversion 1.10), if you request a
microversion that is outside the acceptable range of VERSIONS and do
_not_ include an 'Accept' header in the request, there is a 500 and a
KeyError while webob tries to look up the Accept header.
The issue
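A minimal sketch of the defensive pattern implied here (plain dicts, not webob's actual API): treat a missing Accept header as "accept anything" rather than indexing into the headers and risking a KeyError.

```python
def negotiate_accept(headers):
    # Default a missing Accept header to */* instead of doing
    # headers["accept"], which is where a KeyError (and thus a 500)
    # can come from.
    return headers.get("accept", "*/*")


assert negotiate_accept({}) == "*/*"
assert negotiate_accept({"accept": "application/json"}) == "application/json"
```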
Public bug reported:
This is a note to self for the sake of bookkeeping.
Coverage reports show that there's zero gabbi coverage for a request
with an empty `resources` query string. As this is relatively simple to
add, may as well have it.
** Affects: nova
Importance: Low
Assignee: Chris
Public bug reported:
This is a note to self for the sake of bookkeeping.
Coverage reports show that there's zero gabbi coverage for a request
that attempts to PUT to update a standard resource class. This is easy
to fix, so may as well.
** Affects: nova
Importance: Low
Assignee: Chris
I think the above patch got us as far as we're going to get on this issue
for now, so gonna mark it done.
** Changed in: nova
Status: In Progress => Fix Released
--
Public bug reported:
A long time ago a TODO was made in placement:
https://github.com/openstack/nova/blob/faede889d3620f8ff0131a7a4c6b9c1bc844cd06/nova/objects/resource_provider.py#L1837-L1839
We need to implement that TODO; this is a note to self.
This is related to what may be a different bug:
Importance: Undecided
Assignee: Chris Dent (cdent)
Status: In Progress
--
https://bugs.launchpad.net/bugs/1716247
Title
Public bug reported:
nova master as of 20170831
The _set_allocations method used to write allocations to the placement
API will raise a 400 when a resource class results in a NotFound
exception. We want that 400. The problem is that the message associated
with the error uses the resource
** Changed in: nova
Status: Incomplete => Invalid
--
https://bugs.launchpad.net/bugs/1708205
Title:
placement allocation representation asymmetric on
This is effectively a duplicate of #1707071, which has been released,
so I'm going to mark this as such.
** Changed in: nova
Status: Confirmed => Fix Released
--
** Changed in: nova
Status: Fix Committed => Fix Released
--
https://bugs.launchpad.net/bugs/1653122
Title:
Placement API should support DELETE
https://bugs.launchpad.net/nova/+bug/1709902 duplicates this, and that
one has code, so invalidating this one.
** Changed in: nova
Status: Incomplete => Invalid
--
Public bug reported:
(master as of 2017-08-15)
If merging two resources of the same class results in a sum of zero, or
one of the provided keys has a zero value to begin with and appears in
only one of the provided resource dicts, the result dict of resources
will have a zero value
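A sketch of the merge behavior described (illustrative, not the actual nova code):

```python
def merge_resources(first, second):
    # Sum per-class amounts from both dicts; note that zero values are
    # carried through rather than being dropped from the result.
    merged = dict(first)
    for rc, amount in second.items():
        merged[rc] = merged.get(rc, 0) + amount
    return merged


result = merge_resources({"VCPU": 1, "DISK_GB": 0}, {"VCPU": -1})
assert result == {"VCPU": 0, "DISK_GB": 0}  # zero values linger
```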
Public bug reported:
The newly added test_evacuate test in ServerMovingTests is lightly
racy. It seems to fail about 1 in 10 times. A recent failure is at
http://logs.openstack.org/72/489772/2/gate/gate-nova-tox-functional-py35
-ubuntu-xenial/07f4a29/console.html#_2017-08-12_12_51_52_867765
Public bug reported:
Nova master, as of August 6th, 2017 (head is
5971dde5d945bcbe1e81b87d342887abd5d2eece).
If you make multiple instances from one request:
openstack server create --flavor c1 --image $IMAGE --nic net-
id=$NET_ID --min 5 --max 10 x2
and then try to migrate just one of
Public bug reported:
If you make a multi node devstack (nova master as of August 6th, 2017),
or otherwise have multiple compute nodes, all of those compute nodes
will create resource providers and relevant inventory.
Later if you disable one of the compute nodes with a nova service-
disable
.
** Affects: nova
Importance: Medium
Assignee: Chris Dent (cdent)
Status: Confirmed
** Tags: pike-rc-potential placement
--
https
, as updates of those on the
placement server side are relatively infrequent.
We need to balance between doing the updates too often and there being a
gap between when an aggregate change does happen and the map getting
updated.
** Affects: nova
Importance: Low
Assignee: Chris Dent (cdent
Public bug reported:
GET /allocations/{consumer_uuid} is a dict keyed by resource provider
uuid.
PUT /allocations/{consumer_uuid} is an array of anonymous json objects
with 'resource_provider' and 'resources' objects
This asymmetry is undesirable and confusing. It's probably the result of
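Illustrative shapes of the two representations (simplified; the uuid is a made-up example, and the field names reflect the placement API of that era):

```python
RP_UUID = "8e54c5e5-1234-4e6e-9d5f-222222222222"
RESOURCES = {"VCPU": 1, "MEMORY_MB": 512}

# GET response: a dict keyed by resource provider uuid.
get_allocations = {RP_UUID: {"resources": RESOURCES}}

# PUT request: a list of anonymous objects naming the provider explicitly.
put_allocations = [
    {"resource_provider": {"uuid": RP_UUID}, "resources": RESOURCES},
]

assert list(get_allocations) == [RP_UUID]
assert put_allocations[0]["resource_provider"]["uuid"] == RP_UUID
```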
to deal with.
** Affects: nova
Importance: Low
Assignee: Chris Dent (cdent)
Status: In Progress
** Tags: placement
--
https
Public bug reported:
Currently the report client doesn't consistently use an accept header of
'application/json' when making requests of the placement API. This means
that sometimes the bodies of the error responses are in HTML, so
processing and inspection of the error response is
Public bug reported:
The presumption all along has been that when doing a PUT
/allocations/{consumer_uuid} that the body of that request would fully
replace any allocations associated with that consumer.
This has turned out not to be the case. The code [1] to clean up the
current allocations was
be suppressed after 10
occurrences)
(util.ellipses_string(value),))
This is annoying when trying to evaluate test logs. It's noise.
** Affects: nova
Importance: Low
Assignee: Chris Dent (cdent)
Status: In Progress
** Tags: placement
--
Looks like your database connection setup is not correct, either in
nova.conf or with the username and password on the database:
2017-07-19 02:14:14.901 24784 ERROR nova.api.openstack.extensions
OperationalError: (pymysql.err.OperationalError) (1045, u"Access denied
for user 'nova'@'controller'
Public bug reported:
I made a typo while writing some gabbi tests and uncovered a 500 in the
placement service. If you try to allocate to a resource provider that
does not host that class of resource it can have a KeyError during
capacity checking. Given the following gabbi in microversion 1.10:
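A hedged sketch of the failure mode and the obvious fix (not the actual placement code): looking up capacity for a resource class the provider does not host raises KeyError; using .get() (or validating up front) turns that into a clean "no capacity" answer instead of a 500.

```python
capacity_by_class = {"VCPU": 8}  # this provider hosts only VCPU


def has_capacity(resource_class, amount):
    # .get() avoids the KeyError that a direct
    # capacity_by_class[resource_class] lookup raises when the provider
    # does not host the class at all.
    return capacity_by_class.get(resource_class, 0) >= amount


assert has_capacity("VCPU", 4)
assert not has_capacity("DISK_GB", 10)  # a clean "no", not a 500
```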
** Changed in: nova
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1619723
Title:
in placement api an allocation reporter sometimes needs