Public bug reported:
Since [1] nova-live-migration failures can be seen in devstack-subnodes-early.txt.gz like
+ ./stack.sh:main:1158 : is_glance_enabled
+ lib/glance:is_glance_enabled:90 : [[ , =~ ,glance ]]
+ lib/glance:is_glance_enabled:91 : [[
** Changed in: nova
Status: Confirmed => Invalid
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1858877
Title:
Silent wasted storage with multiple RBD backends
*** This bug is a duplicate of bug 1855752 ***
https://bugs.launchpad.net/bugs/1855752
Sorry, I didn't know about this bug when we opened 1855752. The issue
has been fixed under that bug.
** This bug has been marked a duplicate of bug 1855752
Inappropriate HTTP error status from os-server-
*** This bug is a duplicate of bug 1844568 ***
https://bugs.launchpad.net/bugs/1844568
I clearly don't know how to make logstash links properly, but I had that
query open for the past 10 days and saw hits on many different jobs
across multiple projects, including sdk, cinder, and various
netwo
Public bug reported:
There was something similar before [1] but it was 100% and in one job.
This is intermittent and in multiple jobs across multiple projects.
http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22Multiple%20possible%20networks%20found,%20use%20a%20Netw
Public bug reported:
In [1] the tempest-slow-py3 job was dropped and non-redundant bits
folded into the nova-next job.
Except we forgot to move over some of the config necessary to make this
QoS bandwidth test [2] work, so it gets skipped:
setUpClass
(tempest.scenario.test_minbw_allocation_place
> It's possible https://review.openstack.org/#/c/500956/ will help with
this.
It did.
I still think we could stand to figure out how to get rid of
_ContextAuthPlugin, but it's not breaking anything anymore, so closing
this bug out for the time being.
** Changed in: nova
Status: Confirmed
Public bug reported:
stack@nucle:/opt/stack/cyborg$ openstack endpoint list
+--+---+--++-+---+-+
| ID | Region| Service Name | Servi
Public bug reported:
Since 20190910 we've hit this 10x: 8x in functional and 2x in
functional-py36
http://logstash.openstack.org/#dashboard/file/logstash.json?query=message%3A%5C%22OpenStackApiException%20not%20raised%20by%20_set_az_aggregate%5C%22
It looks to be a NoValidHosts caused by
2019-0
Public bug reported:
Following discussion on IRC [1], it would be nice to have a contributor
document describing the differences among the various move-ish
operations -- for purposes of this bug, just rebuild and evacuate -- in
terms of what happens to their allocations, images, UUIDs (instance vs
https://review.opendev.org/#/c/677070/ is merged to blacklist kombu 4.6.4
https://review.opendev.org/#/c/677071/ ought to prevent similar snafus in the future
** Changed in: nova
Status: In Progress => Fix Released
ainst requirements to blacklist kombu 4.6.4.
[0] https://review.opendev.org/#/c/675816/
[1]
http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008553.html
[2] https://review.opendev.org/#/c/677070/
** Affects: nova
Importance: Critical
Assignee: Eric Fried (e
Public bug reported:
Based on noted usage in Nova [1], it appears as though PUT
/v2/ports/{port_id} [2] with a payload like
{
    "port": {
        "foo": ...
    }
}
will update only port.foo on the server, leaving all the other contents
of the port untouched.
That is, you can GET-extract-PUT rather tha
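If that reading is right, the server behaves like a shallow merge on the "port" dict. A minimal sketch of those semantics (illustrative only; apply_port_update, the sample port, and the field names are invented here, not Neutron code):

```python
def apply_port_update(stored_port, payload):
    # Toy model of the server-side merge semantics described above
    # (illustrative only, not Neutron's actual implementation): only
    # the keys present in payload['port'] overwrite the stored port;
    # every other attribute is left untouched.
    updated = dict(stored_port)
    updated.update(payload['port'])
    return updated

port = {'name': 'p1', 'admin_state_up': True, 'description': ''}
result = apply_port_update(port, {'port': {'description': 'new'}})
print(result)
# → {'name': 'p1', 'admin_state_up': True, 'description': 'new'}
```

So a client that only wants to change one attribute can PUT just that attribute, rather than round-tripping the whole port.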
Public bug reported:
Over the weekend (so since about 3/29) the nova-live-migration job has
been failing 100% with the message:
"Multiple possible networks found, use a Network ID to be more specific"
Example: http://logs.openstack.org/12/648912/1/check/nova-live-
migration/48932a5/job-output.tx
Public bug reported:
NB: This comes from code inspection, not observed behavior.
When the compute service is deleted, we attempt to delete from placement
the resource provider associated with the compute node associated with
the service [1].
But ironic deployments can have multiple compute nodes
This was fixed by https://review.openstack.org/#/c/598365/
** Changed in: nova
Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/17
*** This bug is a duplicate of bug 1729621 ***
https://bugs.launchpad.net/bugs/1729621
** This bug has been marked a duplicate of bug 1729621
Inconsistent value for vcpu_used
More core team discussion [1] concluded that, if we're going to do this,
it's going to need a blueprint/spec and most likely a microversion. If
you wish to pursue it, you may register a blueprint here [2] and submit
a spec to the nova-specs repository. More information on this process
can be found
Public bug reported:
I978fdea51f2d6c2572498ef80640c92ab38afe65 /
https://review.openstack.org/#/c/565604/ added placement microversion
1.28, which made various API operations consumer generation-aware. One
of the affected routes was /resource_providers/{u}/allocations - but
this route wasn't cover
Public bug reported:
This is to track [1] so we don't lose sight of it. We first need to
figure out a way to test this scenario to see if this is an issue at
all.
[1]
https://review.openstack.org/#/c/581139/3/nova/api/openstack/placement/util.py@650
** Affects: nova
Importance: Undecided
This bug is still relevant. Excerpt from
https://review.openstack.org/#/c/579163/:
The current behavior
status: 200
{
"allocations": {}
}
is wrong because the response payload doesn't conform to the expected
format, which would contain a consumer_generation, project_id, and
user_id.
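For contrast, a conforming response body would look something like the following (field values invented for illustration; for a consumer that has never had allocations, the generation may be null):

```json
{
    "allocations": {},
    "consumer_generation": null,
    "project_id": "00000000-0000-0000-0000-000000000000",
    "user_id": "00000000-0000-0000-0000-000000000000"
}
```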
This is fixed as part of https://review.openstack.org/#/c/579921/
** Changed in: nova
Status: In Progress => Fix Released
** Changed in: nova
Assignee: Chris Dent (cdent) => Jay Pipes (jaypipes)
Public bug reported:
The resource tracker (in n-cpu) used to be the only place we were
pushing changes to placement, all funneled through a single mutex
(COMPUTE_RESOURCE_SEMAPHORE) to prevent conflicts.
When we started mirroring host aggregates as placement aggregates [1],
which happens in the n
This appears to be a user error:
'htpp://'
is misspelled, should be 'http://'.
This URL comes from nova.conf or the service catalog.
** Changed in: nova
Status: New => Invalid
Public bug reported:
We don't have an API to delete a consumer. It gets created implicitly
when allocations are created against it, but it doesn't get deleted when
the consumer's last allocation is removed. In some uses of placement,
such as nova's, there is a high rate of turnover of consumers
This is invalid for the reason you state in comment #1. More
specifically: when you call the API with versions <2.19, you get the
name as the description; when you call with >=2.19, you get the user-
supplied description or None. We can't change that behavior, per the
rules of API versioning.
So
This was fixed incidentally via
9af073384cca305565e741c1dfbb359c1e562a4e
See related fix.
** Changed in: nova
Status: Confirmed => Fix Released
Public bug reported:
Long-standing code in pypowervm [1] used isinstance(..., str) to help
identify whether an input was a UUID or an integer short ID. This is
used to find SCSI mappings [2] by Instance.uuid [3] when
disconnecting a disk during destroy [4].
Then this change in oslo.versio
This means that, as written,
ResourceClass.normalize_name('ß') will yield 'CUSTOM__' in py2, but
'CUSTOM_SS' in py3.
[1] https://bugs.python.org/issue4610
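The divergence can be reproduced with a simplified sketch of the normalization (assumption: the real ResourceClass.normalize_name may differ in detail, but the uppercase-then-substitute shape is the same):

```python
import re

def normalize_name(name):
    # Simplified sketch of the normalization described above (the real
    # ResourceClass.normalize_name may differ): uppercase the name,
    # replace anything outside [A-Z0-9_] with '_', prefix with CUSTOM_.
    return 'CUSTOM_' + re.sub(r'[^A-Z0-9_]+', '_', name.upper())

print(normalize_name('ß'))
# py3: 'ß'.upper() == 'SS', so this prints 'CUSTOM_SS'
```

Under py2, str.upper() leaves the 'ß' byte alone, so the substitution turns it into '_' and the result is 'CUSTOM__' instead.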
** Affects: nova
Importance: Undecided
Assignee: Eric Fried (efried)
Status: In Progre
Public bug reported:
If the first trait you try to retrieve from placement doesn't exist,
traits are not synced from os_traits into the database, so it winds up
empty.
try:
    rp_obj.Trait.get_by_name(self.ctx, 'CUSTOM_GOLD')
except exception.TraitNotFound:
    pass
pute nodes which
are also associated with a sharing provider.
[1] https://review.openstack.org/#/c/540111/4/specs/rocky/approved/update-provider-tree.rst@48
** Affects: nova
Importance: Undecided
Assignee: Eric Fried (efried)
Status: In Progress
** Tags: placement queens-rc
don't have os-resource-classes (yet), the linkitude should just be
removed.
[1] https://developer.openstack.org/api-ref/placement/#create-resource-class
[2] https://docs.openstack.org/os-traits/latest/
** Affects: nova
Importance: Undecided
Assignee: Eric Fried (efried)
Statu
Public bug reported:
SchedulerReportClient.set_inventory_for_provider uses this logic [1] to
pre-create custom resource classes found in the input inventory.
list(map(self._ensure_resource_class,
         (rc_name for rc_name in inv_data
          if rc_name not in fields.
Public bug reported:
Placement has a few APIs which affect resource provider generation, but
which do not accept the current resource provider generation and
therefore cannot ensure consistency. They are as follows:
DELETE /resource_providers/{u}/inventories
DELETE /resource_providers/{u}/traits
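The consistency hole can be illustrated with a toy provider model (invented for illustration, not placement's implementation): a generation-checked write detects stale readers, while a generation-less delete like the ones above cannot.

```python
class ConflictError(Exception):
    pass

class Provider:
    # Toy resource provider with a generation counter (illustration
    # only, not the placement implementation).
    def __init__(self):
        self.generation = 0
        self.inventories = {'VCPU': 8}

    def put_inventories(self, generation, inventories):
        # Generation-aware write: rejected when the caller's view of
        # the provider is stale.
        if generation != self.generation:
            raise ConflictError('resource provider generation conflict')
        self.inventories = inventories
        self.generation += 1

    def delete_inventories(self):
        # Generation-unaware delete, like the APIs listed above: it
        # succeeds even if another writer changed the provider since
        # this caller last read it, silently clobbering that update.
        self.inventories = {}
        self.generation += 1

p = Provider()
stale = p.generation                            # this client last saw generation 0
p.put_inventories(p.generation, {'VCPU': 16})   # another writer lands; generation -> 1
p.delete_inventories()                          # stale client's delete succeeds anyway
print(p.inventories)                            # {} -- the racing write was clobbered
```

A generation-checked write from the stale client (put_inventories(stale, ...)) would instead raise ConflictError, which is exactly the protection these DELETE routes lack.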
Public bug reported:
SchedulerReportClient._delete_inventory uses the DELETE
/resource_providers/{u}/inventories API, which does not provide a way to
send the generation down (see related bug [1]), and is therefore subject
to concurrency errors.
Until/unless an alternative becomes available, we s
Public bug reported:
Today the report client makes assumptions about how resource provider
generation is calculated by the placement service. Specifically, that
it starts at zero [1], and that it increases by 1 when the provider's
inventory is deleted [2].
While these assumptions happen to be tr
Public bug reported:
https://github.com/openstack/nova/blob/f0d830d56d20c7f34372cd3c68d13a94bdf645a6/nova/scheduler/client/report.py#L295-L302
295 def put(self, url, data, version=None):
296 # NOTE(sdague): using json= instead of data= sets the
297 # media type to application/jso
Public bug reported:
This [1] is clearly wrong.
elif not result:
placement_req_id = get_placement_request_id(result)
LOG.warning('[%(placement_req_id)s] Failed to update inventory '
'for resource provider %(uuid)s: %(status)i %(text)s',
Public bug reported:
POST /resource_providers can fail with conflict (HTTP status 409) for
(at least) two reasons: A provider with the specified UUID exists; *or*
a provider with the specified *name* already exists.
In SchedulerReportClient, _ensure_resource_provider uses helper method
_create_re
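One hedged way for a client to disambiguate the two 409 causes after the fact (classify_conflict is an invented helper, not what SchedulerReportClient actually does) is to re-query by each unique key:

```python
def classify_conflict(get_by_uuid, get_by_name, uuid, name):
    # Hypothetical helper (not SchedulerReportClient code): after a 409
    # from POST /resource_providers, re-query to see which uniqueness
    # constraint tripped. The lookup callables return a provider record
    # or None.
    if get_by_uuid(uuid) is not None:
        return 'uuid-in-use'
    if get_by_name(name) is not None:
        return 'name-in-use'
    return 'unknown'

existing = {'uuid': 'u1', 'name': 'cn1'}
verdict = classify_conflict(
    lambda u: existing if u == 'u1' else None,
    lambda n: existing if n == 'cn1' else None,
    'u2', 'cn1')
print(verdict)   # 'name-in-use': new UUID, but the name is already taken
```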
This got fixed somewhere along the way in The Big Refactor series
(starting https://review.openstack.org/#/c/516778/)
** Changed in: nova
Status: In Progress => Fix Released
Public bug reported:
The placement API GET / returns the version document:
{
    "versions": [
        {
            "min_version": "1.0",
            "max_version": "1.10",
            "id": "v1.0"
        }
    ]
}
However, it requires authentication:
# curl http://9.1.2.3/placement/
{"error": {"message": "Th
** Affects: nova
Importance: Undecided
Assignee: Eric Fried (efried)
Status: In Progress
** Tags: placement
https://bugs.launchpad.net/bugs/173303
Per hangout, we decided this bug is valid - that we would like to get
extra candidates involving shared RPs when those satisfy the request.
** Changed in: nova
Status: Invalid => Confirmed
Public bug reported:
I set up a test scenario with multiple providers (sharing and non),
across multiple aggregates. Requesting allocation candidates gives some
candidates as expected, but some are garbled. Bad behaviors include:
(1) When inventory in a given RC is provided both by a non-sharin
Public bug reported:
If my placement database is set up with only sharing providers (no
"compute nodes"), the results are broken.
Steps to reproduce
==
Here's one example:
SS1 has inventory in IPV4_ADDRESS, SRIOV_NET_VF, and DISK_GB.
SS2 has inventory in just DISK_GB.
Both are a
ssing the
auth_type option. But this is a pretty good first indicator that the
admin forgot to populate auth options in general.
[1] http://paste.openstack.org/show/623721/
** Affects: nova
Importance: Undecided
Assignee: Eric Fried (efried)
Status: In Progress
Public bug reported:
When requesting multiple resources with multiple traits, placement
doesn't know that a particular trait needs to be associated with a
particular resource. As currently conceived, it will return allocation
candidates from the main RP plus shared RPs such that all traits are
sa
Public bug reported:
When both the compute node resource provider and the shared resource
provider have inventory in the same resource class,
AllocationCandidates.get_by_filters will not return an AllocationRequest
including the shared resource provider.
Example:
cnrp { VCPU: 24,
MEMORY
** Changed in: nova-powervm
Importance: Undecided => High
** Changed in: nova-powervm
Status: New => Fix Released
** Changed in: nova-powervm
Assignee: (unassigned) => Eric Fried (efried)
** Also affects: nova-powervm
Importance: Undecided
Status: New
https://bugs.launchpad.net/bugs/1718545
Title:
[vnc]vncserver_proxyclient_addres
Public bug reported:
[0] removed options [vnc]vncserver_proxyclient_address and
[vnc]vncserver_listen without a deprecation period. Lemme splain:
Take for example vncserver_proxyclient_address. It was originally
[DEFAULT]vncserver_proxyclient_address. That was moved to
[vnc]vncserver_proxyclie
** Changed in: nova
Status: In Progress => Fix Released
https://bugs.launchpad.net/bugs/1714284
Title:
Placement user doc: add link to API reference
Meh, since I opened it, might as well use it.
** Changed in: nova
Status: Invalid => New
** Changed in: nova
Assignee: (unassigned) => Eric Fried (efried)
Sorry, opened the wrong bug.
** Changed in: nova
Status: New => Invalid
https://bugs.launchpad.net/bugs/1714283
Title:
Placement API reference: GE
Affects: nova
Importance: Undecided
Assignee: Eric Fried (efried)
Status: In Progress
Public bug reported:
The Placement API user doc [1] says:
API Reference
A full API reference is forthcoming, but until then ...
That reference has since been published [2].
[1] https://docs.openstack.org/nova/pike/user/placement.html#api-reference
[2] https://developer.openstack.org/api-ref
Public bug reported:
GET /resource_providers returns:
{
    "resource_providers": [
        {
            "generation": 39,
            "uuid": "213fd7f8-1e9f-466b-87bf-0902b0b3bc13",
            "links": [
                {
                    "href": "/placement/resource_providers/213fd7f8-1e9f-466b-87bf-0902b0b3bc13",
                    "re
https://review.openstack.org/#/c/498461 (master branch fix) merged on
8/28. Bot didn't auto-update the status, so I did it.
** Changed in: nova
Status: In Progress => Fix Released
missed.
[1] https://review.openstack.org/#/c/356604/
[2] https://docs.openstack.org/nova/pike/admin/pci-passthrough.html
** Affects: nova
Importance: Undecided
Assignee: Eric Fried (efried)
Status: In Progress
Public bug reported:
nova.context._ContextAuthPlugin is (sometimes) being used as the basis
for keystoneauth1 endpoint discovery. With the advent of ksa 3.1.0,
there are some new methods consumers are expecting to be able to run on
an auth via a ksa Session or Adapter. (Note that they were added
Likely same root cause as https://bugs.launchpad.net/nova/+bug/1682195
This is more of a support request than a bug. Please ask for help on
IRC in #openstack.
** This bug is no longer a duplicate of bug 1678493
oslo_messaging.exceptions.MessagingTimeout
** Changed in: nova
Status: New
Public bug reported:
Description
===
The nova.image.glance.GlanceImageServiceV2.download method recently added fsync
[1][2] before closing the download file.
Some hypervisors don't use regular files for download. For example,
PowerVM uses a FIFO pipe, the other end of which is read by a
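Given that constraint, one possible guard is to fsync only descriptors that refer to regular files (a hedged sketch; safe_flush is an invented name, not nova's actual fix):

```python
import os
import stat
import tempfile

def safe_flush(f):
    # Invented guard (not nova's actual fix): flush unconditionally,
    # but only fsync when the descriptor is a regular file -- fsync on
    # a FIFO is not meaningful and can error or block.
    f.flush()
    if stat.S_ISREG(os.fstat(f.fileno()).st_mode):
        os.fsync(f.fileno())

tmp = tempfile.NamedTemporaryFile(mode='w', suffix='.img', delete=False)
tmp.write('downloaded image bytes')
safe_flush(tmp)          # regular file: flushed and fsynced
tmp.close()
with open(tmp.name) as check:
    data = check.read()
os.unlink(tmp.name)
print(data)
```

For a FIFO-backed download, the same call would just flush, leaving the pipe semantics intact.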
Not sure if adding "affects nova" is entirely appropriate, but we
encountered this when testing this patch with the PowerVM driver:
https://review.openstack.org/443189 which is currently based on commit
80af8a9dacf6fbd853b5b0e07ffb8bffac3aa8fa of the nova tree.
After I ran the nova-manage command
Public bug reported:
In attempting to track down build failures in an out-of-tree project, I
submitted a change set to try out the nova gate in mitaka. It failed
the docs build on the vine.five switch in kombu.
The change set is here:
https://review.openstack.org/#/c/392200/
The failure log is
Public bug reported:
1) Create a port with custom binding:profile values, e.g.:
$ neutron port-create --vnic-type direct ab85306c-0861-4ef0-b127-1bedc8fc94f3
-- --binding:profile type=dict vnic_required_vfs=3,capacity=0.06
Created a new port:
+---+---
Seems that all of the rules are being ignored for good (or at least
agreed-upon) reasons.
See https://review.openstack.org/#/c/351253/
** Changed in: nova
Status: New => Invalid
Public bug reported:
tox.ini contains a number of entries in the 'ignore' list for flake8,
allowing developers to kick certain minor infractions down the road,
resulting in creeping technical debt.
Wherever feasible, these ignore entries should be weeded out and their
respective issues resolved i
** Changed in: nova-powervm
Status: Fix Released => Fix Committed
** Changed in: nova-powervm
Assignee: (unassigned) => Drew Thorstensen (thorst)
** Changed in: nova-powervm
Importance: Undecided => Medium
** Also affects: nova
Importance: Undecided
Status: New
** Summary changed:
- nova-powervm no longer loading from out of tree
+ Out-of-tree compute drivers no longer loading