Public bug reported:
When both PFs and VFs are whitelisted, assigning an SR-IOV VF device to an
instance correctly marks the parent PF as unavailable once one of its VFs is
assigned. However, when we delete the instance, the PF is not marked as
available again.
Steps to reproduce:
1) Whitelist PFs and VFs in nova.conf
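The missing transition can be illustrated with a toy model (hypothetical class, not Nova's PciDevice code): allocating a VF should make the parent PF unavailable, and freeing the last allocated VF should make it available again - the step the report says is skipped on instance delete.

```python
# Toy model of PF/VF availability; class and method names are illustrative.
class ParentPF:
    def __init__(self):
        self.allocated_vfs = set()

    @property
    def available(self):
        # a PF is only assignable when none of its VFs are in use
        return not self.allocated_vfs

    def allocate_vf(self, vf_addr):
        self.allocated_vfs.add(vf_addr)

    def free_vf(self, vf_addr):
        # on instance delete this is the transition that must happen
        self.allocated_vfs.discard(vf_addr)
```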
Public bug reported:
Enable PCI passthrough on a compute host (whitelist devices explained in
more detail in the docs), and create a network, subnet and a port that
represents a SR-IOV physical function passthrough:
$ neutron net-create --provider:physical_network=phynet
Public bug reported:
libvirt driver methods that are used for determining whether a port is
an SR-IOV port do not check properly for all possible SR-IOV port types:
https://github.com/openstack/nova/blob/f15d9a9693b19393fcde84cf4bc6f044d39ffdca/nova/virt/libvirt/driver.py#L3378
should be
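The intended replacement is truncated above; as a hedged sketch of the kind of check implied, an "is this an SR-IOV port" test should cover every direct-passthrough VNIC type, not just one. The constant values below follow Neutron's portbindings vnic_type names; the helper itself is illustrative.

```python
# VNIC types that correspond to SR-IOV passthrough in Neutron portbindings.
SRIOV_VNIC_TYPES = ('direct', 'macvtap', 'direct-physical')

def is_sriov_port(vnic_type):
    """Illustrative check covering all SR-IOV VNIC types."""
    return vnic_type in SRIOV_VNIC_TYPES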
** Changed in: nova
Status: Fix Released => Confirmed
--
You received this bug notification because you are a member of Yahoo!
Engineering Team, which is subscribed to OpenStack Compute (nova).
https://bugs.launchpad.net/bugs/1543149
Title:
Reserve host pages on compute nodes
Status
Public bug reported:
The following change adds an online data migration to the PciDevice
object.
https://review.openstack.org/#/c/249015/ (50355c45)
When we do that, we normally want to couple it with a script that will allow
operators to run the migration code even for rows that do not
This seems to be by design, i.e. the scheduler can get out of sync, and we
have the claim-and-retry mechanism in place, so the request for vm3 would
fail and trigger a reschedule.
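The claim-and-retry flow can be sketched schematically (function and host names are illustrative, not Nova's scheduler code): the scheduler picks a host from possibly stale data, the compute node's claim is the authoritative check, and a failed claim triggers a reschedule onto the next candidate.

```python
# Schematic claim-and-retry: try candidates in order until a claim succeeds.
def schedule_with_retry(candidate_hosts, try_claim):
    for host in candidate_hosts:
        if try_claim(host):  # authoritative resource claim on the host
            return host
    raise RuntimeError("no valid host found")
```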
** Changed in: nova
Status: Confirmed => Invalid
--
Yes, as discussed - that is to be expected. Closing the bug for now.
Feel free to reopen if you feel it needs more looking into.
** Changed in: nova
Status: Incomplete => Invalid
--
Public bug reported:
Calculating CPU pinning for an instance for the host with hyperthreading
fails in certain cases. Most notably when the instance has an odd number
of CPUs, due to a bug in the logic we might either fail to pin entirely
or end up avoiding siblings by accident, although the
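A toy packing function (a simplification, not Nova's actual algorithm) illustrates the odd-vCPU corner case: packing vCPUs pairwise onto hyperthread sibling pairs leaves one unpaired vCPU, which buggy logic could fail to place or could place so that siblings are avoided by accident.

```python
# Greedily place vCPUs onto hyperthread sibling pairs, in order.
def pack_onto_siblings(num_vcpus, sibling_pairs):
    pinning, vcpu = [], 0
    for pair in sibling_pairs:
        for pcpu in pair:
            if vcpu == num_vcpus:
                return pinning
            pinning.append((vcpu, pcpu))
            vcpu += 1
    # not enough host CPUs for the requested count
    return pinning if vcpu == num_vcpus else None

# 3 vCPUs on two sibling pairs: the third vCPU half-fills the second pair.
```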
Public bug reported:
It is possible for libvirt to report libvirt.VIR_DOMAIN_JOB_UNBOUNDED
in _live_migration_monitor
(https://github.com/openstack/nova/blob/ccea5d6b0ace535b375d3e63bd572885cb5dbc91/nova/virt/libvirt/driver.py#L5823)
but return 0s for data_remaining, which in turn makes out
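A hedged sketch of the guard this implies: when libvirt reports an unbounded job but zero totals, treat progress as unknown rather than computing a bogus "complete" value. The function name is illustrative, not the driver's real code.

```python
# Compute migration progress, guarding against empty libvirt stats.
def migration_progress(data_total, data_remaining):
    if data_total == 0:
        return None  # libvirt has not produced meaningful stats yet
    return 100 * (data_total - data_remaining) // data_total
```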
Public bug reported:
https://review.openstack.org/#/c/200485/ patch makes rebuild use
migration context added earlier in Liberty for proper resource tracking
when doing rebuild/evacuate.
Sadly the above patch missed that we need to make sure we set the proper
data from the context when calling
** Changed in: nova
Status: New => Invalid
--
https://bugs.launchpad.net/bugs/1496135
Title:
libvirt live-migration will not honor destination
Public bug reported:
All of these are being reported upon code inspection - I have yet to
confirm all of these as they are in fact edge cases and subtle race
conditions:
* We update the instance.host field to the value of the destination_node
in resize_migration which runs on the source host.
Moving this to Invalid - but please feel free to move back if you
disagree.
** Changed in: nova
Status: In Progress => Invalid
--
Public bug reported:
Reporting this based on code inspection of the current master (commit:
9f61d1eb642785734f19b5b23365f80f033c3d9a)
When we attempt to live-migrate an instance onto a host that has a
different vcpu_pin_set than the one that was on the source host, we may
either break the policy
Not sure why this was moved to Won't Fix - the fix is up and has a +2.
Moving it back.
** Changed in: nova
Status: Won't Fix => In Progress
--
I think Alex was saying that this needs to be fixed in the openstack-
client, not Nova client. Nova client does the right thing for what the
server expects, it's the unified client that gets it wrong.
** Also affects: python-openstackclient
Importance: Undecided
Status: New
** Changed
Public bug reported:
NovaObjectSerializer will call obj_from_primitive, and tries to guard
against IncompatibleObjectVersion in which case it will call on the
conductor to backport the object to the highest version it knows about.
See:
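The link after "See:" is truncated; separately, the backport path described can be sketched schematically (hypothetical names and classes, not the real oslo.versionedobjects API): deserialization catches the version mismatch and asks the conductor to downgrade the primitive to a version it knows.

```python
# Hypothetical sketch of the serializer's backport path.
class IncompatibleObjectVersion(Exception):
    pass

def deserialize(primitive, known_version, conductor_backport):
    try:
        if primitive['version'] != known_version:
            raise IncompatibleObjectVersion(primitive['version'])
        return primitive
    except IncompatibleObjectVersion:
        # ask the conductor to downgrade to a version we understand
        return conductor_backport(primitive, known_version)
```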
** Also affects: oslo.versionedobjects
Importance: Undecided
Status: New
--
https://bugs.launchpad.net/bugs/1275675
Title:
Version change in
Public bug reported:
The following commit:
https://review.openstack.org/#/c/140289/4/nova/objects/pci_device.py
failed to bump the PciDeviceList version.
We should do it now (master @ 4bfb094) and backport this to stable Kilo
as well
** Affects: nova
Importance: High
Status:
Public bug reported:
$ nova boot --image cirros-0.3.4-x86_64-uec --flavor 1 --block-device
source=blank,dest=volume testvm-blank
The above line would be accepted as a valid boot request, but no blank
volume would be created. The reason is that:
Public bug reported:
The libvirt driver needs to use its own logic for determining the device
name that will be persisted in Nova, instead of the generic methods in
nova.compute.utils, since libvirt cannot really assign the device name
to a block device of an instance (it is treated as an ordering hint
So as commented on the patch - I really think that we need to make sure
that whatever gets created also gets cleaned up on errors - while the
patch https://review.openstack.org/166695 has some good ideas.
What I also noticed (when I was testing this some time ago) is that what
really happens
** Changed in: nova
Status: In Progress => Invalid
--
https://bugs.launchpad.net/bugs/1435748
Title:
save method is getting called two times in
A patch that does a partial revert of https://review.openstack.org/49455
from comment #16 is under discussion at the time of writing, so I am
linking it here.
https://review.openstack.org/#/c/175742/
Basically - just checking quotas and not reserving them is a bit of a
fool's errand. We
Public bug reported:
The following patch, which introduces the method, for some reason
completely missed the Blank volume type:
https://review.openstack.org/#/c/150090/
** Affects: nova
Importance: Undecided
Status: New
--
Public bug reported:
Because loopingcall.py uses time.time for recording wall-clock time, and
time.time is not guaranteed to be monotonic, a clock drift into the future
that later gets corrected will block all the timers until the actual time
reaches the moment of the original
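The underlying fix is to use a monotonic clock for interval timing. A minimal sketch, assuming the goal is elapsed-time measurement that survives wall-clock corrections: time.monotonic() cannot jump backwards, while time.time() can. The helper name is hypothetical.

```python
import time

# Interval timer based on the monotonic clock, immune to wall-clock steps.
def make_elapsed_timer():
    start = time.monotonic()
    return lambda: time.monotonic() - start

elapsed = make_elapsed_timer()
# elapsed() stays small and non-negative even if the system clock is
# stepped backwards by NTP in the meantime.
```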
*** This bug is a duplicate of bug 1415768 ***
https://bugs.launchpad.net/bugs/1415768
** This bug has been marked a duplicate of bug 1415768
the PCI device assigned to the instance is inconsistent with the DB record
when restarting nova-compute
--
** Changed in: nova
Status: In Progress => Fix Released
--
https://bugs.launchpad.net/bugs/1383465
Title:
[pci-passthrough] nova-compute fails to
.
** Affects: nova
Importance: High
Assignee: Nikola Đipanov (ndipanov)
Status: Confirmed
** Tags: kilo-rc-potential
** Tags added: kilo-rc-potential
** Changed in: nova
Status: New => Confirmed
** Changed in: nova
Importance: Undecided => High
--
It is enough to specify the --boot-volume option (see
https://wiki.openstack.org/wiki/BlockDeviceConfig for more details about
the block device mapping syntax).
Setting max_local_block_devices to 0 means that any request that
attempts to create a local disk will fail. This option is meant to limit
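As a hedged illustration of the setting described above, a nova.conf fragment; with this value any boot request that would create a local disk is rejected, so boot-from-volume becomes mandatory:

```ini
[DEFAULT]
# 0 disallows local disks entirely, forcing boot-from-volume
max_local_block_devices = 0
```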
Public bug reported:
A user reports:
Nova's block device mappings can become invalid/inconsistent if errors
are encountered while calling for Cinder to attach a volume.
2014-12-18 11:14:41.594 19473 ERROR nova.compute.manager
[req-6f65b7d5-0930-4adf-9b5f-dd20eb1a707e
Public bug reported:
The issue happens when multiple scheduling attempts that request CPU pinning
are done in parallel.
2015-03-25T14:18:00.222 controller-0 nova-scheduler err Exception during
message handling: Cannot pin/unpin cpus [4] from the following pinned
set [3, 4, 5, 6, 7, 8, 9]
Public bug reported:
The following commit removed the code in the python nova client that
would add an image block device mapping entry (source_type: image,
destination_type: local) in preparation for fixing
https://bugs.launchpad.net/nova/+bug/1377958.
However this makes some valid instance
Public bug reported:
More in-depth discussion can be found here:
http://lists.openstack.org/pipermail/openstack-dev/2015-February/056695.html
Basically - there is a number of filters that need to be re-run even if
we force a host. The reasons are two-fold. Placing some instances on
some hosts
Public bug reported:
https://review.openstack.org/#/c/122557/7/nova/scheduler/utils.py broke
this by removing the two lines that made sure extra_specs are dug up
from the DB before adding the instance_type to the request_spec, which
eventually gets passed as part of the
Public bug reported:
This came up when analyzing https://bugs.launchpad.net/nova/+bug/1371677
and there is a lot of information there. The bug, in short, is that
_get_instance_disk_info relies on DB information to filter out the
volumes from the list of disks it gets from the libvirt XML, but due to
Public bug reported:
As part of the blueprint https://blueprints.launchpad.net/nova/+spec
/serial-ports we introduced an API extension and a websocket proxy
binary. The problem with the two is that a lot of the stuff was copied
verbatim from the novnc-proxy API and service, which relies heavily on
Public bug reported:
Looking at this branch of the NUMA fitting code
https://github.com/openstack/nova/blob/51de439a4d1fe5e17d59d3aac3fd2c49556e641b/nova/virt/libvirt/driver.py#L3738
We do not account for allowed CPUs when choosing viable cells for the
given instance, meaning we could choose a
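The missing step amounts to an intersection: the CPUs usable in a NUMA cell should be the cell's CPUs restricted to the allowed set (e.g. vcpu_pin_set). A one-line illustrative sketch, with a hypothetical function name:

```python
# Restrict a NUMA cell's CPUs to the host's allowed CPU set.
def usable_cpus(cell_cpus, allowed_cpus):
    return set(cell_cpus) & set(allowed_cpus)
```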
Now that https://bugs.launchpad.net/nova/+bug/1369984 is fixed - we can
mark this as invalid.
** Changed in: nova
Status: Confirmed => Invalid
--
Public bug reported:
When we resize (change the flavor) of an instance that has a NUMA
topology defined, the NUMA info from the new flavor will not be
considered during scheduling. The instance will get re-scheduled based
on the old NUMA information, but the claiming on the host will use the
new
Public bug reported:
Libvirt reports even single NUMA nodes in its hypervisor capabilities
(which we use to figure out whether a compute host is a NUMA host). This is
technically correct, but in Nova we assume that to mean no NUMA
capabilities when scheduling instances.
Right now we just pass what
Importance: High
Assignee: Nikola Đipanov (ndipanov)
Status: In Progress
--
https://bugs.launchpad.net/bugs/1369502
Title:
NUMA topology
Public bug reported:
This was reported by Michael Turek as he was testing this while the
patches were still in flight. See:
https://review.openstack.org/#/c/114938/26/nova/virt/hardware.py
As described on there - the code there makes a bad assumption about the
format in which it will get the data
Public bug reported:
Since this change https://review.openstack.org/#/c/98607/, if the
conductor sends back a field of type ListOfObjects field in the updates
dictionary after a remotable decorator has called the object_action RPC
method, restoring them into objects will fail since they will
Public bug reported:
Following change https://review.openstack.org/#/c/114594 adds checking
for related versions of objects. This is, imho, wrong because it will make
for unnecessary versioning code that will need to be written by
developers. A better way to do this would be to declare the version on the
Public bug reported:
The libvirt driver will attempt to connect the volume on the hypervisor
twice for every volume provided to the instance when booting. If you
examine the libvirt driver's spawn() method, both _get_guest_xml (by
means of get_guest_storage_config) and _create_domain_and_network
Public bug reported:
This is a spin-off of https://bugs.launchpad.net/nova/+bug/1347028
As per the example given there - currently source=blank,
destination=volume will not work. We should either make it create an
empty volume and attach it, or disallow it in the API.
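A hedged sketch of the second option mentioned (rejecting the combination at the API layer until blank-volume creation is supported); `validate_bdm` is a hypothetical helper, not Nova's API code, and the dict shape mirrors block_device_mapping_v2 entries:

```python
# Reject the currently-broken blank->volume mapping at the API boundary.
def validate_bdm(bdm):
    if (bdm.get('source_type') == 'blank'
            and bdm.get('destination_type') == 'volume'):
        raise ValueError("source=blank,dest=volume is not supported")
    return bdm
```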
** Affects: nova
All of this is by design - the image field on the instance means that the
instance was started with that particular image. If the volume was
created from an image at any point, and an instance was booted from that
volume at a later stage - it may or may not have anything to do with the
image, so setting
** No longer affects: horizon
--
https://bugs.launchpad.net/bugs/1337821
Title:
VMDK Volume attach fails while attaching to an instance that is booted
*** This bug is a duplicate of bug 1255449 ***
https://bugs.launchpad.net/bugs/1255449
Ah, so it looks like this is actually fixed in Icehouse - we just need to
backport it to Havana. See https://bugs.launchpad.net/nova/+bug/1255449
and the related fix.
Let me close this as a duplicate of that.
** Also affects: nova
Importance: Undecided
Status: New
** Changed in: nova
Importance: Undecided => Medium
--
Public bug reported:
When there is a race for a volume between 2 or more instances, it is
possible for more than one to pass the API check. All of them will get
scheduled as a result, and only one will actually successfully attach
the volume, while others will go to ERROR.
This is not ideal
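A minimal model of the race and of the fix it suggests (an illustrative class, not Nova or Cinder code): both instances pass a naive "is it available?" check, but only one can win an atomic reserve step, modelled here with a lock; in practice Cinder's reserve call plays this role.

```python
import threading

# Volume whose reserve() is an atomic check-and-set.
class Volume:
    def __init__(self):
        self.status = 'available'
        self._lock = threading.Lock()

    def reserve(self):
        # only the first caller succeeds; the rest fail fast at the API
        # instead of failing later at attach time
        with self._lock:
            if self.status != 'available':
                return False
            self.status = 'attaching'
            return True
```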
I'd say this is a reasonable thing to propose, although since forcing in
Cinder is an admin-only command, I am thinking this should be as well.
Also I fear there could be edge cases where we really should not allow
even the force detach (see https://bugs.launchpad.net/nova/+bug/1240922
where we
As discussed on several proposed patches around this (see
https://review.openstack.org/#/c/80619/ or
https://review.openstack.org/#/c/80619/ which actually rejects this
solution).
I will move this bug to won't fix, and will raise a BP targeted for
Juno to use some of the code added in
Similar to bug https://bugs.launchpad.net/nova/+bug/1280357, I think
marking this one as a won't fix and getting the cinder interactions with
events done early in juno makes the most sense to me here.
** Changed in: nova
Status: Confirmed => Won't Fix
--
You received this bug
Public bug reported:
The reason is two-fold:
* wrap_instance_fault decorator expects the argument to be 'instance'
* We are using new-world objects in live migration, and instance_ref used to
imply a dict.
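The first point can be shown with a simplified reconstruction (hypothetical, not Nova's exact code): the decorator looks for an 'instance' keyword argument, so a caller that still passes 'instance_ref' bypasses fault recording.

```python
import functools

faults = []

def record_fault(instance):
    faults.append(instance)

# Decorator that records a fault against the 'instance' kwarg.
def wrap_instance_fault(fn):
    @functools.wraps(fn)
    def wrapper(self, context, *args, **kwargs):
        try:
            return fn(self, context, *args, **kwargs)
        except Exception:
            # None when the caller used 'instance_ref' instead
            record_fault(kwargs.get('instance'))
            raise
    return wrapper

class Manager:
    @wrap_instance_fault
    def live_migration(self, context, instance_ref=None):
        raise RuntimeError("boom")
```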
** Affects: nova
Importance: Medium
Assignee: Nikola Đipanov (ndipanov)
** Changed in: nova
Status: In Progress => Invalid
--
https://bugs.launchpad.net/bugs/1180040
Title:
Race condition in attaching/detaching volumes
Since the https://blueprints.launchpad.net/nova/+spec/improve-block-device-handling
blueprint has been implemented, it is now possible to both boot instances
and attach volumes without specifying device names, in which case the
device names will be handled properly by Nova.
It is still possible to
Public bug reported:
After the port of Nova to oslo.messaging
(https://review.openstack.org/#/c/39929), the graceful shutdown of services
introduced by https://blueprints.launchpad.net/nova/+spec/graceful-shutdown
in I-1 got broken.
In order to make this work again, we need to make sure that Nova
Ok, so I've looked at this and it seems to work as expected now:
$ for i in {1..5}; do cinder create --display-name volume_$i 1; done
$ cinder list
+--+---+--+--+-+--+-+
| ID |
Public bug reported:
The _default_block_device_names method of the compute manager would call
the conductor block_device_mapping_update method with the wrong
arguments, causing a TypeError and ultimately the instance to fail.
This bug happens only when using a driver that does not provide its own