Public bug reported:
Description
===
When a target host is preparing a live migration, it creates disks
according to the nova.conf setting CONF.libvirt.images_type
and does not preserve the instance's disk image type, making the live migration fail.
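A minimal sketch of the mismatch, with hypothetical helper names (an illustration only, not nova code):

# Hypothetical sketch: how the target picks the disk backend today
# versus what would preserve the instance's existing format.
def backend_today(conf_images_type, source_disk_format):
    # Per this report, the destination only looks at its own nova.conf.
    return conf_images_type

def backend_preserving(conf_images_type, source_disk_format):
    # Preserve the format the instance's disks already have on the source.
    return source_disk_format or conf_images_type

# Source runs qcow2 disks, destination sets images_type=raw in nova.conf:
assert backend_today("raw", "qcow2") == "raw"          # mismatch, migration fails
assert backend_preserving("raw", "qcow2") == "qcow2"   # disks stay qcow2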
Use case:
===
We have to convert a set of
Public bug reported:
Description
===
When we spawn instances with caching disabled (cache='none') on a file system,
there is a check in the nova code that tests whether the file system supports direct I/O:
https://github.com/openstack/nova/blob/master/nova/privsep/utils.py#L34
Because this test uses 512b
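For context, a sketch of such a probe, modelled on the referenced supports_direct_io() check but with the alignment made a parameter (the buffer size/alignment is the crux here); this is illustrative, not the upstream code:

import mmap
import os

def supports_direct_io(dirpath, align_size=4096):
    # O_DIRECT requires the buffer and the I/O size to be aligned to the
    # filesystem's logical block size; probing with 512 bytes fails on
    # filesystems formatted with 4096-byte sectors even though they do
    # support direct I/O.
    testfile = os.path.join(dirpath, ".directio.test")
    fd = None
    try:
        fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
        buf = mmap.mmap(-1, align_size)  # mmap buffers are page-aligned
        buf.write(b"x" * align_size)
        os.write(fd, buf)
        return True
    except OSError:
        return False
    finally:
        if fd is not None:
            os.close(fd)
        try:
            os.unlink(testfile)
        except OSError:
            pass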
Public bug reported:
Description
===
When we run a live block migration on an instance whose glance image has been deleted,
it may fail with the following logs:
-- nova-compute-log: --
2019-05-10 11:06:27.417 248758 ERROR nova.virt.libvirt.driver
[req-b28b9aca-9135-4258-93a6-a802e6192c60
Public bug reported:
Description
===
It looks like rescue may update instances.root_device_name if the rescue image has a
different disk bus (image property hw_disk_bus) than the instance
(for example, a virtio instance on vda rescued with an image that sets hw_disk_bus=scsi).
This introduces a mismatch between the device name and the driver used for the instance:
during instance config generation,
Public bug reported:
Description
===
After issues in the control plane during instance creation,
instances may stay stuck in the BUILD state.
Even after deleting them, placement allocations may remain,
and the compute host log complains that:
Instance eba20a0f-5856-4600-bcaa-7b758d04b5c5 has
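As a workaround, the leftover allocation can be deleted by consumer (instance) UUID through the placement API, which is what osc-placement's "openstack resource provider allocation delete" does under the hood. A hedged sketch, where PLACEMENT_URL and TOKEN are placeholders you must supply:

import requests

PLACEMENT_URL = "http://placement.example:8778"  # assumption: your placement endpoint
TOKEN = "REPLACE_WITH_ADMIN_TOKEN"               # assumption: a valid keystone token

# DELETE /allocations/{consumer_uuid} drops every allocation held by the
# stuck instance across all resource providers; placement replies 204.
resp = requests.delete(
    PLACEMENT_URL + "/allocations/eba20a0f-5856-4600-bcaa-7b758d04b5c5",
    headers={"X-Auth-Token": TOKEN,
             "OpenStack-API-Version": "placement 1.28"},
)
resp.raise_for_status()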
Public bug reported:
Description
===
The calculation of available disk space on a compute host can be inaccurate,
by a few KB to a few GB,
possibly leading to bad scheduler decisions.
The disk available for a new instance on a specific host is calculated this way:
available_disk_least =
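The formula is cut off above; for reference, my understanding of the libvirt driver's calculation, sketched with illustrative names (not the actual nova code):

def disk_available_least(free_disk_bytes, instance_disks):
    # Each sparse/qcow2 disk can still grow by (virtual size - bytes
    # actually allocated on the filesystem); the sum of that headroom is
    # the over-committed amount subtracted from the free space.
    over_committed = sum(
        d["virtual_size"] - d["disk_size"] for d in instance_disks)
    return free_disk_bytes - over_committed

# Example: 100 GiB free, one 40 GiB qcow2 disk currently using 10 GiB:
GiB = 1024 ** 3
print(disk_available_least(100 * GiB,
                           [{"virtual_size": 40 * GiB,
                             "disk_size": 10 * GiB}]) // GiB)  # -> 70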
Public bug reported:
Description
===
It seems that when the nova-compute process runs an I/O-intensive task on a busy file
system,
it can become stuck and get disconnected from the rabbitmq cluster.
From my understanding, nova-compute does not use true OS multithreading,
but internal python
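A toy reproduction of that model (assuming eventlet green threads, which is what nova-compute uses): anything that blocks at the C level never yields back to the event loop, so the heartbeat green thread is starved for the whole duration:

import eventlet
eventlet.monkey_patch()

import os

def heartbeat():
    # Stand-in for the rabbitmq heartbeat greenthread.
    while True:
        print("heartbeat")
        eventlet.sleep(1)

eventlet.spawn(heartbeat)
eventlet.sleep(0)  # let the heartbeat greenthread start

# os.system() blocks the whole process at the C level and is not
# monkey-patched: no heartbeat is sent until dd finishes, and on a busy
# filesystem that can exceed the rabbitmq heartbeat timeout.
os.system("dd if=/dev/zero of=/tmp/blk bs=1M count=512 oflag=direct")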
Public bug reported:
Description
===
available_disk_least (free disk for a new instance) seems not to be calculated
correctly when the instance is in raw format (images_type=raw) and the
preallocate_images=space option is not set.
This may lead placement/scheduler to take wrong decisions regarding space
I think this one is partially invalid/duplicate/fixed in a release.
I don't think the property hw_scsi_controller=virtio-scsi exists; the correct one
is
hw_scsi_model=virtio-scsi
Because of that, legacy SCSI support is used instead of virtio-scsi, and then we
fall into that bug that has been fixed and
Public bug reported:
Description
===
When we run multiple concurrent interface detach requests on the same instance
uuid/port-id,
a lot of those requests are accepted (HTTP 202) and processed, because info_cache
is updated
only when the first request finishes, so all requests after the first one will
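An illustrative reproduction with python-novaclient and a thread pool (sess, server_id and port_id are assumed to already exist); most of these calls come back 202 because info_cache still lists the port until the first detach completes:

import concurrent.futures

from novaclient import client as nova_client

nova = nova_client.Client("2.1", session=sess)  # assumption: a keystoneauth1 session

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as pool:
    # Fire five detaches for the same port at once.
    futures = [pool.submit(nova.servers.interface_detach, server_id, port_id)
               for _ in range(5)]
    for f in futures:
        f.result()  # each accepted request returned HTTP 202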
Public bug reported:
Description
===
When no more PCI slots are available for a hot-pluggable network interface,
the nova API returns an HTTP 500 internal error, which is not very helpful from the
client's point of view.
It seems that nova catches all libvirt errors and raises:
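A sketch of the kind of handling this suggests (illustrative, not the actual nova code path): match the specific libvirt failure and translate it into a client-visible error instead of a generic 500. Using nova.exception.InterfaceAttachFailed here is an assumption about a suitable mapping:

import libvirt

from nova import exception

def attach_interface(guest, interface_cfg, instance):
    try:
        guest.attach_device(interface_cfg)  # assumption: libvirt guest wrapper
    except libvirt.libvirtError as ex:
        # Translate the "no free PCI slot" case instead of letting every
        # libvirt error surface as HTTP 500.
        if "No more available PCI slots" in str(ex):
            raise exception.InterfaceAttachFailed(instance_uuid=instance.uuid)
        raise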
job is interrupted, preventing the guest from resuming on the target host.
Actual result
===
If the issue happens, the libvirt job continues and brings up the guest on the target
host, while nova still considers it on the source.
** Affects: nova
Importance: Undecided
Assignee: Alexandre arents (aarents)
Status
** Changed in: nova
Status: Triaged => Invalid
https://bugs.launchpad.net/bugs/1832248
Expected result
===
interface should not be attached to guest
Actual result
===
zombie interface is attached to guest
** Affects: nova
Importance: Undecided
Assignee: Alexandre arents (aarents)
Status: New