Public bug reported:
Mohammed reported this in the nova channel today [1] and the RDO cloud
people have run into the same issue too. The deployment got into a
situation where instances would show up in a 'nova list' in
BUILD/scheduling state but could not be deleted. (They show up in
'nova
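A minimal sketch for reproducing the symptom from the API side, assuming
python-novaclient and keystoneauth1 (the credentials and endpoint are
placeholders):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    # Placeholder credentials - substitute real ones.
    auth = v3.Password(auth_url='http://controller:5000/v3',
                       username='admin', password='secret',
                       project_name='admin',
                       user_domain_id='default',
                       project_domain_id='default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    # List servers stuck in BUILD and attempt to delete them; in the
    # reported situation the delete does not remove them.
    for server in nova.servers.list():
        if server.status == 'BUILD':
            print(server.id, getattr(server, 'OS-EXT-STS:task_state', None))
            nova.servers.delete(server)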
Reviewed: https://review.openstack.org/586402
Committed:
https://git.openstack.org/cgit/openstack/nova/commit/?id=8e17c3784ec2a394f01eb6e6a242a921076cc5dc
Submitter: Zuul
Branch: master
commit 8e17c3784ec2a394f01eb6e6a242a921076cc5dc
Author: Matt Riedemann
Date: Thu Jul 26 23:10:58 2018
Public bug reported:
There have been situations where, due to an unrelated issue such as an
RPC or DB problem, the nova_api instance_mappings table can end up with
instances that have cell_id set to NULL, which can cause annoying and
weird behaviour such as undeletable instances.
This seems to
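For illustration, the offending rows can be spotted directly in the
nova_api database; a minimal sketch with SQLAlchemy, assuming a
placeholder connection URL:

    from sqlalchemy import create_engine, text

    # Placeholder connection string for the nova_api database.
    engine = create_engine('mysql+pymysql://nova:secret@controller/nova_api')

    with engine.connect() as conn:
        # Instances whose mapping never got a cell assigned.
        result = conn.execute(text(
            'SELECT instance_uuid FROM instance_mappings '
            'WHERE cell_id IS NULL'))
        for row in result:
            print(row.instance_uuid)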
Public bug reported:
When trying to lock an instance immediately after boot (`nova lock
<server>`) we encounter the following error:
2018-07-27 18:35:40.064 37341 ERROR nova.api.openstack.extensions
[req-25225429-0656-4c61-95d6-8624fe9022ce fb155fef17ca4693af4edf04ec7406d7
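A workaround sketch, not the fix itself: assuming python-novaclient as
above, waiting for the instance to leave its transitional state before
locking avoids the window (the helper name is illustrative):

    import time

    def lock_when_ready(nova, server_id, timeout=300):
        """Poll until the server is ACTIVE, then lock it."""
        deadline = time.time() + timeout
        while time.time() < deadline:
            server = nova.servers.get(server_id)
            if server.status == 'ACTIVE':
                nova.servers.lock(server)
                return
            time.sleep(2)
        raise RuntimeError('server %s never became ACTIVE' % server_id)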
This isn't an issue after all because we move the allocations on the
source node from the instance to the migration *before* we do the copy:
https://github.com/openstack/nova/blob/6be7f7248fb1c2bbb890a0a48a424e205e173c9c/nova/conductor/tasks/live_migrate.py#L82
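For context, "moving" the allocations means re-parenting them in
placement from the instance consumer to the migration UUID; conceptually
something like this sketch, where 'placement' is assumed to be a
keystoneauth1 Adapter for the placement service (microversion, consumer
generations and error handling omitted):

    def move_allocations(placement, instance_uuid, migration_uuid):
        """Re-parent allocations from the instance to the migration."""
        resp = placement.get('/allocations/%s' % instance_uuid)
        allocations = resp.json()['allocations']
        # Claim the same resources under the migration consumer ...
        placement.put('/allocations/%s' % migration_uuid,
                      json={'allocations': allocations,
                            'project_id': 'PROJECT_ID',
                            'user_id': 'USER_ID'})
        # ... then drop them from the instance consumer.
        placement.delete('/allocations/%s' % instance_uuid)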
Looks like this regressed in Queens:
https://review.openstack.org/#/c/507638/29/nova/compute/manager.py@a6289
And I even pointed it out on the review but we didn't think about the
forced live migration case:
https://review.openstack.org/#/c/507638/25/nova/compute/manager.py@6252
** Also
Public bug reported:
***This is purely based on code inspection right now.***
With a forced host live migration, we bypass the scheduler and copy the
instance's resource allocations from the source node to the dest node:
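Concretely, that "copy" leaves the instance consuming on both providers
at once; a concept sketch, again assuming a keystoneauth1 Adapter for
placement (microversion and generation handling omitted):

    def copy_allocations_to_dest(placement, instance_uuid, dest_rp_uuid):
        """Duplicate the instance's allocations onto the dest provider."""
        resp = placement.get('/allocations/%s' % instance_uuid)
        allocations = resp.json()['allocations']
        # Whatever is allocated against the source provider is claimed
        # against the destination provider as well, doubling the usage.
        source_rp_uuid = next(iter(allocations))
        allocations[dest_rp_uuid] = allocations[source_rp_uuid]
        placement.put('/allocations/%s' % instance_uuid,
                      json={'allocations': allocations,
                            'project_id': 'PROJECT_ID',
                            'user_id': 'USER_ID'})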
Public bug reported:
https://review.openstack.org/#/c/560459/ in Rocky changed the libvirt
driver such that if the compute node provider is in a shared storage
provider aggregate relationship (in the same aggregate with a resource
provider that has DISK_GB inventory and the
** Also affects: keystone/ocata
   Importance: Undecided
       Status: New

** Also affects: keystone/queens
   Importance: Undecided
       Status: New

** Also affects: keystone/rocky
   Importance: Critical
     Assignee: Lance Bragstad (lbragstad)
       Status: Fix Released
** Also affects:
Public bug reported:
Instances miss neutron QoS on their ports after unrescue and soft
reboot
Description
===
After some operations with an instance, such as unrescue and soft
reboot, the libvirt domain is recreated, but neutron doesn't set QoS on
the VM's ports. So a user can
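One way to confirm the symptom, as a sketch assuming python-neutronclient
and an authenticated keystoneauth1 session 'sess' as in the earlier
sketch (the port UUID is a placeholder):

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(session=sess)

    port = neutron.show_port('PORT_UUID')['port']
    print('qos_policy_id:', port.get('qos_policy_id'))
    # The port still references its QoS policy here, but after unrescue
    # or soft reboot the libvirt domain interface no longer carries the
    # bandwidth settings (check with 'virsh domiftune <domain> <interface>').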
Public bug reported:
None of the keystone events (like identity.project.created,
identity.project.updated, identity.authenticate.success) are getting
stored in the panko db.
All these events were added to /etc/ceilometer/event_pipeline.yaml.
I tried debugging the problem. Below is the flow I
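For reference, a typical event_pipeline.yaml entry routing those events
to panko looks roughly like this (assuming the deployed ceilometer
version has the panko:// publisher):

    ---
    sources:
        - name: event_source
          events:
              - "identity.project.created"
              - "identity.project.updated"
              - "identity.authenticate.success"
          sinks:
              - event_sink
    sinks:
        - name: event_sink
          transformers:
          publishers:
              - panko://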
Public bug reported:
The OVS agent supports the L2 extension framework.
But the logic processes each device in a single loop [1]; if any L2
extension fails with some unforeseeable error while processing a single
device when ext_manager calls the L2 extension, the error will be caught
in the outer layer and this loop
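The hardening this points toward is catching failures per device instead
of letting them escape to the outer loop; a minimal sketch of the idea,
with illustrative names rather than the actual agent code:

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    def process_devices(ext_manager, context, devices):
        failed_devices = []
        for device in devices:
            try:
                # give each L2 extension a chance to handle the device
                ext_manager.handle_port(context, device)
            except Exception:
                LOG.exception('L2 extension failed for device %s', device)
                failed_devices.append(device)
        # the loop carries on past a bad device; callers can retry these
        return failed_devices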
Public bug reported:
The current openvswitch agent needs to be more robust in more cases.
Please see [1].
This line will clean up all stale ovs flows. Consider a case where the
ovs agent restarts and tries to get the device info it holds (RPC to
the server to get the devices and store them into the local cache if
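The risky shape is roughly this (an illustrative sketch of cookie-based
stale-flow cleanup, not the actual agent code):

    def cleanup_stale_flows(bridge, active_cookies):
        """Delete every flow whose cookie is not in the active set."""
        for flow in bridge.dump_flows():
            if flow.cookie not in active_cookies:
                bridge.delete_flows(cookie=flow.cookie)

    # If the RPC that populates active_cookies failed during restart,
    # active_cookies is empty and this wipes flows that running
    # instances still need.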