Public bug reported:
While working on a patch for
https://bugs.launchpad.net/nova/+bug/1946752, I ran into the following
bug.
Description
===========
If a VM is paused and then live-migrated twice, it is lost.
Steps to reproduce
==================
$ openstack server pause <UUID>
(wait until done)
$ openstack server migrate --live-migration <UUID>
(wait until done)
$ openstack server migrate --live-migration <UUID>
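For convenience, the same steps can be scripted. Below is a minimal sketch (not part of the original report) that drives the CLI with subprocess and polls "openstack server show" instead of waiting by hand; the 5-second poll interval, 600-second timeout, and the PAUSED/MIGRATING/ERROR status values are assumptions based on the output shown further down.

#!/usr/bin/env python3
# Reproduction sketch: pause a server, then live-migrate it twice,
# polling the server status between operations instead of waiting manually.
import subprocess
import sys
import time

def server_status(uuid):
    # 'openstack server show -f value -c status' prints only the status field.
    out = subprocess.run(
        ["openstack", "server", "show", "-f", "value", "-c", "status", uuid],
        capture_output=True, text=True, check=True)
    return out.stdout.strip()

def wait_while(uuid, transient, timeout=600):
    # Poll until the server leaves the given transient status (e.g. MIGRATING).
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = server_status(uuid)
        if status != transient:
            return status
        time.sleep(5)
    raise TimeoutError("server %s stuck in %s" % (uuid, transient))

uuid = sys.argv[1]
subprocess.run(["openstack", "server", "pause", uuid], check=True)
wait_while(uuid, "ACTIVE")  # wait until the pause lands (PAUSED expected)
for i in (1, 2):
    subprocess.run(
        ["openstack", "server", "migrate", "--live-migration", uuid],
        check=True)
    # On the second pass the final status ends up ERROR, per this report.
    print("status after migration %d: %s" % (i, wait_while(uuid, "MIGRATING")))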
Expected result
===============
Migration succeeds and the VM is usable afterwards.
Actual result
=============
$ openstack server list
+--------------------------------------+------------------+-----------+------------------------------------------------+---------------------------------+----------+
| ID                                   | Name             | Status    | Networks                                       | Image                           | Flavor   |
+--------------------------------------+------------------+-----------+------------------------------------------------+---------------------------------+----------+
| de2b27d2-345c-45fc-8f37-2fa0ed1a1151 | large2-kickstart | MIGRATING | large2-kickstart-net=10.0.0.25, 185.46.136.254 | Ubuntu Focal 20.04 (2021-09-23) | m1.large |
+--------------------------------------+------------------+-----------+------------------------------------------------+---------------------------------+----------+
$ openstack server list
+--------------------------------------+------------------+--------+------------------------------------------------+---------------------------------+----------+
| ID                                   | Name             | Status | Networks                                       | Image                           | Flavor   |
+--------------------------------------+------------------+--------+------------------------------------------------+---------------------------------+----------+
| de2b27d2-345c-45fc-8f37-2fa0ed1a1151 | large2-kickstart | ERROR  | large2-kickstart-net=10.0.0.25, 185.46.136.254 | Ubuntu Focal 20.04 (2021-09-23) | m1.large |
+--------------------------------------+------------------+--------+------------------------------------------------+---------------------------------+----------+
The VM is now in ERROR state because its libvirt domain has disappeared:
libvirt.libvirtError: Domain not found: no domain with matching uuid 'de2b27d2-345c-45fc-8f37-2fa0ed1a1151'
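The absence of the domain can be confirmed directly against libvirt on the compute hosts. A minimal check using the libvirt Python bindings (not part of the original report) would look like:

# Run on a compute host to check whether libvirt still knows the domain.
import libvirt

UUID = 'de2b27d2-345c-45fc-8f37-2fa0ed1a1151'

conn = libvirt.open('qemu:///system')
try:
    dom = conn.lookupByUUIDString(UUID)
    print('domain present, (state, reason):', dom.state())
except libvirt.libvirtError as ex:
    # This is the same lookup nova performs in _get_domain (see traceback below).
    print('domain missing:', ex)
finally:
    conn.close()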
From nova-compute.log on the target host:
2021-10-18 14:32:34.166 32686 INFO nova.compute.manager [req-f2ce6f2e-bc3e-443a-aaca-ed3dbf536019 8708914a29ce4c92929a9af91e5a33d1 1142d4b9561746bd9e279c43803f50ed - default default] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] Post operation of migration started
2021-10-18 14:32:35.913 32686 ERROR nova.compute.manager [req-f2ce6f2e-bc3e-443a-aaca-ed3dbf536019 8708914a29ce4c92929a9af91e5a33d1 1142d4b9561746bd9e279c43803f50ed - default default] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] Unexpected error during post live migration at destination host.: nova.exception.InstanceNotFound: Instance de2b27d2-345c-45fc-8f37-2fa0ed1a1151 could not be found.
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server [req-f2ce6f2e-bc3e-443a-aaca-ed3dbf536019 8708914a29ce4c92929a9af91e5a33d1 1142d4b9561746bd9e279c43803f50ed - default default] Exception during message handling: nova.exception.InstanceNotFound: Instance de2b27d2-345c-45fc-8f37-2fa0ed1a1151 could not be found.
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/host.py", line 605, in _get_domain
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     return conn.lookupByUUIDString(instance.uuid)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 193, in doit
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     result = proxy_call(self._autowrap, f, *args, **kwargs)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 151, in proxy_call
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     rv = execute(f, *args, **kwargs)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 132, in execute
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     six.reraise(c, e, tb)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     raise value
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/eventlet/tpool.py", line 86, in tworker
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     rv = meth(*args, **kwargs)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/libvirt.py", line 4508, in lookupByUUIDString
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     if ret is None:raise libvirtError('virDomainLookupByUUIDString() failed', conn=self)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server libvirt.libvirtError: Domain not found: no domain with matching uuid 'de2b27d2-345c-45fc-8f37-2fa0ed1a1151'
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 276, in dispatch
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_messaging/rpc/dispatcher.py", line 196, in _do_dispatch
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 79, in wrapped
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     function_name, call_dict, binary, tb)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     self.force_reraise()
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     raise value
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/exception_wrapper.py", line 69, in wrapped
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/utils.py", line 1456, in decorated_function
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 206, in decorated_function
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 8752, in post_live_migration_at_destination
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     'destination host.', instance=instance)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 220, in __exit__
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     self.force_reraise()
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 196, in force_reraise
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     six.reraise(self.type_, self.value, self.tb)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/six.py", line 703, in reraise
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     raise value
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 8747, in post_live_migration_at_destination
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     block_device_info)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 9941, in post_live_migration_at_destination
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     self._reattach_instance_vifs(context, instance, network_info)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 9541, in _reattach_instance_vifs
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     guest = self._host.get_guest(instance)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/host.py", line 589, in get_guest
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     return libvirt_guest.Guest(self._get_domain(instance))
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3/dist-packages/nova/virt/libvirt/host.py", line 609, in _get_domain
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server     raise exception.InstanceNotFound(instance_id=instance.uuid)
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server nova.exception.InstanceNotFound: Instance de2b27d2-345c-45fc-8f37-2fa0ed1a1151 could not be found.
2021-10-18 14:32:36.130 32686 ERROR oslo_messaging.rpc.server
2021-10-18 14:32:47.754 32686 INFO nova.compute.manager [-] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] VM Stopped (Lifecycle Event)
2021-10-18 14:32:49.596 32686 ERROR nova.scheduler.client.report [req-6ee0e137-28ab-40a7-bf29-2da1260ca7b1 - - - - -] [req-bf431b52-6667-4d1b-8f7a-48e19211fbb3] Failed to update inventory to [{'MEMORY_MB': {'total': 128586, 'min_unit': 1, 'max_unit': 128586, 'step_size': 1, 'allocation_ratio': 1.5, 'reserved': 0}, 'VCPU': {'total': 40, 'min_unit': 1, 'max_unit': 40, 'step_size': 1, 'allocation_ratio': 16.0, 'reserved': 0}, 'DISK_GB': {'total': 13124, 'min_unit': 1, 'max_unit': 13124, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 0}}] for resource provider with UUID 7386bced-89d0-452a-8cbb-cf2187a3dbee. Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict ", "code": "placement.concurrent_update", "request_id": "req-bf431b52-6667-4d1b-8f7a-48e19211fbb3"}]}
2021-10-18 14:38:00.913 32686 WARNING nova.compute.manager [req-6ee0e137-28ab-40a7-bf29-2da1260ca7b1 - - - - -] While synchronizing instance power states, found 3 instances in the database and 2 instances on the hypervisor.
2021-10-18 14:40:29.104 32686 INFO nova.compute.manager [req-aa942b4b-e3c0-4cdb-8165-c6a5a6d3790d 58f19303ac3049688eff9d8ef041956d 9c2c9ed68ba242fdb8206bf573aed265 - default default] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] Instance is already powered off in the hypervisor when stop is called.
2021-10-18 14:40:29.165 32686 INFO nova.virt.libvirt.driver [-] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] Instance destroyed successfully.
2021-10-18 14:40:35.365 32686 INFO nova.virt.libvirt.driver [-] [instance: de2b27d2-345c-45fc-8f37-2fa0ed1a1151] Instance destroyed successfully.
2021-10-18 14:40:35.427 32686 INFO os_vif [req-9d755643-a02b-4464-b549-35c6dffd512e 58f19303ac3049688eff9d8ef041956d 9c2c9ed68ba242fdb8206bf573aed265 - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:03:56,bridge_name='br-int',has_traffic_filtering=True,id=3a71aa63-6a39-41d8-9602-04b84834db9e,network=Network(96b506b1-5b9-4ab6-9ef6-8f25c49e9123),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap3a71aa63-6a')
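To summarize the failure path in the traceback: post_live_migration_at_destination in the libvirt driver calls _reattach_instance_vifs, which needs the guest from Host.get_guest/_get_domain; since libvirt on the destination has no domain with this UUID, the lookup raises InstanceNotFound and the instance goes to ERROR. A simplified paraphrase of that chain (illustrative only, not the actual nova source) is:

# Simplified paraphrase of the failing call chain from the traceback above.
# Names mirror nova/virt/libvirt/{host,driver}.py, but this is not the real code.
import libvirt

class InstanceNotFound(Exception):
    """Stand-in for nova.exception.InstanceNotFound."""

def _get_domain(conn, instance_uuid):
    # Host._get_domain translates libvirt's "no domain" error into
    # nova's InstanceNotFound, which is exactly what the log shows.
    try:
        return conn.lookupByUUIDString(instance_uuid)
    except libvirt.libvirtError as ex:
        if ex.get_error_code() == libvirt.VIR_ERR_NO_DOMAIN:
            raise InstanceNotFound(instance_uuid)
        raise

def post_live_migration_at_destination(conn, instance_uuid):
    # The driver reattaches VIFs after migration, which presumes the domain
    # exists on the destination; when it does not, the instance ends up in ERROR.
    domain = _get_domain(conn, instance_uuid)
    # ... reattach VIFs on 'domain' ...
    return domain

The six.reraise/force_reraise frames in the traceback are only re-raise plumbing; the relevant frames are the nova ones shown above.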
Environment
===========
I tested this on Ussuri and can reproduce the problem there.
# dpkg -l | grep nova
ii  nova-common           2:21.2.2-0ubuntu1+syseleven3~bionic~202110151329.gbp05b1cd  all  OpenStack Compute - common files
ii  nova-compute          2:21.2.2-0ubuntu1+syseleven3~bionic~202110151329.gbp05b1cd  all  OpenStack Compute - compute node base
ii  nova-compute-kvm      2:21.2.2-0ubuntu1+syseleven3~bionic~202110151329.gbp05b1cd  all  OpenStack Compute - compute node (KVM)
ii  nova-compute-libvirt  2:21.2.2-0ubuntu1+syseleven3~bionic~202110151329.gbp05b1cd  all  OpenStack Compute - compute node libvirt support
ii  python3-nova          2:21.2.2-0ubuntu1+syseleven3~bionic~202110151329.gbp05b1cd  all  OpenStack Compute Python 3 libraries
ii  python3-novaclient    2:17.0.0-0ubuntu1~cloud0                                    all  client library for OpenStack Compute API - 3.x
We are using QEMU + KVM + libvirt:
ii  libvirt-clients                6.0.0-0ubuntu8.12~cloud0      amd64  Programs for the libvirt library
ii  libvirt-daemon                 6.0.0-0ubuntu8.12~cloud0      amd64  Virtualization daemon
ii  libvirt-daemon-driver-qemu     6.0.0-0ubuntu8.12~cloud0      amd64  Virtualization daemon QEMU connection driver
ii  libvirt-daemon-system          6.0.0-0ubuntu8.12~cloud0      amd64  Libvirt daemon configuration files
ii  libvirt-daemon-system-systemd  6.0.0-0ubuntu8.12~cloud0      amd64  Libvirt daemon configuration files (systemd)
ii  libvirt0:amd64                 6.0.0-0ubuntu8.12~cloud0      amd64  library for interfacing with different virtualization systems
ii  python3-libvirt                6.1.0-1~cloud0                amd64  libvirt Python 3 bindings
ii  qemu-kvm                       1:4.2-3ubuntu6.18+syseleven0  amd64  QEMU Full virtualization on x86 hardware
We use shared storage (Quobyte).
We use Queens with Midonet and Ussuri with OVN.
** Affects: nova
Importance: Undecided
Status: New
Bug link: https://bugs.launchpad.net/bugs/1947725