Other APIs that internally use QEMU migration and need to temporarily
suspend a domain already report failure to resume vCPUs by setting
VIR_DOMAIN_PAUSED_API_ERROR state reason and emitting
VIR_DOMAIN_EVENT_SUSPENDED event with
VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR.

Let's do the same in qemuMigrationSrcRestoreDomainState for consistent
behavior.

Signed-off-by: Jiri Denemark <[email protected]>
---
 src/qemu/qemu_migration.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 2720f0b083..efec1b3be6 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -251,6 +251,16 @@ qemuMigrationSrcRestoreDomainState(virQEMUDriver *driver, virDomainObj *vm)
             * overwrite the previous error, though, so we just throw something
             * to the logs and hope for the best */
            VIR_ERROR(_("Failed to resume guest %s after failure"), vm->def->name);
+            if (virDomainObjGetState(vm, NULL) == VIR_DOMAIN_PAUSED) {
+                virObjectEvent *event;
+
+                virDomainObjSetState(vm, VIR_DOMAIN_PAUSED,
+                                     VIR_DOMAIN_PAUSED_API_ERROR);
+                event = virDomainEventLifecycleNewFromObj(vm,
+                                                          VIR_DOMAIN_EVENT_SUSPENDED,
+                                                          VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR);
+                virObjectEventStateQueue(driver->domainEventState, event);
+            }
             goto cleanup;
         }
         ret = true;
-- 
2.39.2
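
For context, a minimal sketch of how a management client might react to the event this patch emits. The enum names mirror libvirt's, but the values below are illustrative placeholders, not the real libvirt constants; an actual client would use the ones from the libvirt bindings.

```python
# Sketch of client-side handling for the SUSPENDED/API_ERROR event.
# The values here are placeholders for illustration; real code would use
# the constants exported by the libvirt Python bindings.
VIR_DOMAIN_EVENT_SUSPENDED = "suspended"            # placeholder value
VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR = "api-error"  # placeholder value

def needs_manual_resume(event, detail):
    """Return True when a lifecycle event reports that an API call
    (such as migration) failed to resume the domain's vCPUs, leaving
    the guest paused."""
    return (event == VIR_DOMAIN_EVENT_SUSPENDED and
            detail == VIR_DOMAIN_EVENT_SUSPENDED_API_ERROR)
```

A client that sees this event/detail combination knows the guest is still defined but paused, and can decide to retry resuming it itself.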
