On 12/04/18 10:01, Juergen Gross wrote:
> On 11/04/18 22:32, Olaf Hering wrote:
>> I was testing 'virsh migrate domU host' and did some libvirtd debugging
>> on 'host'. This means the migration was attempted a few times, but did
>> not actually start because libvirtd was in gdb. Not sure if libvirt on
>> the sender does anything with the domU before a connection to the remote
>> host is fully established.
>> Finally I installed the fixed libvirtd on 'host' and started the
>> migration again. This time the sender died like this:
> Unfortunately I can reproduce that easily. Unfortunate, because with my
> XPTI series this happens when I try to migrate again after the first
> failed migration.
> I guess this is related to some missing cleanup when suspending the
> guest fails (e.g. due to a timeout).

Here is more data:

The first migrate attempt failed with:

# xl migrate 1 localhost
migration target: Ready to receive domain.
Saving to migration stream new xl format (info 0x3/0x0/1218)
Loading new save file <incoming migration stream> (new xl fmt info
 Savefile contains xl domain config in JSON format
Parsing config from <saved>
xc: info: Saving domain 1, type x86 PV
xc: info: Found x86 PV domain from Xen 4.11
xc: info: Restoring domain
libxl: error:
Domain 1:guest didn't acknowledge suspend, cancelling request
xc: error: Domain has not been suspended: shutdown 0, reason 255:
Internal error
xc: error: Save failed (0 = Success): Internal error
libxl: error: libxl_stream_write.c:350:libxl__xc_domain_save_done:
Domain 1:saving domain: domain did not respond to suspend request: Success
migration sender: libxl_domain_suspend failed (rc=-8)
xc: error: Failed to read Record Header from stream (0 = Success):
Internal error
xc: error: Restore failed (0 = Success): Internal error
libxl: error: libxl_stream_read.c:850:libxl__xc_domain_restore_done:
restoring domain: Success
libxl: error: libxl_create.c:1265:domcreate_rebuild_done: Domain
2:cannot (re-)build domain: -3
libxl: error: libxl_domain.c:1034:libxl__destroy_domid: Domain
2:Non-existant domain
libxl: error: libxl_domain.c:993:domain_destroy_callback: Domain
2:Unable to destroy guest
libxl: error: libxl_domain.c:920:domain_destroy_cb: Domain 2:Destruction
of domain failed
migration target: Domain creation failed (code -3).
libxl: info: libxl_exec.c:118:libxl_report_child_exitstatus: migration
transport process [2393] exited with error status 1
Migration failed, failed to suspend at sender.

The second attempt immediately produced:

(XEN) sh error: _shadow_prealloc(): Can't pre-allocate 1 shadow pages!
(XEN)   shadow pages total = 6, free = 0, p2m=0
(XEN) Xen BUG at common.c:1315
(XEN) ----[ Xen-4.11-unstable  x86_64  debug=y   Tainted:  C   ]----
(XEN) CPU:    0
(XEN) RIP:    e008:[<ffff82d08033f8ff>]
(XEN) RFLAGS: 0000000000010292   CONTEXT: hypervisor (d0v2)
(XEN) rax: 0000000000000200   rbx: ffff83021e125000   rcx: 0000000000000000
(XEN) rdx: ffff8300dba87fff   rsi: 000000000000000a   rdi: ffff82d0804766b8
(XEN) rbp: ffff8300dba87af8   rsp: ffff8300dba87aa8   r8:  ffff830217f78000
(XEN) r9:  0000000000000001   r10: 0000000000000000   r11: 0000000000000001
(XEN) r12: 0000000000000020   r13: 0000000000000000   r14: ffff82d0805bfff8
(XEN) r15: ffff8300dba87fff   cr0: 0000000080050033   cr4: 00000000001526e0
(XEN) cr3: 00000000d0747000   cr2: 00007f231d31f272
(XEN) fsb: 00007f231d93f700   gsb: ffff88020f700000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen code around <ffff82d08033f8ff>
(XEN)  00 00 00 e8 f4 29 f1 ff <0f> 0b f6 40 10 40 0f 85 2d fb ff ff e9 21 fb ff
(XEN) Xen stack trace from rsp=ffff8300dba87aa8:
(XEN)    ffff8300dba87ac0 000000011e125000 0000000000000000 0000000000000000
(XEN)    0000000000000000 ffff8300db7fe000 ffff83021e125000 0000000000000002
(XEN)    ffff82d0803e77c0 0000000000000195 ffff8300dba87b08 ffff82d08033f9a6
(XEN)    ffff8300dba87b48 ffff82d08034325b 00007f231d960004 ffff8300db7fe000
(XEN)    0000000000001000 0000000000000000 00007f231d960004 ffff830217f3b000
(XEN)    ffff8300dba87b68 ffff82d0803435a2 ffff8300dba87b78 ffff83021e125000
(XEN)    ffff8300dba87b88 ffff82d08034391a ffff83021e125000 ffff83021e125650
(XEN)    ffff8300dba87bb8 ffff82d080343a33 0000000000000001 ffff83021e125000
(XEN)    0000000000000001 0000000000000000 ffff8300dba87bd8 ffff82d080321f47
(XEN)    ffff83021e125000 ffff8300dba87d98 ffff8300dba87c68 ffff82d080322943
(XEN)    ffff8300dba87c98 ffff82d08022df69 80100000c06e5067 ffff8300dba87fff
(XEN)    ffff8300d8dfe000 0000000000000001 ffff830217f3b000 0000000400000025
(XEN)    0000000000000008 0000000000000000 0000000000000000 ffff8300dba87d88
(XEN)    ffff83021e125000 00007f231d960004 00007f231d960004 ffff830217f3b000
(XEN)    ffff8300dba87d28 ffff82d080272a4c ffff8300dba87d08 ffff82d080290269
(XEN)    0000000000000206 00000000000c0733 0000000000000000 ffff8300dba87cd0
(XEN)    0000000000000000 ffff830217f3b000 ffff8200400088f8 0000000000000000
(XEN)    ffff8300dba87d08 0000000000000000 0000000000000009 0000000000000206
(XEN)    ffff820040008000 0000000000000000 0000000000000292 ffff83021e125000
(XEN)    0000000000000000 00007f231d960004 0000000000000000 deadbeefdeadf00d
(XEN) Xen call trace:
(XEN)    [<ffff82d08033f8ff>] common.c#_shadow_prealloc+0x5b1/0x638
(XEN)    [<ffff82d08033f9a6>] shadow_prealloc+0x20/0x22
(XEN)    [<ffff82d08034325b>] common.c#sh_update_paging_modes+0xf5/0x3de
(XEN)    [<ffff82d0803435a2>] common.c#sh_new_mode+0x5e/0x6e
(XEN)    [<ffff82d08034391a>] common.c#shadow_one_bit_enable+0xd3/0xf2
(XEN)    [<ffff82d080343a33>] common.c#sh_enable_log_dirty+0xfa/0x14d
(XEN)    [<ffff82d080321f47>] paging_log_dirty_enable+0x47/0x61
(XEN)    [<ffff82d080322943>] paging_domctl+0x1cc/0xaca
(XEN)    [<ffff82d080272a4c>] arch_do_domctl+0x219/0x2648
(XEN)    [<ffff82d080206cb4>] do_domctl+0x1872/0x1bce
(XEN)    [<ffff82d08036c2aa>] pv_hypercall+0x1f4/0x43e
(XEN)    [<ffff82d0803734a5>] lstar_enter+0x115/0x120
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Xen BUG at common.c:1315
(XEN) ****************************************

