On Wed, Feb 07, 2018 at 11:26:14PM +0100, David Hildenbrand wrote:
> On 07.02.2018 16:31, Kashyap Chamarthy wrote:


> Sounds like a similar problem as in
> https://bugzilla.kernel.org/show_bug.cgi?id=198621
> In short: there is no (live) migration support for nested VMX yet. So as
> soon as your guest is using VMX itself ("nVMX"), this is not expected to
> work.

Actually, live migration with nVMX _does_ work, as long as you have
_identical_ CPUs on both source and destination (i.e. use QEMU's
'-cpu host' for the L1 guests).  At least that's been the case in my
experience.  FWIW, I frequently use that setup in my test environments.
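As a quick sanity check before trying this yourself (a sketch; the
/sys path assumes an Intel host with the kvm_intel module loaded), you
can confirm that VMX was actually exposed to L1:

```shell
# On L0: verify nested VMX is enabled in kvm_intel
# (prints "Y" or "1" when nesting is on)
cat /sys/module/kvm_intel/parameters/nested

# Inside L1: confirm the vmx CPU flag made it into the guest,
# i.e. '-cpu host' (or another vmx-capable CPU model) was used
if grep -q -w vmx /proc/cpuinfo; then
    echo "vmx present: this L1 can run L2 guests"
else
    echo "vmx missing: check the L1 CPU model"
fi
```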

Just to be quadruple sure, I did the test: Migrate an L2 guest (with
non-shared storage), and it worked just fine.  (No 'oops'es, no stack
traces, no "kernel BUG" in `dmesg` or serial consoles on L1s.  And I can
login to the L2 guest on the destination L1 just fine.)

Once you have password-less SSH set up between source and destination,
and a bit of libvirt configuration in place, you can run the migrate
command as follows:

    $ virsh migrate --verbose --copy-storage-all \
        --live cvm1 qemu+tcp://root@f26-vm2/system
    Migration: [100 %]
    $ echo $?

Full details:

(At the end of the document above, I also posted the libvirt config and
the version details across L0, L1 and L2.  So this is a fully repeatable
setup.)

libvirt-users mailing list
