I believe there is an issue with online migration from qemu 2.6.0; I hit it 
too. Online migration from the latest 2.6.0 to the latest 2.9.0 worked for me 
with no random VM hangs.
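Before migrating, it is worth confirming exactly which qemu-kvm-rhev builds the two hosts run, since behaviour differs between the 2.6.0 stream and later ones. A minimal sketch — the `qemu_version` helper is illustrative, not an oVirt tool; `rpm -q qemu-kvm-rhev` is the standard package query on a RHEL/RHV host:

```shell
# Hypothetical helper: pull the upstream QEMU version out of an RPM NVR
# such as "qemu-kvm-rhev-2.10.0-21.el7_5.4" -> "2.10.0".
qemu_version() {
    printf '%s\n' "$1" | sed -n 's/^qemu-kvm-rhev-\([0-9.]*\)-.*$/\1/p'
}

# On each host (commented out here; needs the package installed):
#   rpm -q qemu-kvm-rhev
```

Comparing the extracted versions on source and destination before a live migration makes it obvious whether you are crossing the 2.6.0 boundary discussed in this thread.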

On 11/10/2018, 07:58, "t HORIKOSHI" <t-horiko...@ijk.com> wrote:

    Hi, I'm using RHV.
    When live-migrating a guest VM (CentOS 6) from an RHV 4.0 host to an RHV 
4.2.6 host, the VM hung up immediately after the migration.
    Logging in to the RHV host over ssh and running the top command showed the 
CPU usage of the corresponding VM's qemu-kvm process close to 400%. (I 
assigned 4 CPUs to this VM.)
    The only way to recover from the hang was to power the VM off.
    This problem has already occurred on 5 VMs (kernel-2.6.32-504.3.3.el6, 
kernel-2.6.32-504.el6, kernel-2.6.32-358.14.1.el6, kernel-2.6.32-642.15.1.el6 
*2).
    It does not occur with CentOS 7 or Ubuntu guests.
    
    Why does this problem occur?
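When a migrated VM hangs with qemu-kvm pegged near 100% per vCPU, the first useful data point is which threads are spinning. A small sketch — the `cpu_is_pegged` helper and the 90%-per-vCPU threshold are mine, not from this thread; `pgrep` and `top -H` are standard host tools:

```shell
# Hypothetical helper: decide whether a qemu-kvm process is spinning on
# (almost) all of its vCPUs, given its %CPU as reported by top and its
# vCPU count. Treats >= 90% per vCPU as "pegged" (~400% for 4 vCPUs).
cpu_is_pegged() {
    cpu="$1"; vcpus="$2"
    awk -v c="$cpu" -v n="$vcpus" 'BEGIN { exit !(c >= n * 90) }'
}

# On the host (commented out here; needs the running guest):
#   pid=$(pgrep -f 'guest=web-001')   # qemu-kvm process of the VM
#   top -b -H -n 1 -p "$pid"          # per-thread view: are the vCPU
#                                     # ("CPU N/KVM") threads spinning?
```

If all vCPU threads spin while the I/O threads are idle, the guest itself is stuck in a tight loop, which points at a guest-visible migration problem rather than storage or networking.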
    
    
    [[Migration source host]]
    OS Version: RHEL - 4.0 - 7.1.el7
    OS Description: Red Hat Virtualization Host 4.0 (el7.3)
    Kernel Version: 3.10.0 - 514.10.2.el7.x86_64
    KVM Version: 2.6.0 - 28.el7_3.6
    LIBVIRT Version: libvirt-2.0.0-10.el7_3.5
    VDSM Version: vdsm-4.18.24-3.el7ev
    SPICE Version: 0.12.4 - 20.el7_3
    GlusterFS Version: [N/A]
    CEPH Version: librbd1-0.94.5-1.el7
    Kernel Features: N/A
    
    CPU model: Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
    CPU Type: Intel Broadwell-noTSX Family
    
    [[Migration destination host]]
    OS Version: RHEL - 7.5 - 6.2.el7
    OS Description: Red Hat Virtualization Host 4.2.6 (el7.5)
    Kernel Version: 3.10.0 - 862.14.4.el7.x86_64
    KVM Version: 2.10.0 - 21.el7_5.4
    LIBVIRT Version: libvirt-3.9.0-14.el7_5.7
    VDSM Version: vdsm-4.20.39.1-1.el7ev
    SPICE Version: 0.14.0 - 2.el7_5.4
    GlusterFS Version: glusterfs-3.8.4-54.15.el7rhgs
    CEPH Version: librbd1-0.94.5-2.el7
    Kernel Features: RETP: 1, IBRS: 0, PTI: 1
    
    CPU model: Intel(R) Xeon(R) CPU E5-2697 v4 @ 2.30GHz
    CPU Type: Intel Broadwell-noTSX-IBRS Family
    
    [[/var/log/libvirt/qemu/web-001.log]]
    2018-10-10 03:56:04.828+0000: starting up libvirt version: 3.9.0, package: 
14.el7_5.7 (Red Hat, Inc. <http://bugzilla.redhat.com/bugzilla>, 
2018-07-27-12:56:56, x86-037.build.eng.bos.redhat.com), qemu version: 
2.10.0(qemu-kvm-rhev-2.10.0-21.el7_5.4), hostname: host-04
    LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin 
QEMU_AUDIO_DRV=spice /usr/libexec/qemu-kvm -name guest=web-001,debug-threads=on 
-S -object 
secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-xyxon_verify_web-001/master-key.aes
 -machine pc-i440fx-rhel7.2.0,accel=kvm,usb=off,dump-guest-core=off -cpu 
SandyBridge -m size=12582912k,slots=16,maxmem=4294967296k -realtime mlock=off 
-smp 4,maxcpus=16,sockets=16,cores=1,threads=1 -numa 
node,nodeid=0,cpus=0-3,mem=12288 -uuid 9b038cf5-1913-42a1-adfe-3b1eb06d4611 
-smbios 'type=1,manufacturer=Red Hat,product=RHEV 
Hypervisor,version=7.2-20160711.0.el7ev,serial=7602B47E-E827-11E3-B8AB-6CAE8B08B02B,uuid=9b038cf5-1913-42a1-adfe-3b1eb06d4611'
 -no-user-config -nodefaults -chardev 
socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-13-web-001/monitor.sock,server,nowait
 -mon chardev=charmonitor,id=monitor,mode=control -rtc 
base=2018-10-10T03:56:05,driftfix=slew -global kvm-pit.lost_tick_policy=delay 
-no-hpet -no-shutdown -boot strict=on -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 
-device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device 
virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive 
if=none,id=drive-ide0-1-0,readonly=on -device 
ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0,bootindex=2 -drive 
file=/rhev/data-center/00000001-0001-0001-0001-000000000383/450774af-11c3-4ea0-9d42-845809159f51/images/36a3d651-6b37-4dab-b4aa-911dcf0218ad/9e89f573-6931-4beb-937b-d50e8af67aa6,format=qcow2,if=none,id=drive-virtio-disk0,serial=36a3d651-6b37-4dab-b4aa-911dcf0218ad,cache=none,werror=stop,rerror=stop,aio=native
 -device 
virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
 -netdev tap,fd=56,id=hostnet0,vhost=on,vhostfd=58 -device 
virtio-net-pci,netdev=hostnet0,id=net0,mac=00:1a:4a:16:01:70,bus=pci.0,addr=0x3 
-netdev tap,fd=59,id=hostnet1,vhost=on,vhostfd=60 -device 
virtio-net-pci,netdev=hostnet1,id=net1,mac=00:1a:4a:16:01:71,bus=pci.0,addr=0x8 -chardev 
socket,id=charserial0,path=/var/run/ovirt-vmconsole-console/9b038cf5-1913-42a1-adfe-3b1eb06d4611.sock,server,nowait
 -device isa-serial,chardev=charserial0,id=serial0 -chardev 
socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/9b038cf5-1913-42a1-adfe-3b1eb06d4611.com.redhat.rhevm.vdsm,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
 -chardev 
socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/9b038cf5-1913-42a1-adfe-3b1eb06d4611.org.qemu.guest_agent.0,server,nowait
 -device 
virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
 -chardev spicevmc,id=charchannel2,name=vdagent -device 
virtserialport,bus=virtio-serial0.0,nr=3,chardev=charchannel2,id=channel2,name=com.redhat.spice.0
 -device usb-tablet,id=input0 -vnc 0:21,password -k ja -spice 
tls-port=5922,addr=0,x509-dir=/etc/pki/vdsm/libvirt-spice,tls-channel=default,tls-channel=main,tls-channel=display,tls-channel=inputs,tls-channel=cursor,tls-channel=playback,tls-channel=record,tls-channel=smartcard,tls-channel=usbredir,seamless-migration=on
 -k ja -device 
qxl-vga,id=video0,ram_size=67108864,vram_size=8388608,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2
 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg 
timestamp=on
    2018-10-10T03:56:05.017861Z qemu-kvm: -drive 
file=/rhev/data-center/00000001-0001-0001-0001-000000000383/450774af-11c3-4ea0-9d42-845809159f51/images/36a3d651-6b37-4dab-b4aa-911dcf0218ad/9e89f573-6931-4beb-937b-d50e8af67aa6,format=qcow2,if=none,id=drive-virtio-disk0,serial=36a3d651-6b37-4dab-b4aa-911dcf0218ad,cache=none,werror=stop,rerror=stop,aio=native:
 'serial' is deprecated, please use the corresponding option of '-device' 
instead
    2018-10-10T03:56:05.100865Z qemu-kvm: warning: CPU(s) not present in any 
NUMA nodes: CPU 4 [socket-id: 4, core-id: 0, thread-id: 0], CPU 5 [socket-id: 
5, core-id: 0, thread-id: 0], CPU 6 [socket-id: 6, core-id: 0, thread-id: 0], 
CPU 7 [socket-id: 7, core-id: 0, thread-id: 0], CPU 8 [socket-id: 8, core-id: 
0, thread-id: 0], CPU 9 [socket-id: 9, core-id: 0, thread-id: 0], CPU 10 
[socket-id: 10, core-id: 0, thread-id: 0], CPU 11 [socket-id: 11, core-id: 0, 
thread-id: 0], CPU 12 [socket-id: 12, core-id: 0, thread-id: 0], CPU 13 
[socket-id: 13, core-id: 0, thread-id: 0], CPU 14 [socket-id: 14, core-id: 0, 
thread-id: 0], CPU 15 [socket-id: 15, core-id: 0, thread-id: 0]
    2018-10-10T03:56:05.100893Z qemu-kvm: warning: All CPU(s) up to maxcpus 
should be described in NUMA config, ability to start up with partial NUMA 
mappings is obsoleted and will be removed in future
    main_channel_link: add main channel client
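    The two NUMA warnings in the log are benign but self-explanatory: the command line declares `-smp 4,maxcpus=16`, yet `-numa node,nodeid=0,cpus=0-3` places only the first four CPUs in a NUMA node, leaving hot-pluggable CPUs 4-15 unmapped, and the warning says such partial mappings are obsoleted. A command-line fragment describing all of them would look roughly like the following (a sketch only; on RHV this line is generated by VDSM from the VM's NUMA settings and is not meant to be hand-edited):

```
-smp 4,maxcpus=16,sockets=16,cores=1,threads=1 \
-numa node,nodeid=0,cpus=0-15,mem=12288
```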
    _______________________________________________
    Users mailing list -- users@ovirt.org
    To unsubscribe send an email to users-le...@ovirt.org
    Privacy Statement: https://www.ovirt.org/site/privacy-policy/
    oVirt Code of Conduct: 
https://www.ovirt.org/community/about/community-guidelines/
    List Archives: 
https://lists.ovirt.org/archives/list/users@ovirt.org/message/NKK2VJWH6R67KP5GWVDVFO5ZISU5V3JF/
    
