* Grundmann, Christian (christian.grundm...@fabasoft.com) wrote:
> Hi,
> Dan sent me over to you,
> please let me know if I can provide additional information.
Hi Christian,
  Thanks for reporting this.

> Software versions:
> ovirt-node-iso-3.6-0.999.201510221942.el7.centos.iso
>
> qemu-img-ev-2.3.0-29.1.el7.x86_64
> qemu-kvm-ev-2.3.0-29.1.el7.x86_64
> qemu-kvm-common-ev-2.3.0-29.1.el7.x86_64
> qemu-kvm-tools-ev-2.3.0-29.1.el7.x86_64
> ipxe-roms-qemu-20130517-7.gitc4bce43.el7.noarch
> kernel-3.10.0-229.14.1.el7.x86_64
> gperftools-libs-2.4-7.el7.x86_64
>
> Command line:
> /usr/libexec/qemu-kvm -name myvmname -S -machine rhel6.5.0,accel=kvm,usb=off
> -cpu Westmere -m 7168 -realtime mlock=off -smp
> 2,maxcpus=16,sockets=16,cores=1,threads=1 -uuid
> 5b6b8899-5a9d-4c07-a6aa-6171527ad319 -smbios
> type=1,manufacturer=oVirt,product=oVirt
> Node,version=3.6-0.999.201510221942.el7.centos,serial=30343536-3138-5A43-4A34-323630303253,uuid=5b6b8899-5a9d-4c07-a6aa-6171527ad319
> -nographic -no-user-config -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/myvmname.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc
> base=2015-11-15T20:04:35,driftfix=slew -global
> kvm-pit.lost_tick_policy=discard -no-hpet -no-shutdown -boot strict=on
> -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -device
> virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x4 -device
> virtio-serial-pci,id=virtio-serial0,max_ports=16,bus=pci.0,addr=0x5 -drive
> if=none,id=drive-ide0-1-0,readonly=on,format=raw,serial= -device
> ide-cd,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/rhev/data-center/00000002-0002-0002-0002-0000000000e2/5df61b84-8746-4460-b148-65cc0eb8d29c/images/8202b81d-6191-495f-8c9d-7d90baffaecf/d7665e07-1786-4051-aa26-0a3e1c9d2574,if=none,id=drive-virtio-disk0,format=qcow2,serial=8202b81d-6191-495f-8c9d-7d90baffaecf,cache=none,werror=stop,rerror=stop,aio=native
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> -netdev tap,fd=39,id=hostnet0,vhost=on,vhostfd=65 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:83:a2:0e,bus=pci.0,addr=0x3
> -chardev
> socket,id=charchannel0,path=/var/lib/libvirt/qemu/channels/5b6b8899-5a9d-4c07-a6aa-6171527ad319.com.redhat.rhevm.vdsm,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=com.redhat.rhevm.vdsm
> -chardev
> socket,id=charchannel1,path=/var/lib/libvirt/qemu/channels/5b6b8899-5a9d-4c07-a6aa-6171527ad319.org.qemu.guest_agent.0,server,nowait
> -device
> virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=org.qemu.guest_agent.0
> -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x7 -msg timestamp=on
>
> Stack trace:
>
> gdb --batch /usr/libexec/qemu-kvm core.14750.1447544080.dump -ex "set
> pagination off" -ex "thread apply all bt"

Can you please use 'thread apply all bt full'? The 'full' gives a little
more info. Also, if you've not already got it installed, can you please
install the debuginfo package for qemu; it gives a lot more information
in backtraces.
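Something like this should do it (assuming yum-utils is installed for
debuginfo-install, and that a debuginfo repo matching your qemu-kvm-ev
packages is available - adjust to your setup):

  debuginfo-install qemu-kvm-ev
  gdb --batch /usr/libexec/qemu-kvm core.14750.1447544080.dump \
      -ex "set pagination off" -ex "thread apply all bt full"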
> Thread 1 (Thread 0x7fa8b16afc00 (LWP 14750)):
> #0  0x00007fa8ad2febe1 in tc_malloc () from /lib64/libtcmalloc.so.4
> #1  0x00007fa8b186b489 in malloc_and_trace ()
> #2  0x00007fa8afbc047f in g_malloc () from /lib64/libglib-2.0.so.0
> #3  0x00007fa8afbd666e in g_slice_alloc () from /lib64/libglib-2.0.so.0
> #4  0x00007fa8b17cbffd in virtio_blk_handle_output ()
> #5  0x00007fa8b197e6b6 in qemu_iohandler_poll ()
> #6  0x00007fa8b197e296 in main_loop_wait ()
> #7  0x00007fa8b177da4e in main ()

Does this part always look the same in your backtraces? The segfault in
tc_malloc is probably due to heap corruption, a double free or similar,
although it can be a bit tricky to find out what did it, since the
corruption might have happened a while before the place it crashed.

Some other ideas:
1) Was there anything nasty in /var/log/libvirt/qemu/yourvmname.log ?
2) Did you hit any IO errors and need to tell the VM to continue after a
   problem?
3) If this is pretty repeatable, then it would be interesting to try
   changing to a different disk emulation and see if the problem goes
   away - e.g. virtio-scsi would be a good one to try (there's a sketch
   of the command-line change at the end of this mail).

Dave

> Thx Christian
>
> -----Original Message-----
> From: Dan Kenigsberg [mailto:dan...@redhat.com]
> Sent: Friday, 13 November 2015 20:00
> To: Grundmann, Christian <christian.grundm...@fabasoft.com>
> Cc: 'us...@ovirt.org' <us...@ovirt.org>
> Subject: Re: [ovirt-users] Segmentation fault in libtcmalloc
>
> On Fri, Nov 13, 2015 at 07:56:14AM +0000, Grundmann, Christian wrote:
> > Hi,
> > I am using "ovirt-node-iso-3.6-0.999.201510221942.el7.centos.iso" (is
> > there something better to use?) for the nodes, and have random crashes
> > of VMs. The dumps are always the same:
> >
> > gdb --batch /usr/libexec/qemu-kvm core.45902.1447199164.dump [Thread
> > debugging using libthread_db enabled] Using host libthread_db library
> > "/lib64/libthread_db.so.1".
> > Core was generated by `/usr/libexec/qemu-kvm -name vmname -S -machine
> > rhel6.5.0,accel=kvm,usb=o'.
> > Program terminated with signal 11, Segmentation fault.
> > #0  0x00007f0c559c4353 in
> > tcmalloc::ThreadCache::ReleaseToCentralCache(tcmalloc::ThreadCache::Fr
> > eeList*, unsigned long, int) () from /lib64/libtcmalloc.so.4
> >
> > Didn't have the problem with 3.5 el6 nodes, so I don't know if it's
> > centos7 or 3.6.
>
> Due to the low-level nature of the problem, I'd guess it's a
> qemu / libtcmalloc malloc bug, and not directly related to oVirt.
>
> Please report the precise versions of qemu, kernel, libvirt and
> gperftools-libs to the qemu-devel mailing list, along with the complete
> stack trace and qemu command line, if possible.

--
Dr. David Alan Gilbert / dgilb...@redhat.com / Manchester, UK
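P.S. A rough, untested sketch of the disk-emulation change in (3), based
only on the command line above (under oVirt you'd normally do this by
switching the disk's interface to VirtIO-SCSI in the engine rather than
editing the command line by hand). The virtio-scsi-pci controller
(scsi0) is already present, so the image just needs attaching to it as a
scsi-hd instead of a virtio-blk-pci device:

  # before: image attached as virtio-blk
  -drive file=<image>,if=none,id=drive-virtio-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=native
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1

  # after: same image attached through the existing virtio-scsi controller
  -drive file=<image>,if=none,id=drive-scsi-disk0,format=qcow2,cache=none,werror=stop,rerror=stop,aio=native
  -device scsi-hd,bus=scsi0.0,drive=drive-scsi-disk0,id=scsi-disk0,bootindex=1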