I don't think it's related to huge pages...

I was using phoronix-test-suite to run benchmarks.  The 'batch/compilation' 
group shows the slowdown for all tests; the 'batch/computation' group shows 
some performance degradation, but not nearly as significant.

You could probably test this easily without phoronix - start a VM with 
almost nothing running, download the mainline Linux kernel, and compile it.  
This takes about 45 seconds in my case (72GB memory, 16 virtual CPUs, idle 
physical host running this VM).  Run it as many times as you want; it still 
takes ~45 seconds.

Migrate to a new idle host, and a kernel compile now takes ~90 seconds.  Wait 3 
hours (which should give khugepaged a chance to do its thing, I imagine); 
kernel compiles still take 90 seconds.  Reboot the virtual machine (run 
'shutdown -r now', reboot, whatever).  The first compile after the reboot will 
take ~45 seconds.  You don't even need to reset/destroy/shutdown the VM; just 
a reboot in the guest fixes the issue.
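For reference, the in-guest reproduction boils down to something like this (a rough sketch; the kernel tree location and redirections are illustrative):

```shell
# Rough sketch of the in-guest reproduction; paths are illustrative.
# Assumes a mainline kernel tree has already been downloaded to ~/linux.
cd ~/linux
make defconfig >/dev/null

t0=$(date +%s)
make -j"$(nproc)" >/dev/null        # ~45s before migration, ~90s after
t1=$(date +%s)

echo "compile took $((t1 - t0)) seconds"
```

Run it a few times before the migration, then again on the destination host, and compare the reported times.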

I'm going to test more with qemu-kvm 1.3 tomorrow, as I have a new/dedicated 
lab setup and recently built the 1.3 code base.  I'd be happy to run any test 
that would help diagnose the real issue here; I'm just not sure how best to 
diagnose it.

Thanks,
Mark
 
-----Original Message-----

Can you describe more details of the test you are performing? 

If transparent hugepages are being used, then there is the possibility that 
there has been no time for khugepaged to back guest memory with huge pages on 
the destination (I don't recall the interface for retrieving the number of 
huge pages for a given process; it's probably somewhere in /proc/pid/).
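One way to check this (assuming a kernel new enough to report it) is the per-mapping AnonHugePages counters in /proc/<pid>/smaps; summing them gives the THP-backed anonymous memory of the qemu process:

```shell
# Sum the THP-backed anonymous memory of a process via its smaps
# entries (AnonHugePages is reported per mapping, in kB).
pid=$$   # replace with the PID of the qemu-kvm process
awk '/^AnonHugePages:/ {sum += $2} END {print sum " kB of THP-backed memory"}' \
    "/proc/$pid/smaps"
```

If the total is near zero on the destination host but large on the source, that would point at missing huge page backing after migration.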

On Wed, Dec 19, 2012 at 12:43:37AM +0000, Mark Petersen wrote:
> Hello KVM,
> 
> I'm seeing something similar to this 
> (http://thread.gmane.org/gmane.comp.emulators.kvm.devel/100592) as well when 
> doing live migrations on Ubuntu 12.04 (Host and Guest) with a backported 
> libvirt 1.0 and qemu-kvm 1.2 (the improved performance for live migrations 
> of large-memory guests is great!)  The default libvirt 0.9.8 and qemu-kvm 
> 1.0 have the same issue.
> 
> Kernel is 3.2.0-34-generic and eglibc 2.15 on both host/guest.  I'm seeing 
> similar issues with both virtio and ide bus.  Hugetlbfs is not used, but 
> transparent hugepages are.  Host machines are dual Xeon E5-2660 
> processors.  I tried disabling EPT, but that doesn't seem to make a 
> difference, so I don't think it's a requirement to reproduce.
> 
> If I use Ubuntu 10.04 guest with eglibc 2.11 and any of these kernels I don't 
> seem to have the issue:
> 
> linux-image-2.6.32-32-server - 2.6.32-32.62 
> linux-image-2.6.32-38-server - 2.6.32-38.83 
> linux-image-2.6.32-43-server - 2.6.32-43.97 
> linux-image-2.6.35-32-server - 2.6.35-32.68~lucid1 
> linux-image-2.6.38-16-server - 2.6.38-16.67~lucid1 
> linux-image-3.0.0-26-server  - 3.0.0-26.43~lucid1
> linux-image-3.2-5 - mainline 3.2.5 kernel
> 
> I'm guessing it's a libc issue (or at least a libc change causing the 
> issue), as it doesn't seem to be kernel-related.
> 
> I'll try other distributions as a guest (probably Debian/Ubuntu) with newer 
> libc's and see if I can pinpoint the issue to a libc version.  Any other 
> ideas?
> 
> Shared disk backend is clvm/LV via FC to EMC SAN, not sure what else might be 
> relevant.
> 
> Thanks,
> Mark
> 
> 
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in the 
> body of a message to [email protected] More majordomo info at  
> http://vger.kernel.org/majordomo-info.html