Broken link on KVM website

2009-04-20 Thread Matteo Settenvini
Hello,

just a quick bug report: on http://www.linux-kvm.org/page/Documents,
the Doxygen Documentation link is broken (http://kvmapi.ath.cx/
doesn't respond).
Does a mirror exist?

Thanks for any help,
-- 
Matteo Settenvini
FSF Associated Member
Email : mat...@member.fsf.org


-BEGIN GEEK CODE BLOCK-
Version: 3.12
GCS d--(-) s+:- a-- C++ UL+++
P?++ L+++$ E W+++ N+ o?
w--- O- M++ V? PS++ PE- Y+++
PGP+++ t+ 5 X- R tv-(--) b+++
DI+ D++ G++ e h+ r- y?
--END GEEK CODE BLOCK--


Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Avi Kivity

Gerd Hoffmann wrote:

Right now everything in the vcpu is emulated in the kernel. Everything
else is emulated either in the kernel (irqchip) or in userspace. This
makes things easier to understand, and is more future-friendly if more
cpu features become virtualized by hardware.

While these are not compelling reasons, they at least tip the balance
in favour of a kernel implementation.


The xen pv-on-hvm drivers use an msr to indicate "please place the 
hypercall page here".  Handling that in kernel isn't an option IMHO.


The contents of the hypercall page are vendor specific.  This can be 
handled from userspace (though ideally we'd abstract the cpu vendor 
away).  The Hyper-V hypercall page is more problematic, as it's 
specified to be an overlay; the page doesn't have to exist in guest RAM.
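
For illustration, a minimal sketch of what the vendor-specific part boils
down to: the opcode in each stub is the only real difference.  This assumes
the usual Xen-style layout of 32-byte slots; the function and constant names
are invented for the example, not taken from any actual code.

#include <stdint.h>
#include <string.h>

#define HCALL_SLOT_SIZE  32
#define HCALL_PAGE_SIZE  4096

static void fill_hypercall_page(uint8_t *page, int is_amd)
{
    static const uint8_t vmcall[3]  = { 0x0f, 0x01, 0xc1 };  /* Intel VMCALL */
    static const uint8_t vmmcall[3] = { 0x0f, 0x01, 0xd9 };  /* AMD VMMCALL  */
    const uint8_t *op = is_amd ? vmmcall : vmcall;
    int slot;

    memset(page, 0xcc, HCALL_PAGE_SIZE);              /* pad with int3 */
    for (slot = 0; slot < HCALL_PAGE_SIZE / HCALL_SLOT_SIZE; slot++) {
        uint8_t *p = page + slot * HCALL_SLOT_SIZE;

        *p++ = 0xb8;                 /* mov $slot, %eax (imm32 follows)    */
        memcpy(p, &slot, 4);         /* little-endian imm32; x86 host only */
        p += 4;
        memcpy(p, op, 3);            /* the only vendor-specific bytes     */
        p += 3;
        *p = 0xc3;                   /* ret */
    }
}

A guest then invokes hypercall N by calling into page + N * 32, which is why
only the opcode needs to change between vendors.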


--
error compiling committee.c: too many arguments to function



Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Avi Kivity

Gerd Hoffmann wrote:

On 04/20/09 10:26, Avi Kivity wrote:

Gerd Hoffmann wrote:

The xen pv-on-hvm drivers use an msr to indicate "please place the
hypercall page here". Handling that in kernel isn't an option IMHO.


The contents of the hypercall page are vendor specific. This can be
handled from userspace (though ideally we'd abstract the cpu vendor
away).


Well, xenner doesn't do vmcalls, so the page isn't vendor specific.  


Well, for true pv (not pv-on-hvm) it wouldn't use the MSR, would it?

It looks different for 32bit / 64bit guests though.  And it actually 
can be multiple pages (with one msr write per page).  So the interface 
for in-kernel handling would be more complex than "here is a hypercall 
page for you".


To different MSRs, or multiple writes to the same MSR?



 The Hyper-V hypercall page is more problematic, as it's specified to 
 be an overlay; the page doesn't have to exist in guest RAM.


In userspace it should be easy to handle though as qemu can just 
create a new memory slot, right?


It depends on whether the MSR may be considered global, or is required to be 
per-cpu.  I need to refresh my memory on this, but I remember reading it 
and saying 'yuck'.


--
error compiling committee.c: too many arguments to function



Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Gerd Hoffmann

On 04/20/09 10:26, Avi Kivity wrote:

Gerd Hoffmann wrote:

The xen pv-on-hvm drivers use an msr to indicate "please place the
hypercall page here". Handling that in kernel isn't an option IMHO.


The contents of the hypercall page are vendor specific. This can be
handled from userspace (though ideally we'd abstract the cpu vendor
away).


Well, xenner doesn't do vmcalls, so the page isn't vendor specific.  It 
looks different for 32bit / 64bit guests though.  And it actually can be 
multiple pages (with one msr write per page).  So the interface for 
in-kernel handling would be more complex than "here is a hypercall page 
for you".


 The Hyper-V hypercall page is more problematic, as it's specified to 
 be an overlay; the page doesn't have to exist in guest RAM.


In userspace it should be easy to handle though as qemu can just create 
a new memory slot, right?


cheers,
  Gerd


Re: [PATCH 5/5] add ksm kernel shared memory driver.

2009-04-20 Thread Alan Cox
The minor number you are using already belongs to another project.

10,234 is free, but it would be good to know what device naming is
proposed. I imagine other folks would like to know why you aren't using
sysfs or similar, or extending /dev/kvm?
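
(For context, claiming a fixed minor under the misc major (10) looks roughly
like the sketch below; the minor value, device name and ops are placeholders
for illustration, not what ksm actually registers.)

#include <linux/fs.h>
#include <linux/miscdevice.h>
#include <linux/module.h>

static const struct file_operations example_fops = {
    .owner = THIS_MODULE,
    /* .unlocked_ioctl would carry the register-memory ioctls */
};

static struct miscdevice example_misc = {
    .minor = 234,            /* a fixed minor under the misc major (10)  */
    .name  = "example",      /* udev would create /dev/example from this */
    .fops  = &example_fops,
};

static int __init example_init(void)
{
    return misc_register(&example_misc);
}

static void __exit example_exit(void)
{
    misc_deregister(&example_misc);
}

module_init(example_init);
module_exit(example_exit);
MODULE_LICENSE("GPL");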

Alan


Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Andi Kleen
 It depends if the MSR may be considered global, or is required to be 
 per-cpu.  Need to refresh my memory on this, but I remember reading this 

Machine Check MSRs need to be per-CPU. The latest kernels have some support
for shared banks (between CPUs), but it's better not to have them, and
older OSes don't like them.
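
(Roughly the per-vCPU shape that implies -- a bank of MSRs per virtual CPU,
dispatched on the MSR index.  The bank count, struct and function names below
are illustrative only, not necessarily what the MCE patch itself uses.)

#include <linux/types.h>
#include <linux/errno.h>

#define EXAMPLE_MCE_BANKS     6          /* example bank count               */
#define MSR_IA32_MCG_CAP      0x179
#define MSR_IA32_MCG_STATUS   0x17a
#define MSR_IA32_MC0_CTL      0x400      /* four MSRs per bank start here    */

struct example_vcpu_mce {
	u64 mcg_cap;
	u64 mcg_status;
	u64 banks[EXAMPLE_MCE_BANKS * 4];    /* CTL, STATUS, ADDR, MISC per bank */
};

static int example_mce_set_msr(struct example_vcpu_mce *mce, u32 msr, u64 data)
{
	if (msr == MSR_IA32_MCG_STATUS) {
		mce->mcg_status = data;
		return 0;
	}
	if (msr >= MSR_IA32_MC0_CTL &&
	    msr < MSR_IA32_MC0_CTL + EXAMPLE_MCE_BANKS * 4) {
		mce->banks[msr - MSR_IA32_MC0_CTL] = data;   /* one slot per MSR */
		return 0;
	}
	return -EINVAL;                      /* real code handles more cases */
}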

-Andi
-- 
a...@linux.intel.com -- Speaking for myself only.


Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Avi Kivity

Andi Kleen wrote:
It depends on whether the MSR may be considered global, or is required to be 
per-cpu.  I need to refresh my memory on this, but I remember reading it 



Machine Check MSRs need to be per-CPU. The latest kernels have some support
for shared banks (between CPUs), but it's better not to have them, and
older OSes don't like them.
  


In the case of the hypercall page MSR, it is global and shared among all 
processors.


--
error compiling committee.c: too many arguments to function



Re: [PATCH 5/5] add ksm kernel shared memory driver.

2009-04-20 Thread Avi Kivity

Alan Cox wrote:

The minor number you are using already belongs to another project.

10,234 is free but it would be good to know what device naming is
proposed. I imagine other folks would like to know why you aren't using
sysfs or similar or extending /dev/kvm ?
  


ksm was deliberately made independent of kvm.  While there may or may 
not be uses of ksm without kvm (you could run ordinary qemu, but no one 
would do this in a production deployment), keeping them separate helps 
avoid unnecessary interdependencies.  For example all tlb flushes are 
mediated through mmu notifiers instead of ksm hooking directly into kvm.
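
(A rough sketch of that decoupling, against the mmu notifier API of this era;
simplified, with error handling and teardown elided -- the real kvm callbacks
do much more bookkeeping.)

#include <linux/mm.h>
#include <linux/mmu_notifier.h>

static void example_invalidate_page(struct mmu_notifier *mn,
				    struct mm_struct *mm,
				    unsigned long address)
{
	/* a hypervisor would drop its shadow mapping / TLB entry for 'address' */
}

static const struct mmu_notifier_ops example_mmu_ops = {
	.invalidate_page = example_invalidate_page,
};

static struct mmu_notifier example_mn = {
	.ops = &example_mmu_ops,
};

static int example_attach(struct mm_struct *mm)
{
	return mmu_notifier_register(&example_mn, mm);
}

ksm (or anything else) then only has to change page tables through the core
MM; whoever registered a notifier gets told, without any direct dependency.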


--
error compiling committee.c: too many arguments to function



Re: virtio net regression

2009-04-20 Thread Mark McLoughlin
On Sun, 2009-04-19 at 14:48 +0300, Avi Kivity wrote:
 Antoine Martin wrote:
  -BEGIN PGP SIGNED MESSAGE-
  Hash: SHA512
 
  Wireshark was showing a huge amount of invalid packets (wrong checksum)
  - - that was the cause of the slowdown.
  Simply rebooting the host into 2.6.28.9 fixed *everything*, regardless
  of whether the guests use virtio or ne2k_pci/etc.
  The guests are still running 2.6.29.1, but I am not likely to try that
  release again on the host anytime soon! Ouch!

 
 
 Strange, no significant tun changes between .28 and .29.

Sounds to me like it's this:

  
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2f181855a0

davem said he was queueing up for stable, but it's not in yet:

  http://kerneltrap.org/mailarchive/linux-netdev/2009/3/30/5337934

I'll check that it's in the queue.

Cheers,
Mark.



Re: Broken link on KVM website

2009-04-20 Thread Avi Kivity

Matteo Settenvini wrote:

Hello,

just a quick bugreport: on http://www.linux-kvm.org/page/Documents,
the Doxygen Documentation link is broken (http://kvmapi.ath.cx/
doesn't respond).
Does a mirror exist?

  


Probably not.  But the doxygen docs only document libkvm, which is a 
fairly unimportant bit.


Unfortunately all the important bits are undocumented.

--
error compiling committee.c: too many arguments to function



Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Gerd Hoffmann

On 04/20/09 11:05, Avi Kivity wrote:

Gerd Hoffmann wrote:

On 04/20/09 10:26, Avi Kivity wrote:

Gerd Hoffmann wrote:

The xen pv-on-hvm drivers use an msr to indicate "please place the
hypercall page here". Handling that in kernel isn't an option IMHO.


The contents of the hypercall page are vendor specific. This can be
handled from userspace (though ideally we'd abstract the cpu vendor
away).


Well, xenner doesn't do vmcalls, so the page isn't vendor specific.


Well, for true pv (not pv-on-hvm) it wouldn't use the MSR, would it?


Yes, the MSR is used for pv-on-hvm only.


It looks different for 32bit / 64bit guests though. And it actually
can be multiple pages (with one msr write per page). So the interface
for in-kernel handling would be more complex than "here is a hypercall
page for you".


To different MSRs, or multiple writes to the same MSR?


Same MSR, multiple writes (page number in the low bits).
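
A userspace-side sketch of what handling such a write could look like; the
MSR index, helper names and exact bit layout here are placeholders (the real
protocol discovers the MSR via CPUID and is defined by the Xen headers):

#include <stdint.h>

#define EXAMPLE_HCALL_MSR   0x40000000u   /* placeholder MSR index */
#define HCALL_PAGE_SHIFT    12

/* assumed helpers, not defined here */
void *map_guest_frame(uint64_t gfn);
void  fill_one_hypercall_page(void *dst, unsigned int idx, int is_64bit_guest);

static void handle_hcall_msr_write(uint32_t msr, uint64_t val, int is_64bit_guest)
{
    unsigned int idx;
    uint64_t gfn;

    if (msr != EXAMPLE_HCALL_MSR)
        return;                                    /* not ours */

    idx = val & ((1u << HCALL_PAGE_SHIFT) - 1);    /* page number, low bits  */
    gfn = val >> HCALL_PAGE_SHIFT;                 /* frame to place it into */

    fill_one_hypercall_page(map_guest_frame(gfn), idx, is_64bit_guest);
}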

cheers,
  Gerd



fedora 10 x86_64 breakage under KVM, due to KVM_CLOCK

2009-04-20 Thread Lennert Buytenhek
(please CC, not on the list)

Hi all,

Fedora 10 randomly hangs for me when run under KVM, whereas (unpatched)
F9 works fine.  When this happens, kvm_stat shows that the VM is seeing
1000 IRQ exits per second, but nothing else seems to happen.  The latest
F9 update kernel seems to be bust as well.  (I noticed that the KVM guest
support status page says that Alexey E. is seeing the same thing.)

I tried about two dozen different kernel packages from koji, and it turns
out that kernel-2.6.26-0.122.rc9.git4.fc10 is the last known good kernel,
and kernel-2.6.26-0.124.rc9.git5.fc10 is the first kernel that's broken.

The F10 instability has been present since this commit (attached below as well):


http://fedora.gitbits.net/?p=kernel;a=commit;h=a5991f36968f44d4d3c64fc5aaa285a21de1ba54

I built three new kernels from the latest 2.6.27-170.2.56.fc10 update
in Fedora 10:
1. One with PARAVIRT and all associated options disabled entirely.
2. One with only KVM_GUEST disabled.
3. One with only KVM_CLOCK disabled (and paravirt and KVM_GUEST enabled).

Kernel (1) is stable.  Kernel (2) is unstable like the unpatched F10
kernels are, in that it randomly locks up.  Kernel (3) is stable as
well, which suggests that KVM_CLOCK is what's causing the lockups.

I tried booting the original F10 update kernel with no-kvmclock on
the command line as well, and that makes things stable as well.

I tried with the latest F10 updates-testing kernel (2.6.29.1-30.fc10),
and that shows the same thing: with no-kvmclock it works fine, otherwise
it hangs randomly.

Running KVM 84 on x86_64 CentOS 5.3, with the kvm rpms from lfarkas.org.
The host CPU is a Q6600 Core 2 Quad.

Any ideas?


thanks,
Lennert


commit a5991f36968f44d4d3c64fc5aaa285a21de1ba54
Author: davej davej
Date:   Wed Jul 9 04:54:02 2008 +

Reenable paravirt on x86-64.

diff --git a/config-x86_64-generic b/config-x86_64-generic
index e0c1e61..a2dd13c 100644
--- a/config-x86_64-generic
+++ b/config-x86_64-generic
@@ -254,7 +254,11 @@ CONFIG_INTEL_IOATDMA=m
 
 CONFIG_SENSORS_I5K_AMB=m
 
-# CONFIG_PARAVIRT_GUEST is not set
+CONFIG_PARAVIRT_GUEST=y
+CONFIG_KVM_CLOCK=y
+CONFIG_KVM_GUEST=y
+CONFIG_PARAVIRT=y
+
 # CONFIG_COMPAT_VDSO is not set
 CONFIG_PROVIDE_OHCI1394_DMA_INIT=y
 # CONFIG_DEBUG_PER_CPU_MAPS is not set
diff --git a/kernel.spec b/kernel.spec
index cd4b749..ff4c12a 100644
--- a/kernel.spec
+++ b/kernel.spec
@@ -21,7 +21,7 @@ Summary: The Linux kernel
 # works out to the offset from the rebase, so it doesn't get too ginormous.
 #
 %define fedora_cvs_origin 623
-%define fedora_build %(R=$Revision: 1.744 $; R=${R%% \$}; R=${R##: 1.}; expr $R - %{fedora_cvs_origin})
+%define fedora_build %(R=$Revision: 1.745 $; R=${R%% \$}; R=${R##: 1.}; expr $R - %{fedora_cvs_origin})
 
 # base_sublevel is the kernel version we're starting with and patching
 # on top of -- for example, 2.6.22-rc7-git1 starts with a 2.6.21 base,
@@ -1794,6 +1794,9 @@ fi
 %kernel_variant_files -a /%{image_install_path}/xen*-%{KVERREL}.xen -e /etc/ld.so.conf.d/kernelcap-%{KVERREL}.xen.conf %{with_xen} xen
 
 %changelog
+* Wed Jul 09 2008 Dave Jones da...@redhat.com
+- Reenable paravirt on x86-64.
+
 * Tue Jul  8 2008 Roland McGrath rol...@redhat.com
 - new bleeding-edge utrace code



Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Avi Kivity

Gerd Hoffmann wrote:

Well, xenner doesn't do vmcalls, so the page isn't vendor specific.


Well, for true pv (not pv-on-hvm) it wouldn't use the MSR, would it?


Yes, the MSR is used for pv-on-hvm only.


So it isn't relevant for Xenner?

That said, I'd like to be able to emulate the Xen HVM hypercalls.  But 
in any case, the hypercall implementation has to be in the kernel, so I 
don't see why the MSR shouldn't be.  Especially if we need to support 
tricky bits like continuations.



Same MSR, multiple writes (page number in the low bits).


Nasty.  The hypervisor has to remember all of the pages, so it can 
update them for live migration.  And it has to forget about them on 
reset.  And something needs to be done for kexec...
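
(A sketch of the bookkeeping that implies, purely illustrative:)

#include <stdint.h>
#include <string.h>

#define MAX_HCALL_PAGES 8

struct hcall_page_state {
    uint64_t gfn[MAX_HCALL_PAGES];   /* frames the guest asked to have patched */
    int      count;
};

static void hcall_page_record(struct hcall_page_state *s, uint64_t gfn)
{
    if (s->count < MAX_HCALL_PAGES)
        s->gfn[s->count++] = gfn;    /* remember it for a later re-patch */
}

static void hcall_pages_reset(struct hcall_page_state *s)
{
    memset(s, 0, sizeof(*s));        /* forget everything on guest reset */
}

/* After migrating to a host with a different CPU vendor, something would
 * walk s->gfn[] and re-emit the vendor-appropriate stubs into each page. */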


--
error compiling committee.c: too many arguments to function



Re: fedora 10 x86_64 breakage under KVM, due to KVM_CLOCK

2009-04-20 Thread Avi Kivity

Lennert Buytenhek wrote:

(please CC, not on the list)

Hi all,

Fedora 10 randomly hangs for me when run under KVM, whereas (unpatched)
F9 works fine.  When this happens, kvm_stat shows that the VM is seeing
1000 IRQ exits per second, but nothing else seems to happen.  The latest
F9 update kernel seems to be bust as well.  (I noticed that the KVM guest
support status page says that Alexey E. is seeing the same thing.)

I tried about two dozen different kernel packages from koji, and it turns
out that kernel-2.6.26-0.122.rc9.git4.fc10 is the last known good kernel,
and kernel-2.6.26-0.124.rc9.git5.fc10 is the first kernel that's broken.

The F10 instability appears since this commit (attached below as well):


http://fedora.gitbits.net/?p=kernel;a=commit;h=a5991f36968f44d4d3c64fc5aaa285a21de1ba54

I built three new kernels from the latest 2.6.27-170.2.56.fc10 update
in Fedora 10:
1. One with PARAVIRT and all associated options disabled entirely.
2. One with only KVM_GUEST disabled.
3. One with only KVM_CLOCK disabled (and paravirt and KVM_GUEST enabled).

Kernel (1) is stable.  Kernel (2) is unstable like the unpatched F10
kernels are, in that it randomly locks up.  Kernel (3) is stable as
well, which suggests that KVM_CLOCK is what's causing the lockups.

I tried booting the original F10 update kernel with no-kvmclock on
the command line as well, and that makes things stable as well.

I tried with the latest F10 updates-testing kernel (2.6.29.1-30.fc10),
and that shows the same thing: with no-kvmclock it works fine, otherwise
it hangs randomly.

Running KVM 84 on x86_64 CentOS 5.3, with the kvm rpms from lfarkas.org.
The host CPU is a Q6600 Core 2 Quad.

Any ideas?
  


This is a known issue.  While kvm-84 fixed kvmclock, an issue remained 
with cpu frequency scaling on older host kernels (including RHEL 5 / 
CentOS 5).  kvm-85 (to be released shortly) contains a fix (b395d156477a 
in kvm-userspace.git).



--
error compiling committee.c: too many arguments to function



Re: [PATCH 13/15] Add NMI injection support to SVM.

2009-04-20 Thread Dmitry Eremin-Solenikov
Gleb Natapov wrote:

 On Fri, Apr 17, 2009 at 03:12:57PM +, Dmitry Eremin-Solenikov wrote:
 
 This patch does expose some problems on real HW. The first NMI
 completes w/o problems. However, if I try to boot the kernel w/
 nmi_watchdog=1 or to trigger two NMIs from the monitor, the kernel is stuck
 somewhere.
 
 Can you try this patch instead of patch 13:
 

Seems to work.



-- 
With best wishes
Dmitry




Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Gerd Hoffmann

On 04/20/09 13:27, Avi Kivity wrote:

Gerd Hoffmann wrote:

Well, xenner doesn't do vmcalls, so the page isn't vendor specific.


Well, for true pv (not pv-on-hvm) it wouldn't use the MSR, would it?


Yes, the MSR is used for pv-on-hvm only.


So it isn't relevant for Xenner?


It is.  I still plan to merge xenner into qemu, and also support 
xen-style pv-on-hvm drivers.



That said, I'd like to be able to emulate the Xen HVM hypercalls. But in
any case, the hypercall implementation has to be in the kernel,


No.  With Xenner the xen hypercall emulation code lives in guest address 
space.



so I
don't see why the MSR shouldn't be.


I don't care that much, but /me thinks it would be easier to handle in 
userspace ...



Especially if we need to support
tricky bits like continuations.


Is there any reason to?  I *think* xen does it for better scheduling 
latency.  But with xen emulation sitting in guest address space we can 
schedule the guest at will anyway.



Same MSR, multiple writes (page number in the low bits).


Nasty. The hypervisor has to remember all of the pages, so it can update
them for live migration.


Xenner doesn't need update-on-migration, so there is no need at all to 
remember this.  At the end of the day it is just memcpy(guest, data, 
PAGESIZE) triggered by wrmsr.


cheers,
  Gerd


Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Avi Kivity

Gerd Hoffmann wrote:

That said, I'd like to be able to emulate the Xen HVM hypercalls. But in
any case, the hypercall implementation has to be in the kernel,


No.  With Xenner the xen hypercall emulation code lives in guest 
address space.


In this case the guest ring-0 code should trap the #GP, and install the 
hypercall page (which uses sysenter/syscall?).  No kvm or qemu changes 
needed.



Especially if we need to support
tricky bits like continuations.


Is there any reason to?  I *think* xen does it for better scheduling 
latency.  But with xen emulation sitting in guest address space we can 
schedule the guest at will anyway.


It also improves latency within the guest itself.  At least I think that's 
what the Hyper-V spec is saying.  You can interrupt the execution of 
a long hypercall, inject an interrupt, and resume.  Sort of like a 
rep/movs instruction, which the cpu can and will interrupt.
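
A rough sketch of the continuation idea, with invented names, just to show
the shape -- do a bounded amount of work, and if an event is pending, stash
the progress and ask for the hypercall instruction to be restarted:

enum hc_status { HC_DONE, HC_CONTINUE };

struct long_hypercall {
    unsigned long next;     /* progress cursor; survives the restart */
    unsigned long total;
};

static enum hc_status process_long_hypercall(struct long_hypercall *hc,
                                             int event_pending)
{
    const unsigned long batch = 64;

    while (hc->next < hc->total) {
        /* ... one unit of work for item hc->next ... */
        hc->next++;

        if ((hc->next % batch) == 0 && event_pending)
            return HC_CONTINUE;  /* rewind RIP to the hypercall, inject, resume */
    }
    return HC_DONE;
}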



Same MSR, multiple writes (page number in the low bits).


Nasty. The hypervisor has to remember all of the pages, so it can update
them for live migration.


Xenner doesn't need update-on-migration, so there is no need at all to 
remember this.  At the end of the day it is just memcpy(guest, data, 
PAGESIZE) triggered by wrmsr. 


For Xenner, no (and you don't need to intercept the msr at all), but for 
pv-on-hvm, you do need to update the code.


--
error compiling committee.c: too many arguments to function



Re: [PATCH 5/5] add ksm kernel shared memory driver.

2009-04-20 Thread Izik Eidus

Avi Kivity wrote:

Alan Cox wrote:

The minor number you are using already belongs to another project.

10,234 is free but it would be good to know what device naming is
proposed. I imagine other folks would like to know why you aren't using
sysfs or similar or extending /dev/kvm ?
  


ksm was deliberately made independent of kvm.  While there may or may 
not be uses of ksm without kvm (you could run ordinary qemu, but no 
one would do this in a production deployment), keeping them separate 
helps avoid unnecessary interdependencies.  For example all tlb 
flushes are mediated through mmu notifiers instead of ksm hooking 
directly into kvm.



Yes; besides, I do use sysfs for controlling the ksm behavior.
Ioctls are provided as an easier way for an application to register its memory.
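
(Purely as a sketch of that userspace path -- the device path, ioctl number
and struct layout below are placeholders, not the driver's actual ABI:)

#include <fcntl.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct example_mem_region {
    uint64_t addr;   /* userspace virtual address of the region */
    uint64_t size;   /* length in bytes                         */
};

#define EXAMPLE_REGISTER_REGION _IOW(0xAB, 0x01, struct example_mem_region)

static int register_region(void *buf, size_t len)
{
    struct example_mem_region region = {
        .addr = (uint64_t)(uintptr_t)buf,
        .size = len,
    };
    int fd = open("/dev/example-ksm", O_RDWR);

    if (fd < 0)
        return -1;
    if (ioctl(fd, EXAMPLE_REGISTER_REGION, &region) < 0) {
        close(fd);
        return -1;
    }
    return fd;       /* keep the fd open; closing may unregister */
}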


Re: [PATCH] KVM: Defer remote tlb flushes on invlpg (v4)

2009-04-20 Thread Andrea Arcangeli
On Sun, Apr 19, 2009 at 02:54:28PM -0300, Marcelo Tosatti wrote:
 I'm fine with your kvm_flush_local_tlb. Just one minor nit:
 
 +   /* get new asid before returning to guest mode */
 +   if (!test_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests))
 +   set_bit(KVM_REQ_TLB_FLUSH, &vcpu->requests);
 
 What's the test_bit for?

To avoid a write in case it was already set... but thinking twice I
guess the probability that it's already set is near zero, so I'll
remove it and I'll just do set_bit.

 It was nice to hide explicit knowledge about
 vcpu->kvm->remote_tlbs_dirty behind the interface instead of exposing
 it.

Hmm ok, if you prefer it I'll add it back. I guess ..._tlb_dirty_cond
is a better name, so it's clear it's not just checking the cond but the
dirty flag too.


Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Gerd Hoffmann

On 04/20/09 14:43, Avi Kivity wrote:

Gerd Hoffmann wrote:

That said, I'd like to be able to emulate the Xen HVM hypercalls. But in
any case, the hypercall implementation has to be in the kernel,


No. With Xenner the xen hypercall emulation code lives in guest
address space.


In this case the guest ring-0 code should trap the #GP, and install the
hypercall page (which uses sysenter/syscall?). No kvm or qemu changes
needed.


Doesn't fly.

Reason #1: In the pv-on-hvm case the guest runs on ring0.
Reason #2: Chicken-egg issue:  For the pv-on-hvm case only a few
   simple hypercalls are needed.  The code to handle them
   is small enough that it can be loaded directly into the
   hypercall page(s).

pure-pv doesn't need it in the first place.  But, yes, there I could 
simply trap #GP because the guest kernel runs on ring #1 (or #3 on 64bit).



Especially if we need to support
tricky bits like continuations.


Is there any reason to? I *think* xen does it for better scheduling
latency. But with xen emulation sitting in guest address space we can
schedule the guest at will anyway.


It also improves latency within the guest itself. At least I think that's
what the Hyper-V spec is saying. You can interrupt the execution of
a long hypercall, inject an interrupt, and resume. Sort of like a
rep/movs instruction, which the cpu can and will interrupt.


Hmm.  Needs investigation.  I'd expect the main source of latency is 
page table walking.  Xen works very differently from kvm+xenner here ...



For Xenner, no (and you don't need to intercept the msr at all),  but for
pv-on-hvm, you do need to update the code.


Xenner handling pv-on-hvm doesn't need code updates either.  Real Xen 
does, as it uses vmcall; I'm not sure how they handle migration.


cheers
  Gerd


Re: [PATCH] Add MCE support to KVM

2009-04-20 Thread Avi Kivity

Gerd Hoffmann wrote:

On 04/20/09 14:43, Avi Kivity wrote:

Gerd Hoffmann wrote:
That said, I'd like to be able to emulate the Xen HVM hypercalls. 
But in

any case, the hypercall implementation has to be in the kernel,


No. With Xenner the xen hypercall emulation code lives in guest
address space.


In this case the guest ring-0 code should trap the #GP, and install the
hypercall page (which uses sysenter/syscall?). No kvm or qemu changes
needed.


Doesn't fly.

Reason #1: In the pv-on-hvm case the guest runs on ring0.


Sure, in this case you need to trap the MSR in the kernel (or qemu).  
But the handler is no longer in the guest address space, and you do need 
to update the opcode.


Let's not confuse the two cases.


Reason #2: Chicken-egg issue:  For the pv-on-hvm case only a few
   simple hypercalls are needed.  The code to handle them
   is small enough that it can be loaded directly into the
   hypercall page(s).


Please elaborate.  What hypercalls are so simple that an exit into the 
hypervisor is not necessary?



Is there any reason to? I *think* xen does it for better scheduling
latency. But with xen emulation sitting in guest address space we can
schedule the guest at will anyway.


It also improves latency within the guest itself. At least I think that's
what the Hyper-V spec is saying. You can interrupt the execution of
a long hypercall, inject an interrupt, and resume. Sort of like a
rep/movs instruction, which the cpu can and will interrupt.


Hmm.  Needs investigation.  I'd expect the main source of latency 
is page table walking.  Xen works very differently from kvm+xenner here ...


kvm is mostly O(1).  We need to limit rmap chains, but we're fairly 
close.  The kvm paravirt mmu calls are not O(1), but we can easily use 
continuations there (and they're disabled on newer processors anyway).


Another area that worries me is virtio notification, which can take a 
long time.  It won't be trivial, but we can make it work:


- for the existing pio-to-userspace notification, add a bit that tells 
the kernel to repeat the instruction instead of continuing.  the 'outl' 
instruction is idempotent, so we can do partial work, and return to the 
kernel.
- if using hypercallfd/piofd to a pipe, we're offloading everything to 
another thread anyway, so we can return immediately
- if using hypercallfd/piofd to a kernel virtio server, it can return 0 
bytes written, indicating it needs a retry.  kvm can try to inject an 
interrupt if it sees this.



--
error compiling committee.c: too many arguments to function



Re: virtio net regression

2009-04-20 Thread Antoine Martin
Hi,

The bug report below does indeed match everything I have experienced.
Upon further inspection, 2.6.28.9 is also affected, just less so.

Unfortunately I have applied this patch to 2.6.29.1:
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2f181855a0
And if anything, it made things worse... (speed was down to just 6KB/s
because of the number of broken packets)
Any ideas?

Cheers
Antoine



Mark McLoughlin wrote:
 On Sun, 2009-04-19 at 14:48 +0300, Avi Kivity wrote:
 Antoine Martin wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA512

 Wireshark was showing a huge amount of invalid packets (wrong checksum)
 - - that was the cause of the slowdown.
 Simply rebooting the host into 2.6.28.9 fixed *everything*, regardless
 of whether the guests use virtio or ne2k_pci/etc.
 The guests are still running 2.6.29.1, but I am not likely to try that
 release again on the host anytime soon! Ouch!
   

 Strange, no significant tun changes between .28 and .29.
 
 Sounds to me like it's this:
 
   
 http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=2f181855a0
 
 davem said he was queueing up for stable, but it's not in yet:
 
   http://kerneltrap.org/mailarchive/linux-netdev/2009/3/30/5337934
 
 I'll check that it's in the queue.
 
 Cheers,
 Mark.
 


[PATCH] qemu: fix pci_enable_capabilities to set the CAP feature in pci::status

2009-04-20 Thread Gregory Haskins
(Applies to kvm-userspace.git:a1075de527f309850df278484f2ef4127827c6f4)

The PCI spec requires bit 4 of the config-space STATUS register to be set
in order to indicate that the capabilities pointer and capabilities area are
valid.  We have a pci_enable_capabilities() routine to fill out the
config-space metadata, but we leave the status bit cleared. It is not
apparent if this was intentionally omitted as part of the related
device-assignment support, or simply an oversight.  This patch completes
the function by also setting the status bit appropriately.

Signed-off-by: Gregory Haskins ghask...@novell.com
CC: Sheng Yang sh...@linux.intel.com
---

 qemu/hw/pci.c |2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/qemu/hw/pci.c b/qemu/hw/pci.c
index bf97c8c..5bfc4df 100644
--- a/qemu/hw/pci.c
+++ b/qemu/hw/pci.c
@@ -1009,6 +1009,8 @@ int pci_enable_capability_support(PCIDevice *pci_dev,
 if (!pci_dev)
 return -ENODEV;
 
+pci_dev->config[0x06] |= 0x10; // status = capabilities
+
 if (config_start == 0)
     pci_dev->cap.start = PCI_CAPABILITY_CONFIG_DEFAULT_START_ADDR;
 else if (config_start >= 0x40 && config_start < 0xff)



Re: [PATCH 13/15] Add NMI injection support to SVM.

2009-04-20 Thread Jan Kiszka
Gleb Natapov wrote:
 On Fri, Apr 17, 2009 at 03:12:57PM +, Dmitry Eremin-Solenikov wrote:
 This patch does expose some problems on real HW. The first NMI completes w/o
 problems. However, if I try to boot the kernel w/ nmi_watchdog=1 or to trigger
 two NMIs from the monitor, the kernel is stuck somewhere.

 Can you try this patch instead of patch 13:
 
 
 diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
 index 8b6f6e9..057a612 100644
 --- a/arch/x86/include/asm/kvm_host.h
 +++ b/arch/x86/include/asm/kvm_host.h
 @@ -766,6 +766,7 @@ enum {
  #define HF_GIF_MASK  (1 << 0)
  #define HF_HIF_MASK  (1 << 1)
  #define HF_VINTR_MASK (1 << 2)
 +#define HF_NMI_MASK  (1 << 3)
  
  /*
   * Hardware virtualization extension instructions may fault if a
 diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
 index c605477..0a2b3f1 100644
 --- a/arch/x86/kvm/svm.c
 +++ b/arch/x86/kvm/svm.c
 @@ -1834,6 +1834,13 @@ static int cpuid_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
   return 1;
  }
  
 +static int iret_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
 +{
 + svm->vmcb->control.intercept &= ~(1UL << INTERCEPT_IRET);
 + svm->vcpu.arch.hflags &= ~HF_NMI_MASK;

Two minor issues:

++vcpu->stat.nmi_window_exits;

 + return 1;
 +}
 +
  static int invlpg_interception(struct vcpu_svm *svm, struct kvm_run *kvm_run)
  {
   if (emulate_instruction(&svm->vcpu, kvm_run, 0, 0, 0) != EMULATE_DONE)
 @@ -2111,6 +2118,7 @@ static int (*svm_exit_handlers[])(struct vcpu_svm *svm,
   [SVM_EXIT_VINTR]= interrupt_window_interception,
   /* [SVM_EXIT_CR0_SEL_WRITE] = emulate_on_interception, */
   [SVM_EXIT_CPUID]= cpuid_interception,
 + [SVM_EXIT_IRET] = iret_interception,
   [SVM_EXIT_INVD] = emulate_on_interception,
   [SVM_EXIT_HLT]  = halt_interception,
   [SVM_EXIT_INVLPG]   = invlpg_interception,
 @@ -2218,6 +2226,12 @@ static void pre_svm_run(struct vcpu_svm *svm)
   new_asid(svm, svm_data);
  }
  
 +static void svm_inject_nmi(struct vcpu_svm *svm)
 +{
 + svm->vmcb->control.event_inj = SVM_EVTINJ_VALID | SVM_EVTINJ_TYPE_NMI;
 + svm->vcpu.arch.hflags |= HF_NMI_MASK;
 + svm->vmcb->control.intercept |= (1UL << INTERCEPT_IRET);

and:

++svm->vcpu.stat.nmi_injections;

 +}
  
  static inline void svm_inject_irq(struct vcpu_svm *svm, int irq)
  {
 @@ -2269,6 +2283,14 @@ static void update_cr8_intercept(struct kvm_vcpu *vcpu)
   vmcb->control.intercept_cr_write |= INTERCEPT_CR8_MASK;
  }
  
 +static int svm_nmi_allowed(struct kvm_vcpu *vcpu)
 +{
 + struct vcpu_svm *svm = to_svm(vcpu);
 + struct vmcb *vmcb = svm->vmcb;
 + return !(vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK) &&
 + !(svm->vcpu.arch.hflags & HF_NMI_MASK);
 +}
 +
  static int svm_interrupt_allowed(struct kvm_vcpu *vcpu)
  {
   struct vcpu_svm *svm = to_svm(vcpu);
 @@ -2284,16 +2306,35 @@ static void enable_irq_window(struct kvm_vcpu *vcpu)
   svm_inject_irq(to_svm(vcpu), 0x0);
  }
  
 +static void enable_nmi_window(struct kvm_vcpu *vcpu)
 +{
 + struct vcpu_svm *svm = to_svm(vcpu);
 +
 + if (svm->vmcb->control.int_state & SVM_INTERRUPT_SHADOW_MASK)
 + enable_irq_window(vcpu);
 +}
 +
  static void svm_intr_inject(struct kvm_vcpu *vcpu)
  {
   /* try to reinject previous events if any */
 + if (vcpu->arch.nmi_injected) {
 + svm_inject_nmi(to_svm(vcpu));
 + return;
 + }
 +
   if (vcpu->arch.interrupt.pending) {
   svm_queue_irq(to_svm(vcpu), vcpu->arch.interrupt.nr);
   return;
   }
  
   /* try to inject new event if pending */
 - if (kvm_cpu_has_interrupt(vcpu)) {
 + if (vcpu->arch.nmi_pending) {
 + if (svm_nmi_allowed(vcpu)) {
 + vcpu->arch.nmi_pending = false;
 + vcpu->arch.nmi_injected = true;
 + svm_inject_nmi(vcpu);
 + }
 + } else if (kvm_cpu_has_interrupt(vcpu)) {
   if (svm_interrupt_allowed(vcpu)) {
   kvm_queue_interrupt(vcpu, kvm_cpu_get_interrupt(vcpu));
   svm_queue_irq(to_svm(vcpu), vcpu->arch.interrupt.nr);
 @@ -2312,7 +2353,10 @@ static void svm_intr_assist(struct kvm_vcpu *vcpu, struct kvm_run *kvm_run)
  
   svm_intr_inject(vcpu);
  
 - if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
 + /* enable NMI/IRQ window open exits if needed */
 + if (vcpu->arch.nmi_pending)
 + enable_nmi_window(vcpu);
 + else if (kvm_cpu_has_interrupt(vcpu) || req_int_win)
   enable_irq_window(vcpu);
  
  out:
 --
   Gleb.

[PATCH] kvm: x86: Drop request_nmi from stats

2009-04-20 Thread Jan Kiszka
The stats entry request_nmi is no longer used as the related user space
interface was dropped. So clean it up.

Signed-off-by: Jan Kiszka jan.kis...@siemens.com
---

 arch/x86/include/asm/kvm_host.h |1 -
 arch/x86/kvm/x86.c  |1 -
 2 files changed, 0 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 3fc4623..909f094 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -445,7 +445,6 @@ struct kvm_vcpu_stat {
u32 halt_exits;
u32 halt_wakeup;
u32 request_irq_exits;
-   u32 request_nmi_exits;
u32 irq_exits;
u32 host_state_reload;
u32 efer_reload;
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 95892f7..5519dd1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -91,7 +91,6 @@ struct kvm_stats_debugfs_item debugfs_entries[] = {
	{ "halt_wakeup", VCPU_STAT(halt_wakeup) },
	{ "hypercalls", VCPU_STAT(hypercalls) },
	{ "request_irq", VCPU_STAT(request_irq_exits) },
-	{ "request_nmi", VCPU_STAT(request_nmi_exits) },
	{ "irq_exits", VCPU_STAT(irq_exits) },
	{ "host_state_reload", VCPU_STAT(host_state_reload) },
	{ "efer_reload", VCPU_STAT(efer_reload) },


Re: kvm 84 update

2009-04-20 Thread Brent A Nelson
Well, I just tried disabling smp on my guest, and live migration worked 
fine.  I see that there are already SMP complaints with live migration in 
KVM:84 in the bug database, so I guess this is just another "me, too".


Is this expected to be fixed in the soon-to-be-released KVM:85?

Thanks,

Brent

PS The migrated guest still uses its full memory allocation, even though 
the original was nowhere close.
PPS The perceptible delay in an open ssh session was similar between the 
live and offline migrations.  I guess this is beyond KVM's control, though, 
and more dependent on how quickly gratuitous arp is being handled, or 
something else to do with the transition of the IP/MAC between nodes. 
It's just a handful of seconds, either way.


On Fri, 17 Apr 2009, Brent A Nelson wrote:

I, too, am trying out the KVM 84 build for Hardy.  I didn't have any problems 
with networking (I'm using a bridge interface), except I had to specify 
script=/etc/kvm/kvm-ifup, unlike with KVM 62.  Without specifying any 
script=, kvm looks for /etc/kvm-ifup, which does not exist. Perhaps this is 
just a glitch with the way the Ubuntu folks built the package.


However, live migration simply doesn't work right.  When it completes, the 
migrated guest always immediately crashes in some fashion (I've had an oops, 
a panic, and a reboot).  I can send the original a cont command, however, 
and it will resume as if nothing had happened.


If I first stop the guest, then migration works great, and it's fast enough 
to hardly notice the brief unresponsiveness.  However, there's a nuisance 
here, too.  The migrated VM uses the full amount of memory allocated to it, 
even though the original may have been using only a small fraction.  I tried 
issuing a balloon command to shrink it; the guest VM did see the smaller 
memory size, but the kvm process itself did not change memory consumption. 
When I used the balloon command to set it back to the original, full size, 
the guest VM saw its memory shrink down to nothing until all processes had 
been killed by the out-of-memory killer.


info balloon tells me:
Using KVM without synchronous MMU, ballooning disabled

So, I assume I don't have everything I need to try ballooning properly to see 
if I can reduce the excess memory consumed by the migrated guest VM.


Thanks,

Brent Nelson
Director of Computing
Dept. of Physics
University of Florida




Re: [x86] - technical questions about HV implementation on Intel VT

2009-04-20 Thread Eric Lacombe
Hi,

I reviewed my code (modified some things and added missing features) and ran more 
tests, but I'm stuck with the same problem.
Still, all the tests I've done seem to freeze my machine when files are 
accessed.

When I try the commands echo or pwd in the console (X is not started), the 
machine behaves nicely. When I try completion (with double-tab) on a command, 
it also works. But when I try, for instance, "more help.c", the machine 
freezes, likewise when I try "more hel" + double-tab.

I really would appreciate some help on this.
Please, could you tell me what I could check (because I already checked a lot 
of things and can't figure out what happens)? I will also give you any 
information you need.

(Recall: when loaded, my module uses VT-x to enter VMX root operation, then it 
creates a VMCS in order to execute the OS inside a VM.)

Thanks in advance for your response.

Eric Lacombe

On Tuesday 14 April 2009 at 14:24:01, Eric Lacombe wrote:
 Hi,

 I analysed some of my logs and saw that sometimes two successive VM-exits
 handle exactly the same instruction (a wrmsr with the same MSR ID and
 data), like below. Is that not strange? And could it help narrow down the
 problem I have (the freeze of the machine)?

 I can understand that you have no time to spend on my problem, but could
 someone give me some ideas on what I could check/print/... in order to fix
 my problem?

 Thanks in advance,

   Eric Lacombe

 I also join the complete log file from which is extracted this sample.

 ## Hytux: VM-EXIT (#24) ##
 Hytux: interruptibility_state=0, activity_state=0
 Hytux: EXIT_REASON = 0x20, EXIT_QUALIF = 0x0
 Hytux: INTR_INFO is not valid
 Hytux: instruction_len=0x2
 Hytux[24]: handle_wrmsr: ecx=0xc100 (MSR id), data=0x7f83583e56e0
 ## GUEST REGISTERS DUMP ##
 --
 GRFLAGS(VMCS): 0x0002 GRSP(VMCS): 0x88007854da18
 GRIP(VMCS): 0x80209c07
 HRFLAGS: 0x0086 HRSP: 0x88006d5cd000
 --
 rax: 0x7f83583e56e0 rbx: 0x88007d127280 rcx: 0xc100
 rdx: 0x7f83
 rsi: 0x rdi: 0x88007d03e050 rbp: 0x88007d03e050
 rsp: 0x88007854da18
 r8: 0x r9: 0x r10: 0x88007d38c640 r11:
 0x0001
 r12: 0x r13: 0x8800790786c0 r14: 0x80818fa0
 r15: 0x
 --
 cr2: 0x00615618 cr3(VMCS): 0x78532000
 --
 ## Hytux: VM-EXIT (#25) ##
 Hytux: interruptibility_state=0, activity_state=0
 Hytux: EXIT_REASON = 0x20, EXIT_QUALIF = 0x0
 Hytux: INTR_INFO is not valid
 Hytux: instruction_len=0x2
 Hytux[25]: handle_wrmsr: ecx=0xc100 (MSR id), data=0x7f83583e56e0
 ## GUEST REGISTERS DUMP ##
 --
 GRFLAGS(VMCS): 0x0002 GRSP(VMCS): 0x88007854da18
 GRIP(VMCS): 0x80209c07
 HRFLAGS: 0x0086 HRSP: 0x88006d5cd000
 --
 rax: 0x7f83583e56e0 rbx: 0x88007d127280 rcx: 0xc100
 rdx: 0x7f83
 rsi: 0x rdi: 0x88007d03e050 rbp: 0x88007d03e050
 rsp: 0x88007854da18
 r8: 0x r9: 0x r10: 0x88007d38c640 r11:
 0x80241868
 r12: 0x r13: 0x880079590980 r14: 0x80818fa0
 r15: 0x
 --
 cr2: 0x00615618 cr3(VMCS): 0x78532000
 --

 On Tuesday 7 April 2009 at 19:26:30, Eric Lacombe wrote:
  Hello,
 
  I forgot to mention that my module only supports a single processor
  (that's why I run a kernel with SMP disabled).
 
  I was able to run my module on two different machines (both are
  core2-based) and that led to the same outcome: both machines freeze
  without any bad printk messages :/
 
  I attach another log file where the guest registers are always dumped for
  each VM-exit. The log file begins with the loading of the module and ends
  when the system crashes.
 
  Could someone please look at it and maybe give a hint on what occurs?
 
  If you need other information, just ask me and I will fulfil your needs.
 
  Best regards,
 
  Eric
 
   On Tuesday 24 March 2009 at 18:22:11, Eric Lacombe wrote:
   Hello,
  
   I work on the implementation of a particular hypervisor for my PhD
   and I face a problem I have not resolved yet. Let me explain it.
  
   I have a module that, when it is loaded, enables VMX mode and loads the
   currently running kernel into a VM. The code seems to work to some extent.
   For the moment, I just handle the mandatory VM-exits (with my CPU: cr3,
   msr). I log information for each 

Re: [x86] - technical questions about HV implementation on Intel VT

2009-04-20 Thread Avi Kivity

Eric Lacombe wrote:

Hi,

I reviewed my code (modify some things and add missing features) and made more 
tests, but I'm stuck with the same problem.
Nonetheless, all the tests I've done seem to freeze my machine when files are 
used.


When I try the commands echo, pwd in the console (X is not started), the 
machine behaves nicely. When I try completion (with double-tab) on a command, 
it also works. But, when I try for instance more help.c, the machine 
freezes, likewise when I try more hel+double-tab.
  


echo and pwd are part of bash, so they are probably in memory.  I guess 
once you go to disk things fail.


Try to boot the entire OS from initramfs (and keep it there).


I really would appreciate some help on this.
  


This is much too complicated for drive-by debugging.

Please, could you tell me what I could check (because I already checked a lot 
of things and can't figure out what happens)? I would also give you all the 
information you need.


(Recall: When loaded, my module use VT-x to go on vmx root operation, then it 
creates a vmcs in order to execute the OS inside a VM.)
  


I imagine you have interrupts working properly?  Does 'watch -d cat 
/proc/interrupts' give the expected results (run it before you enter vmx 
to load it into cache)?


Are you virtualizing memory, or does the guest manipulate page tables 
directly?


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



[ kvm-Bugs-2776577 ] Call trace after live migration of 32 bit guest under load

2009-04-20 Thread SourceForge.net
Bugs item #2776577, was opened at 2009-04-20 13:23
Message generated for change (Tracker Item Submitted) made by tljohnsn
You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2776577&group_id=180599

Please note that this message will contain a full copy of the comment thread,
including the initial issue submission, for this request,
not just the latest update.
Category: None
Group: None
Status: Open
Resolution: None
Priority: 5
Private: No
Submitted By: Trent Johnson (tljohnsn)
Assigned to: Nobody/Anonymous (nobody)
Summary: Call trace after live migration of 32 bit guest under load

Initial Comment:
CPU: model name  : Intel(R) Xeon(R) CPU   L5335  @ 2.00GHz

Host is running centos 5.3 2.6.18-128.1.6.el5 kvm-84
Guest is running 32 bit centos 4.7 kernel 2.6.9-78.0.13.ELsmp

Start guest with:
qemu-kvm -M pc -m 512 -smp 1 -name c4test -uuid
ace28629-2d19-89aa-036b-bea6f46a4909 -monitor pty -pidfile
/var/run/libvirt/qemu//c4test.pid -boot c -drive
file=/makersvm/c4test.img,if=ide,index=0,boot=on -net
nic,macaddr=54:52:00:16:1b:76,vlan=0 -net
tap,script=/etc/kvm/qemu-ifup,vlan=0,ifname=vnet1 -serial pty
-parallel none -usb -vnc 0.0.0.0:1 -k en-us

Start building a kernel on the guest - then:
migrate tcp:dest.example.com:

It fails with the attached log messages, however the guest can be
continued on the source host.  Save and resume work fine if I copy the
state file manually.

The problem does not go away with -no-kvm-irqchip or -no-kvm-pit
switch.  The problem does not happen with the -no-kvm switch.

Thanks,
Trent


--

You can respond by visiting: 
https://sourceforge.net/tracker/?func=detail&atid=893831&aid=2776577&group_id=180599


[KVM-AUTOTEST] [PATCH] Iterate over reboot

2009-04-20 Thread supriya kannery

A patch for iterating over VM reboot

- Supriya Kannery,
 LTC, IBM

diff -Naurp kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.cfg.sample kvm-autotest.mod/client/tests/kvm_runtest_2/kvm_tests.cfg.sample
--- kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.cfg.sample	2009-04-13 17:20:56.0 +0530
+++ kvm-autotest.mod/client/tests/kvm_runtest_2/kvm_tests.cfg.sample	2009-04-20 23:22:33.0 +0530
@@ -50,6 +50,7 @@ variants:
 reboot = yes
 extra_params += " -snapshot"
 kill_vm_on_error = yes
+reboot_iterations = 1
 
 - migrate:  install setup
 type = migration
diff -Naurp kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.py kvm-autotest.mod/client/tests/kvm_runtest_2/kvm_tests.py
--- kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.py	2009-04-13 17:20:56.0 +0530
+++ kvm-autotest.mod/client/tests/kvm_runtest_2/kvm_tests.py	2009-04-20 23:28:08.0 +0530
@@ -31,25 +31,28 @@ def run_boot(test, params, env):
     kvm_log.info("Logged in")
 
     if params.get("reboot") == "yes":
-        session.sendline(params.get("cmd_reboot"))
-        kvm_log.info("Reboot command sent; waiting for guest to go down...")
+        iteration = int(params.get("reboot_iterations", 1))
+        while iteration:
+            session.sendline(params.get("cmd_reboot"))
+            kvm_log.info("Reboot command sent; waiting for guest to go down...")
+
+            if not kvm_utils.wait_for(lambda: not session.is_responsive(), 120, 0, 1):
+                message = "Guest refuses to go down"
+                kvm_log.error(message)
+                raise error.TestFail, message
+
+            session.close()
+
+            kvm_log.info("Guest is down; waiting for it to go up again...")
+
+            session = kvm_utils.wait_for(vm.ssh_login, 120, 0, 2)
+            if not session:
+                message = "Could not log into guest after reboot"
+                kvm_log.error(message)
+                raise error.TestFail, message
 
-        if not kvm_utils.wait_for(lambda: not session.is_responsive(), 120, 0, 1):
-            message = "Guest refuses to go down"
-            kvm_log.error(message)
-            raise error.TestFail, message
-
-        session.close()
-
-        kvm_log.info("Guest is down; waiting for it to go up again...")
-
-        session = kvm_utils.wait_for(vm.ssh_login, 120, 0, 2)
-        if not session:
-            message = "Could not log into guest after reboot"
-            kvm_log.error(message)
-            raise error.TestFail, message
-
-        kvm_log.info("Guest is up again")
+            kvm_log.info("Guest is up again")
+            iteration -= 1
 
     session.close()
 


Re: [KVM-AUTOTEST] [PATCH] Iterate over reboot

2009-04-20 Thread Ryan Harper
* supriya kannery supri...@in.ibm.com [2009-04-20 13:49]:
 A patch for iterating over VM reboot
 
 - Supriya Kannery,
  LTC, IBM

Needs a Signed-off-by:


 diff -Naurp kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.cfg.sample 
 kvm-autotest.mod/client/tests/kvm_runtest_2/kvm_tests.cfg.sample
 --- kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.cfg.sample  
 2009-04-13 17:20:56.0 +0530
 +++ kvm-autotest.mod/client/tests/kvm_runtest_2/kvm_tests.cfg.sample  
 2009-04-20 23:22:33.0 +0530
 @@ -50,6 +50,7 @@ variants:
  reboot = yes
  extra_params +=  -snapshot
  kill_vm_on_error = yes
 +reboot_iterations = 1
 
  - migrate:  install setup
  type = migration
 diff -Naurp kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.py 
 kvm-autotest.mod/client/tests/kvm_runtest_2/kvm_tests.py
 --- kvm-autotest/client/tests/kvm_runtest_2/kvm_tests.py  2009-04-13 
 17:20:56.0 +0530
 +++ kvm-autotest.mod/client/tests/kvm_runtest_2/kvm_tests.py  2009-04-20 
 23:28:08.0 +0530
 @@ -31,25 +31,28 @@ def run_boot(test, params, env):
  kvm_log.info(Logged in)
 
  if params.get(reboot) == yes:
 -session.sendline(params.get(cmd_reboot))
 -kvm_log.info(Reboot command sent; waiting for guest to go down...)
 +iteration = int(params.get(reboot_iterations,1))
 +while iteration:
 +session.sendline(params.get(cmd_reboot))
 +kvm_log.info(Reboot command sent; waiting for guest to go 
 down...)
 +
 +if not kvm_utils.wait_for(lambda: not session.is_responsive(), 
 120, 0, 1):
 +message = Guest refuses to go down
 +kvm_log.error(message)
 +raise error.TestFail, message
 +
 +session.close()
 +
 +kvm_log.info(Guest is down; waiting for it to go up again...)
 +
 +session = kvm_utils.wait_for(vm.ssh_login, 120, 0, 2)
 +if not session:
 +message = Could not log into guest after reboot
 +kvm_log.error(message)
 +raise error.TestFail, message
 
 -if not kvm_utils.wait_for(lambda: not session.is_responsive(), 120, 
 0, 1):
 -message = Guest refuses to go down
 -kvm_log.error(message)
 -raise error.TestFail, message
 -
 -session.close()
 -
 -kvm_log.info(Guest is down; waiting for it to go up again...)
 -
 -session = kvm_utils.wait_for(vm.ssh_login, 120, 0, 2)
 -if not session:
 -message = Could not log into guest after reboot
 -kvm_log.error(message)
 -raise error.TestFail, message
 -
 -kvm_log.info(Guest is up again)
 +kvm_log.info(Guest is up again)
 +iteration -= 1
 
  session.close()
 


-- 
Ryan Harper
Software Engineer; Linux Technology Center
IBM Corp., Austin, Tx
ry...@us.ibm.com


Re: [PATCH 0/4] ksm - dynamic page sharing driver for linux v3

2009-04-20 Thread Nick Piggin
On Friday 17 April 2009 17:08:07 Jared Hulbert wrote:
  As everyone knows, my favourite thing is to say nasty things about any
  new feature that adds complexity to common code. I feel like crying to
  hear about how many more instances of MS Office we can all run, if only
  we apply this patch. And the poorly written HPC app just sounds like
  scrapings from the bottom of the justification barrel.
 
  I'm sorry, maybe I'm way off with my understanding of how important
  this is. There isn't too much help in the changelog. A discussion of
  where the memory savings come from, how far things like
  sharing of fs images or ballooning go, and how much extra savings we
  get from this... with people from other hypervisors involved as well.
  Have I missed this kind of discussion?
 
 Nick,
 
  I don't know about other hypervisors, fs images and ballooning, but I have
  tried this out.  It works.  It works on apps I don't consider poorly
  written.  I'm very excited about this.  I got a 10% saving in a
  roughly off-the-shelf embedded system.  No user-noticeable performance
  impact.

OK well that's what I want to hear. Thanks, that means a lot to me.