From: Sheng Yang sh...@linux.intel.com
We can use them in x86.c and vmx.c now...
Signed-off-by: Sheng Yang sh...@linux.intel.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
arch/x86/kvm/mmu.c |4
arch/x86/kvm/mmu.h |4
2 files changed, 4 insertions(+), 4
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
The explanation of write_emulated is confused with
that of read_emulated. This patch fixes it.
Signed-off-by: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
This is the first of four batches of patches for the 2.6.34 merge window. KVM
changes for this cycle include:
- rdtscp support
- powerpc server-class updates
- much improved large-guest scaling (now up to 64 vcpus)
- improved guest fpu handling
- initial Hyper-V emulation
- better swapping
From: Alexander Graf ag...@suse.de
The PowerPC C ABI defines that registers r14-r31 need to be preserved across
function calls. Since our exit handler is written in C, we can make use of that
and don't need to reload r14-r31 on every entry/exit cycle.
This technique is also used in the BookE
From: Alexander Graf ag...@suse.de
The code to unset HID5.dcbz32 is broken.
This patch makes it do the right rotate magic.
Signed-off-by: Alexander Graf ag...@suse.de
Reported-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Signed-off-by: Avi Kivity a...@redhat.com
---
From: Alexander Graf ag...@suse.de
Using an RFI in IR=1 is dangerous. We need to set two SRRs and then do an RFI
without getting interrupted at all, because every interrupt could potentially
overwrite the SRR values.
Fortunately, we don't need to RFI in at least this particular case of the code,
From: Sheng Yang sh...@linux.intel.com
Signed-off-by: Sheng Yang sh...@linux.intel.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
arch/x86/include/asm/vmx.h |1 +
arch/x86/kvm/mmu.c |8 +---
arch/x86/kvm/vmx.c | 11 ++-
3 files changed, 16
From: Takuya Yoshikawa yoshikawa.tak...@oss.ntt.co.jp
There are two spellings of writable in
arch/x86/kvm/mmu.c and paging_tmpl.h.
This patch renames is_writeble_pte() to is_writable_pte()
and makes grepping easy.
The new name is consistent with its definition:
return pte
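For reference, the renamed predicate is tiny; a minimal userspace sketch (PT_WRITABLE_MASK mirrors the KVM mmu definition, bit 1 of an x86 PTE):

```c
#include <assert.h>
#include <stdint.h>

/* Bit 1 of an x86 PTE is the writable bit (PT_WRITABLE_MASK in KVM's mmu). */
#define PT_WRITABLE_MASK (1ULL << 1)

/* Renamed predicate: one consistent spelling, "writable", so grep finds it. */
static inline int is_writable_pte(uint64_t pte)
{
	return pte & PT_WRITABLE_MASK;
}
```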
From: Alexander Graf ag...@suse.de
When our guest starts using either the FPU, Altivec or VSX we need to make
sure Linux knows about it and sneak into its process switching code
accordingly.
This patch makes accesses to the above parts of the system work inside the
VM.
Signed-off-by: Alexander
From: Gleb Natapov g...@redhat.com
Implement HYPER-V apic MSRs. Spec defines three MSRs that speed-up
access to EOI/TPR/ICR apic registers for PV guests.
Signed-off-by: Gleb Natapov g...@redhat.com
Signed-off-by: Vadim Rozenfeld vroze...@redhat.com
Signed-off-by: Avi Kivity a...@redhat.com
---
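The three MSRs map straight onto APIC operations; a hedged sketch (MSR values are taken from the Hyper-V spec and should be checked against arch/x86/include/asm/hyperv.h; the dispatcher is illustrative, not the kernel's code):

```c
#include <assert.h>
#include <stdint.h>

/* Synthetic MSR numbers per the Hyper-V spec (verify against asm/hyperv.h). */
#define HV_X64_MSR_EOI 0x40000070
#define HV_X64_MSR_ICR 0x40000071
#define HV_X64_MSR_TPR 0x40000072

enum apic_op { APIC_OP_NONE, APIC_OP_EOI, APIC_OP_ICR, APIC_OP_TPR };

/* Toy dispatcher: a wrmsr on one of these MSRs becomes a single APIC
 * access, instead of several MMIO exits for the PV guest. */
static enum apic_op hv_msr_to_apic_op(uint32_t msr)
{
	switch (msr) {
	case HV_X64_MSR_EOI: return APIC_OP_EOI;
	case HV_X64_MSR_ICR: return APIC_OP_ICR;
	case HV_X64_MSR_TPR: return APIC_OP_TPR;
	default:             return APIC_OP_NONE;
	}
}
```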
From: Alexander Graf ag...@suse.de
Commit 7d01b4c3ed2bb33ceaf2d270cb4831a67a76b51b introduced PACA backed vcpu
values. Since that commit, when a userspace app sets GPRs before the vcpu is
actually first loaded, the set values get discarded.
This is because vcpu_load loads them from the vcpu
From: Alexander Graf ag...@suse.de
We keep a copy of the MSR around that we use when we go into the guest context.
That copy is basically the normal process MSR flags OR some allowed guest
specified MSR flags. We also AND the external providers into this, so we get
traps on FPU usage when we
From: Gleb Natapov g...@redhat.com
Provide HYPER-V related defines that will be used by following patches.
Signed-off-by: Gleb Natapov g...@redhat.com
Signed-off-by: Vadim Rozenfeld vroze...@redhat.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/include/asm/hyperv.h | 186
From: Gleb Natapov g...@redhat.com
Minimum HYPER-V implementation should have GUEST_OS_ID, HYPERCALL and
VP_INDEX MSRs.
[avi: fix build on i386]
Signed-off-by: Gleb Natapov g...@redhat.com
Signed-off-by: Vadim Rozenfeld vroze...@redhat.com
Signed-off-by: Avi Kivity a...@redhat.com
---
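Of the three, the HYPERCALL MSR carries structure: the guest writes the page frame of the hypercall page plus an enable bit. A minimal decoding sketch (MSR numbers per the Hyper-V spec, to be checked against asm/hyperv.h):

```c
#include <assert.h>
#include <stdint.h>

/* Synthetic MSR numbers per the Hyper-V spec (verify against asm/hyperv.h). */
#define HV_X64_MSR_GUEST_OS_ID 0x40000000
#define HV_X64_MSR_HYPERCALL   0x40000001
#define HV_X64_MSR_VP_INDEX    0x40000002

/* HYPERCALL MSR layout: bit 0 = enable, bits 12.. = guest page frame of
 * the hypercall page. */
static inline int hv_hypercall_enabled(uint64_t msr_val)
{
	return msr_val & 1;
}

static inline uint64_t hv_hypercall_gpa(uint64_t msr_val)
{
	return msr_val & ~0xfffULL;	/* page-aligned guest physical address */
}
```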
From: Gleb Natapov g...@redhat.com
Windows issues this hypercall after the guest has been spinning on a
spinlock for too many iterations.
Signed-off-by: Gleb Natapov g...@redhat.com
Signed-off-by: Vadim Rozenfeld vroze...@redhat.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/x86.c |
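The guest-side pattern looks roughly like the sketch below (the hypercall code is from the Hyper-V spec; the threshold and the hv_hypercall stand-in are illustrative, not from either Windows or KVM):

```c
#include <assert.h>
#include <stdint.h>

#define HVCALL_NOTIFY_LONG_SPIN_WAIT 0x0008	/* per the Hyper-V spec */
#define SPIN_THRESHOLD 4096			/* illustrative threshold */

static uint16_t last_hypercall;	/* test hook standing in for a real vmcall */

static void hv_hypercall(uint16_t code)
{
	last_hypercall = code;
}

/* Guest-side sketch: spin on the lock, and past a threshold tell the
 * hypervisor, so it can schedule the vcpu that holds the lock instead
 * of burning cycles here. */
static void spin_lock_notify(volatile int *lock)
{
	unsigned int iters = 0;

	while (__sync_lock_test_and_set(lock, 1)) {
		if (++iters == SPIN_THRESHOLD)
			hv_hypercall(HVCALL_NOTIFY_LONG_SPIN_WAIT);
	}
}
```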
From: Alexander Graf ag...@suse.de
We need a way to give up only VSX in KVM, so let's export that
specific function to module space.
Signed-off-by: Alexander Graf ag...@suse.de
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/powerpc/kernel/ppc_ksyms.c |1 +
1 files changed, 1
Defer fpu deactivation as much as possible - if the guest fpu is loaded, keep
it loaded until the next heavyweight exit (where we are forced to unload it).
This reduces unnecessary exits.
We also defer fpu activation on clts; while clts signals the intent to use the
fpu, we can't be sure the
If two conditions apply:
- no bits outside TS and EM differ between the host and guest cr0
- the fpu is active
then we can activate the selective cr0 write intercept and drop the
unconditional cr0 read and write intercept, and allow the guest to run
with the host fpu state. This reduces cr0
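The two conditions above boil down to a one-line test; a sketch with illustrative names (the CR0 bit values are architectural):

```c
#include <assert.h>
#include <stdint.h>

#define X86_CR0_PE 0x00000001ULL
#define X86_CR0_MP 0x00000002ULL
#define X86_CR0_EM 0x00000004ULL
#define X86_CR0_TS 0x00000008ULL

/* Selective cr0 write interception is safe only when host and guest cr0
 * agree on every bit outside TS/EM and the guest fpu is loaded.
 * Function name is illustrative, not the kernel's. */
static int can_use_selective_cr0(uint64_t host_cr0, uint64_t guest_cr0,
				 int fpu_active)
{
	uint64_t ignore = X86_CR0_TS | X86_CR0_EM;

	return fpu_active && ((host_cr0 ^ guest_cr0) & ~ignore) == 0;
}
```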
init_vmcb() sets up the intercepts as if the fpu is active, so initialize it
there. This avoids an INIT from setting up intercepts inconsistent with
fpu_active.
Acked-by: Joerg Roedel joerg.roe...@amd.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/svm.c |3 ++-
1 files
Currently we don't intercept cr0 at all when npt is enabled. This improves
performance but requires us to activate the fpu at all times.
Remove this behaviour in preparation for adding selective cr0 intercepts.
Acked-by: Joerg Roedel joerg.roe...@amd.com
Signed-off-by: Avi Kivity
From: Alexander Graf ag...@suse.de
SRR1 stores more information than just the MSR value. It also stores
valuable information about the type of interrupt we received, for
example whether the storage interrupt we just got was because of a
missing htab entry or not.
We use that information to speed
Since we'd like to allow the guest to own a few bits of cr0 at times, we need
to know when we access those bits.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/emulate.c|6 +++---
arch/x86/kvm/kvm_cache_regs.h | 10 ++
arch/x86/kvm/mmu.c|2 +-
Follow the hardware.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/x86.c |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 1de2ad7..1ad34d1 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -428,6 +428,8
From: Roel Kluin roel.kl...@gmail.com
kvm_get_exit_data() cannot return a NULL pointer.
Signed-off-by: Roel Kluin roel.kl...@gmail.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/ia64/kvm/kvm_fw.c | 28 +---
1 files changed, 13 insertions(+), 15 deletions(-)
Now that we can allow the guest to play with cr0 when the fpu is loaded,
we can enable lazy fpu when npt is in use.
Acked-by: Joerg Roedel joerg.roe...@amd.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/svm.c |8
1 files changed, 0 insertions(+), 8 deletions(-)
diff
If the guest fpu is loaded, there is nothing interesting about cr0.ts; let
the guest play with it as it will. This makes context switches between fpu
intensive guest processes faster, as we won't trap the clts and cr0 write
instructions.
[marcelo: fix cr0 read shadow update on fpu deactivation;
From: Alexander Graf ag...@suse.de
Linux contains quite some bits of code to load FPU, Altivec and VSX lazily for
a task. It calls those bits in real mode, coming from an interrupt handler.
For KVM we better reuse those, so let's wrap a bit of trampoline magic around
them and then we can call
From: Alexander Graf ag...@suse.de
An SLB entry contains two pieces of information related to size:
1) PTE size
2) SLB size
The L bit defines that the PTE is large (usually meaning 16MB), while
SLB_VSID_B_1T defines that the SLB entry should span 1GB instead of the
default 256MB.
Apparently I messed things
Instead of selecting TS and MP as the comments say, the macro included TS and
PE. Luckily the macro is unused now, but fix it in order to save a few hours of
debugging from anyone who attempts to use it.
Acked-by: Joerg Roedel joerg.roe...@amd.com
Signed-off-by: Avi Kivity a...@redhat.com
---
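The fix is a one-bit change; a sketch (the mask name follows svm.c but treat it as illustrative — the CR0 bit values are architectural):

```c
#include <assert.h>

#define X86_CR0_PE 0x00000001
#define X86_CR0_MP 0x00000002
#define X86_CR0_TS 0x00000008

/* Buggy form: the comment said TS and MP, but the code selected TS and PE. */
#define SVM_CR0_SELECTIVE_MASK_BUGGY (X86_CR0_TS | X86_CR0_PE)
/* Fixed form, matching the comment: */
#define SVM_CR0_SELECTIVE_MASK       (X86_CR0_TS | X86_CR0_MP)
```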
From: Alexander Graf ag...@suse.de
When we get a program interrupt in guest kernel mode, we try to emulate the
instruction.
If that fails, we report to the user and try again - at the exact same
instruction pointer. So if the guest kernel really does trigger an invalid
instruction, we
We will use this later to give the guest ownership of cr0.ts.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/include/asm/kvm_host.h |2 ++
arch/x86/kvm/kvm_cache_regs.h |2 ++
arch/x86/kvm/svm.c |5 +
arch/x86/kvm/vmx.c |9 +
4
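With some cr0 bits guest-owned, the value the guest observes mixes the real register (for owned bits) with the read shadow (for the rest). A sketch with illustrative names:

```c
#include <assert.h>
#include <stdint.h>

#define X86_CR0_TS 0x8ULL

/* Names illustrative: for guest-owned bits, read the hardware register;
 * for everything else, read the shadow maintained by the hypervisor. */
static uint64_t guest_cr0_read(uint64_t hw_cr0, uint64_t shadow_cr0,
			       uint64_t guest_owned_bits)
{
	return (hw_cr0 & guest_owned_bits) | (shadow_cr0 & ~guest_owned_bits);
}
```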
clts writes cr0.ts; lmsw writes cr0[0:15] - record that in ftrace.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/vmx.c |5 -
1 files changed, 4 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 9b197b2..7c7b2ee 100644
---
From: Alexander Graf ag...@suse.de
All code in PPC KVM currently accesses gprs in the vcpu struct directly.
While there's nothing wrong with that wrt the current way gprs are stored
and loaded, it doesn't suffice for the PACA acceleration that will follow
in this patchset.
So let's just create
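The shape of such accessors is simple; a minimal userspace sketch (the struct is a stand-in — the real layout lives under arch/powerpc/ — and the point is that callers go through helpers so the backing store can later move, e.g. into the PACA, without touching them):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the real vcpu struct. */
struct kvm_vcpu {
	uint64_t gpr[32];
};

/* Accessors in the style this patch introduces: every gpr access is
 * funneled through a helper instead of poking vcpu->gpr[] directly. */
static inline void kvmppc_set_gpr(struct kvm_vcpu *vcpu, int num, uint64_t val)
{
	vcpu->gpr[num] = val;
}

static inline uint64_t kvmppc_get_gpr(struct kvm_vcpu *vcpu, int num)
{
	return vcpu->gpr[num];
}
```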
From: Alexander Graf ag...@suse.de
We're being horribly racy right now. All the entry and exit code hijacks
random fields from the PACA that could easily be used by different code in
case we get interrupted, for example by a #MC or even page fault.
After discussing this with Ben, we figured it's
From: Alexander Graf ag...@suse.de
We now have helpers for the GPRs, so let's also add some for CR and XER.
Having them in the PACA simplifies code a lot, as we don't need to care
about where to store CC or not to overflow any integers.
Signed-off-by: Alexander Graf ag...@suse.de
Signed-off-by:
From: Alexander Graf ag...@suse.de
When we need to reinject a program interrupt into the guest, we also need to
reinject the corresponding flags into the guest.
Signed-off-by: Alexander Graf ag...@suse.de
Reported-by: Benjamin Herrenschmidt b...@kernel.crashing.org
Signed-off-by: Avi Kivity
From: Alexander Graf ag...@suse.de
To fetch the last instruction we were interrupted on, we enable DR in early
exit code, where we are still in a very transitional phase between guest
and host state.
Most of the time this seemed to work, but another CPU can easily flush our
TLB and HTAB which
From: Alexander Graf ag...@suse.de
Book3S needs some flags in SRR1 to get to know details about an interrupt.
One such example is the trap instruction. It tells the guest kernel that
a program interrupt is due to a trap using a bit in SRR1.
This patch implements the above behavior, making WARN_ON
From: Alexander Graf ag...@suse.de
Currently we're racy when doing the transition from IR=1 to IR=0, from
the module memory entry code to the real mode SLB switching code.
To work around that I took a look at the RTAS entry code which is faced
with a similar problem and did the same thing:
A
From: Sheng Yang sh...@linux.intel.com
Then the callback can provide the maximum supported large page level, which
is more flexible.
Also, move gb page support into x86_64-specific code.
Signed-off-by: Sheng Yang sh...@linux.intel.com
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
---
From: Jan Kiszka jan.kis...@siemens.com
We intercept #BP while in guest debugging mode. As VM exits due to
intercepted exceptions do not necessarily come with valid
idt_vectoring, we have to update event_exit_inst_len explicitly in such
cases. At least in the absence of migration, this ensures
From: Jan Kiszka jan.kis...@siemens.com
VMX requires a properly set instruction length VM entry field when
trying to inject soft exception and interrupts. We have to preserve this
state across VM save/restore to avoid breaking the re-injection of such
events on Intel. So add it to the new VCPU
On 02/13/2010 11:51 AM, Jan Kiszka wrote:
From: Jan Kiszkajan.kis...@siemens.com
VMX requires a properly set instruction length VM entry field when
trying to inject soft exception and interrupts. We have to preserve this
state across VM save/restore to avoid breaking the re-injection of such
Avi Kivity wrote:
On 02/13/2010 11:51 AM, Jan Kiszka wrote:
From: Jan Kiszkajan.kis...@siemens.com
VMX requires a properly set instruction length VM entry field when
trying to inject soft exception and interrupts. We have to preserve this
state across VM save/restore to avoid breaking the
Hi!!
I ran into a curious situation that led me to start digging into the
qemu-kvm source code.
I am seeing very high audio latency, with occasional dropouts during
playback. On the qemu lists they told me qemu is synchronous, so if it
takes long to draw a high screen resolution, audio could be affected.
As I
On Thu, Feb 11, 2010 at 12:09:12AM +0100, Paolo Bonzini wrote:
This patch series morphs the code in qemu-kvm's eventfd so that it looks
like the code in upstream qemu. Patch 4 is not yet in upstream QEMU,
I'm submitting it first to qemu-kvm to avoid conflicts.
Paolo Bonzini (4):
morph
On Sat, Feb 13, 2010 at 10:51:40AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
VMX requires a properly set instruction length VM entry field when
trying to inject soft exception and interrupts. We have to preserve this
state across VM save/restore to avoid breaking the
On Mon, Feb 08, 2010 at 06:49:39PM +1030, Rusty Russell wrote:
On Sun, 7 Feb 2010 07:37:49 pm Michael S. Tsirkin wrote:
On Mon, Feb 01, 2010 at 07:21:02PM +0200, Michael S. Tsirkin wrote:
vhost-net only uses memory barriers to control SMP effects
(communication with userspace potentially
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 10:51:40AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
VMX requires a properly set instruction length VM entry field when
trying to inject soft exception and interrupts. We have to preserve this
state across VM save/restore to
How's that? Feel free to upgrade it.
--
Document that partially emulated instructions leave the guest state
inconsistent, and that the kernel must complete operations before
checking for pending signals.
Signed-off-by: Marcelo Tosatti mtosa...@redhat.com
diff --git
On Sat, Feb 13, 2010 at 06:49:44PM +0100, Jan Kiszka wrote:
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 10:51:40AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
VMX requires a properly set instruction length VM entry field when
trying to inject soft exception and
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 06:49:44PM +0100, Jan Kiszka wrote:
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 10:51:40AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
VMX requires a properly set instruction length VM entry field when
trying to inject soft
On Sat, Feb 13, 2010 at 07:41:35PM +0100, Jan Kiszka wrote:
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 06:49:44PM +0100, Jan Kiszka wrote:
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 10:51:40AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
VMX requires a properly
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 07:41:35PM +0100, Jan Kiszka wrote:
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 06:49:44PM +0100, Jan Kiszka wrote:
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 10:51:40AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
VMX
On Sat, Feb 13, 2010 at 08:20:41PM +0100, Jan Kiszka wrote:
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 07:41:35PM +0100, Jan Kiszka wrote:
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 06:49:44PM +0100, Jan Kiszka wrote:
Gleb Natapov wrote:
On Sat, Feb 13, 2010 at 10:51:40AM +0100, Jan
On Mon, Feb 08, 2010 at 11:31:40AM +0100, Jes Sorensen wrote:
On 01/28/10 05:39, Kevin O'Connor wrote:
As a side note, it should probably do the e820 map check even for qemu
users (ie, not just kvm).
Hi Kevin,
Here is an updated version of the patch which does the e820 read
Anthony Liguori wrote:
On 02/01/2010 01:02 PM, john cooper wrote:
[target-x86_64.conf was unintentionally omitted from the earlier patch]
This is a reimplementation of prior versions which adds
the ability to define cpu models for contemporary processors.
The added models are likewise
[Revision to fix build breakage for a few targets. This
does not yet reflect Andre's suggestion to coalesce all
config file flags into one space, the implementation of
which depends somewhat upon acceptance of the proposed
config file syntax modification and is left as a TBD for
now.]
This is a
I don't subscribe to the list, so please excuse any breach of etiquette.
According to AMD document 21485D, p. 141, APROMWE is bit 8 of BCR2.
Signed-off-by: Christopher Kilgour techie at whiterocker.com
---
diff --git a/hw/pcnet.c b/hw/pcnet.c
index 44b5b31..f889898 100644
--- a/hw/pcnet.c
+++
On Sat, Feb 13, 2010 at 10:31:12AM +0100, Jan Kiszka wrote:
From: Jan Kiszka jan.kis...@siemens.com
We intercept #BP while in guest debugging mode. As VM exits due to
intercepted exceptions do not necessarily come with valid
idt_vectoring, we have to update event_exit_inst_len explicitly in