On 09/20/2012 04:43 AM, Hao, Xudong wrote:
-Original Message-
From: Avi Kivity [mailto:a...@redhat.com]
Sent: Wednesday, September 19, 2012 6:24 PM
To: Hao, Xudong
Cc: Marcelo Tosatti; kvm@vger.kernel.org; Zhang, Xiantao
Subject: Re: [PATCH v3] kvm/fpu: Enable fully eager restore kvm
On 09/14/2012 12:58 PM, Xiao Guangrong wrote:
Let it return emulate state instead of spte like __direct_map
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/paging_tmpl.h | 28 ++--
1 files changed, 10 insertions(+), 18 deletions(-)
On 09/14/2012 12:59 PM, Xiao Guangrong wrote:
Wrap the common operations into these two functions
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 53 +++
arch/x86/kvm/paging_tmpl.h | 16
On 09/19/2012 08:44 PM, Auld, Will wrote:
From 9982bb73460b05c1328068aae047b14b2294e2da Mon Sep 17 00:00:00 2001
From: Will Auld will.a...@intel.com
Date: Wed, 12 Sep 2012 18:10:56 -0700
Subject: [PATCH] Enabling IA32_TSC_ADJUST for guest VM
CPUID.7.0.EBX[1]=1 indicates IA32_TSC_ADJUST MSR
On 09/19/2012 08:44 PM, Auld, Will wrote:
@@ -2241,6 +2244,13 @@ static int vmx_set_msr(struct kvm_vcpu *vcpu, u32
msr_index, u64 data)
}
ret = kvm_set_msr_common(vcpu, msr_index, data);
break;
+ case MSR_TSC_ADJUST:
+#define DUMMY 1
+
On 09/19/2012 10:38 PM, Jan Kiszka wrote:
If we reset a vcpu on INIT, make sure to not touch dr7 as stored in the
VMCS/VMCB and also switch_db_regs if guest debugging is using hardware
breakpoints. Otherwise, the vcpu will not trigger hardware breakpoints
until userspace issues another
On 09/13/2012 05:19 PM, Gleb Natapov wrote:
Most interrupts are delivered to only one vcpu. Use pre-built tables to
find the interrupt destination instead of looping through all vcpus. In
the case of logical mode, loop only through the vcpus in the logical
cluster the irq is sent to.
Applied, thanks.
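The pre-built destination table described above can be sketched in plain C. The names and sizes here are illustrative stand-ins, not the kernel's actual map structure:

```c
#include <assert.h>

/* Hypothetical sketch: instead of scanning every vcpu for a matching
 * APIC ID on each interrupt, keep a table mapping physical APIC ID to
 * vcpu index, rebuilt whenever an ID changes. */
struct apic_map {
	int phys_map[256];	/* APIC ID -> vcpu index, -1 if unused */
};

static void apic_map_rebuild(struct apic_map *map,
			     const unsigned char *apic_ids, int nr_vcpus)
{
	for (int i = 0; i < 256; i++)
		map->phys_map[i] = -1;
	for (int v = 0; v < nr_vcpus; v++)
		map->phys_map[apic_ids[v]] = v;
}

/* O(1) lookup instead of an O(n) scan over all vcpus. */
static int apic_map_dest(const struct apic_map *map, unsigned char dest_id)
{
	return map->phys_map[dest_id];
}
```

Logical mode would add a second table keyed by cluster, so delivery only touches the vcpus in the addressed cluster.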
On 09/18/2012 06:16 AM, Alex Williamson wrote:
To emulate level triggered interrupts, add a resample option to
KVM_IRQFD. When specified, a new resamplefd is provided that notifies
the user when the irqchip has been resampled by the VM. This may, for
instance, indicate an EOI. Also in this
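A userspace-side sketch of the eventfd mechanics behind resampling irqfds follows. The KVM_IRQFD ioctl wiring is omitted, and `simulate_resample` is a made-up helper that stands in for the kernel's EOI-time notification:

```c
#include <assert.h>
#include <stdint.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* One eventfd injects the level-triggered irq; a second "resamplefd"
 * is signalled when the guest EOIs, telling userspace to re-check the
 * device and possibly re-assert. */
static int simulate_resample(void)
{
	int irqfd = eventfd(0, 0);
	int resamplefd = eventfd(0, 0);
	uint64_t one = 1, out = 0;

	if (irqfd < 0 || resamplefd < 0)
		return -1;

	/* userspace asserts the level-triggered irq */
	(void)!write(irqfd, &one, sizeof(one));
	/* on guest EOI the kernel would signal resamplefd; simulated here */
	(void)!write(resamplefd, &one, sizeof(one));
	/* userspace wakes, re-checks the device, may re-assert irqfd */
	(void)!read(resamplefd, &out, sizeof(out));

	close(irqfd);
	close(resamplefd);
	return (int)out;		/* nonzero if a resample was seen */
}
```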
On 09/19/2012 12:08 PM, Michael S. Tsirkin wrote:
Whoa. Can't we put the resampler entry somewhere we don't need to
search for it? Like a kvm_kernel_irq_routing_entry, that's indexed by
gsi already (kvm_irq_routing_table::rt_entries[gsi]).
I'm not sure why we would bother optimizing this,
On 09/18/2012 05:38 PM, Li, Jiongxi wrote:
+static int handle_apic_write(struct kvm_vcpu *vcpu)
+{
+	unsigned long exit_qualification = vmcs_readl(EXIT_QUALIFICATION);
+	u32 offset = exit_qualification & 0xfff;
+
+	/* APIC-write VM exit is trap-like and thus no need to adjust IP */
On 09/18/2012 12:51 PM, Gerd Hoffmann wrote:
This patch adds a mmio bar to the qemu standard vga which allows to
access the standard vga registers and bochs dispi interface registers
via mmio.
diff --git a/hw/vga-pci.c b/hw/vga-pci.c
index 9abbada..e05e2ef 100644
--- a/hw/vga-pci.c
+++
On 09/18/2012 04:08 AM, Hao, Xudong wrote:
The objective of the change is to disable lazy fpu loading (that is,
host fpu loaded in guest and vice-versa),
Not vice versa. We allow the guest fpu loaded in the host, but save it
on heavyweight exit or task switch.
when some bit except the
On 09/19/2012 02:35 PM, Gerd Hoffmann wrote:
Looks like word writes are supported provided the memory API breaks up
writes in little endian order. Better to make it explicit.
Like the attached incremental patch?
Very like.
--
error compiling committee.c: too many arguments to function
On 09/15/2012 06:34 PM, Christoffer Dall wrote:
The following series implements KVM support for ARM processors,
specifically on the Cortex A-15 platform. We feel this is ready to be
merged.
Work is done in collaboration between Columbia University, Virtual Open
Systems and ARM/Linaro.
On 09/18/2012 06:03 AM, Andrew Theurer wrote:
On Sun, 2012-09-16 at 11:55 +0300, Avi Kivity wrote:
On 09/14/2012 12:30 AM, Andrew Theurer wrote:
The concern I have is that even though we have gone through changes to
help reduce the candidate vcpus we yield to, we still have a very poor
On 09/19/2012 04:54 PM, Alex Williamson wrote:
On Wed, 2012-09-19 at 12:10 +0300, Avi Kivity wrote:
On 09/19/2012 12:08 PM, Michael S. Tsirkin wrote:
Whoa. Can't we put the resampler entry somewhere we don't need to
search for it? Like a kvm_kernel_irq_routing_entry, that's indexed
On 09/17/2012 10:36 PM, Dean Pucsek wrote:
Hello,
For my Master's thesis I am investigating the usage of Intel VT-x and branch
tracing in the domain of malware analysis. Essentially what I'm aiming to do
is trace the execution of a guest VM and then pass that trace on to some
other
On 09/17/2012 02:28 PM, Li, Jiongxi wrote:
+++ b/arch/x86/kvm/lapic.c
@@ -499,8 +499,13 @@ static int __apic_accept_irq(struct kvm_lapic *apic,
int delivery_mode,
if (trig_mode) {
apic_debug("level trig mode for vector %d", vector);
On 09/14/2012 05:15 PM, Li, Jiongxi wrote:
@@ -5293,16 +5300,27 @@ static int vcpu_enter_guest(struct kvm_vcpu
*vcpu)
}
if (kvm_check_request(KVM_REQ_EVENT, vcpu) || req_int_win) {
+	/* update architecture specific hints for APIC virtual interrupt delivery */
+
On 09/19/2012 05:47 PM, Alexander Graf wrote:
On 04.09.2012, at 17:13, Cornelia Huck wrote:
Handle most support for channel I/O instructions in the kernel itself.
Only asynchronous functions (such as the start function) need to be
handled by userspace.
Phew. This is a lot of code for
On 09/19/2012 05:45 PM, Jan Kiszka wrote:
On 2012-09-19 16:38, Avi Kivity wrote:
On 09/17/2012 10:36 PM, Dean Pucsek wrote:
Hello,
For my Master's thesis I am investigating the usage of Intel VT-x and branch
tracing in the domain of malware analysis. Essentially what I'm aiming to
do
On 09/17/2012 02:28 PM, Li, Jiongxi wrote:
+	} else if (kvm_apic_vid_enabled(vcpu)) {
+		if (kvm_cpu_has_interrupt_apic_vid(vcpu) &&
+		    kvm_x86_ops->interrupt_allowed(vcpu)) {
+			kvm_queue_interrupt(vcpu,
+
On 09/18/2012 09:45 AM, Xiao Guangrong wrote:
On 09/16/2012 08:07 PM, Avi Kivity wrote:
@@ -3672,20 +3672,17 @@ static int vcpu_mmio_gva_to_gpa(struct kvm_vcpu
*vcpu, unsigned long gva,
gpa_t *gpa, struct x86_exception *exception
On 09/18/2012 10:21 AM, Xiao Guangrong wrote:
On 09/16/2012 08:07 PM, Avi Kivity wrote:
+/*
+ * On a write fault, fold the dirty bit into accessed_dirty by shifting
it one
+ * place right.
+ *
+ * On a read fault, do nothing.
+ */
+accessed_dirty &= pte
On 09/18/2012 09:53 AM, Xiao Guangrong wrote:
On 09/16/2012 08:07 PM, Avi Kivity wrote:
-pt_access = ACC_ALL;
+pt_access = pte_access = ACC_ALL;
+	++walker->level;
-for (;;) {
+do {
gfn_t real_gfn;
unsigned long host_addr
If nx is disabled, then if gpte[63] is set we will hit a reserved
bit set fault before checking permissions; so we can ignore the
setting of efer.nxe.
Reviewed-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/paging_tmpl.h | 4 +---
1
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/paging_tmpl.h | 76 --
1 file changed, 47 insertions(+), 29 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 1cbf576..35a05dd 100644
--- a/arch/x86/kvm
but PTE.U=0.
Noted by Xiao Guangrong.
The result is short, branch-free code.
Reviewed-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/include/asm/kvm_host.h | 7 +++
arch/x86/kvm/mmu.c | 38
xiaoguangr...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/paging_tmpl.h | 60 +++---
1 file changed, 25 insertions(+), 35 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 8f6c59f
'eperm' is no longer used in the walker loop, so we can eliminate it.
Reviewed-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/paging_tmpl.h | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/x86/kvm
Instead of branchy code depending on level, gpte.ps, and mmu configuration,
prepare everything in a bitmap during mode changes and look it up during
runtime.
Reviewed-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/include/asm/kvm_host.h
Keep track of accessed/dirty bits; if they are all set, do not
enter the accessed/dirty update loop.
Reviewed-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/paging_tmpl.h | 26 --
1 file changed, 20
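The skip-if-all-set idea can be sketched outside the kernel. The masks and the fold-by-shift follow the patch description; `walk_needs_ad_update` is an illustrative stand-in for the walker logic, not the kernel's function:

```c
#include <assert.h>
#include <stdint.h>

#define PT_ACCESSED_SHIFT 5
#define PT_DIRTY_SHIFT    6
#define PT_ACCESSED_MASK  (1u << PT_ACCESSED_SHIFT)

/* AND together the accessed bit of every gpte seen during the walk; on
 * a write fault, also fold the leaf's dirty bit into the accessed
 * position by shifting it right. If the result is still nonzero, every
 * needed bit was already set and the expensive update loop can be
 * skipped. */
static int walk_needs_ad_update(const uint64_t *gptes, int levels, int write)
{
	uint32_t accessed_dirty = PT_ACCESSED_MASK;

	for (int l = 0; l < levels; l++)
		accessed_dirty &= (uint32_t)gptes[l];

	if (write)
		accessed_dirty &= (uint32_t)(gptes[levels - 1] >>
					     (PT_DIRTY_SHIFT - PT_ACCESSED_SHIFT));

	return accessed_dirty == 0;	/* some bit missing -> must update */
}
```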
...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/mmu.c | 12
arch/x86/kvm/mmu.h | 3 ++-
arch/x86/kvm/paging_tmpl.h | 24
3 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
'ac' essentially reconstructs the 'access' variable we already
have, except for the PFERR_PRESENT_MASK and PFERR_RSVD_MASK. As
these are not used by callees, just use 'access' directly.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/paging_tmpl.h | 5 +
1 file changed, 1
to the end of the walk
fix last_pte_bitmap documentation
fix incorrect SMEP fault permission checks
introduce helper for accessing the permission bitmap
Avi Kivity (10):
KVM: MMU: Push clean gpte write protection out of gpte_access()
KVM: MMU: Optimize gpte_access() slightly
KVM: MMU: Move
We no longer rely on paging_tmpl.h defines; so we can move the function
to mmu.c.
Rely on zero extension to 64 bits to get the correct nx behaviour.
Reviewed-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/mmu.c | 10
On 09/18/2012 10:36 AM, Xiao Guangrong wrote:
On 09/16/2012 08:07 PM, Avi Kivity wrote:
Instead of branchy code depending on level, gpte.ps, and mmu configuration,
prepare everything in a bitmap during mode changes and look it up during
runtime.
Avi,
Can we introduce ignore_bits_mask
On 09/19/2012 08:17 PM, Avi Kivity wrote:
On 09/18/2012 10:36 AM, Xiao Guangrong wrote:
On 09/16/2012 08:07 PM, Avi Kivity wrote:
Instead of branchy code depending on level, gpte.ps, and mmu configuration,
prepare everything in a bitmap during mode changes and look it up during
runtime
On 09/14/2012 12:30 AM, Andrew Theurer wrote:
The concern I have is that even though we have gone through changes to
help reduce the candidate vcpus we yield to, we still have a very poor
idea of which vcpu really needs to run. The result is high cpu usage in
the get_pid_task and still some
On 09/14/2012 05:14 PM, Li, Jiongxi wrote:
I don't see patches for enabling posted interrupts? This can improve both
assigned and virtual interrupt delivery.
We will have a separate patch for posted interrupts after cleaning up this
patch. Meanwhile it is not ready.
Please post it
On 09/14/2012 05:14 PM, Li, Jiongxi wrote:
Sorry for the late response
-Original Message-
From: Avi Kivity [mailto:a...@redhat.com]
Sent: Friday, September 07, 2012 12:02 AM
To: Li, Jiongxi
Cc: kvm@vger.kernel.org
Subject: Re: [PATCH 1/5]KVM: x86, apicv: add APICv register
On 09/14/2012 05:17 PM, Li, Jiongxi wrote:
-static void apic_send_ipi(struct kvm_lapic *apic)
+/*
+ * this interface assumes a trap-like exit, which has already finished
+ * desired side effects including vISR and vPPR update.
+ */
+void kvm_apic_set_eoi(struct kvm_vcpu *vcpu, int
On 09/14/2012 05:19 PM, Li, Jiongxi wrote:
Sorry for the late response
-Original Message-
From: Avi Kivity [mailto:a...@redhat.com]
Sent: Friday, September 07, 2012 12:38 AM
To: Li, Jiongxi
Cc: kvm@vger.kernel.org
Subject: Re: [PATCH 4/5]KVM:x86, apicv: add interface for poking
On 09/16/2012 01:56 PM, Michael S. Tsirkin wrote:
On Thu, Sep 13, 2012 at 12:36:30PM +0200, Jan Kiszka wrote:
On 2012-09-13 12:33, Gleb Natapov wrote:
So, this can be the foundation for direct MSI delivery as well, right?
What do you mean by direct MSI delivery?
On 09/13/2012 04:39 PM, Xiao Guangrong wrote:
diff --git a/arch/x86/include/asm/kvm_host.h
b/arch/x86/include/asm/kvm_host.h
index 3318bde..f9a48cf 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -298,6 +298,13 @@ struct kvm_mmu {
u64 *lm_root;
fault permission checks
introduce helper for accessing the permission bitmap
Avi Kivity (9):
KVM: MMU: Push clean gpte write protection out of gpte_access()
KVM: MMU: Optimize gpte_access() slightly
KVM: MMU: Move gpte_access() out of paging_tmpl.h
KVM: MMU: Update accessed and dirty bits
...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/mmu.c | 12
arch/x86/kvm/mmu.h | 3 ++-
arch/x86/kvm/paging_tmpl.h | 24
3 files changed, 26 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
If nx is disabled, then if gpte[63] is set we will hit a reserved
bit set fault before checking permissions; so we can ignore the
setting of efer.nxe.
Reviewed-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/paging_tmpl.h | 4 +---
1
We no longer rely on paging_tmpl.h defines; so we can move the function
to mmu.c.
Rely on zero extension to 64 bits to get the correct nx behaviour.
Reviewed-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/mmu.c | 10
Keep track of accessed/dirty bits; if they are all set, do not
enter the accessed/dirty update loop.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/paging_tmpl.h | 24 ++--
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b
'eperm' is no longer used in the walker loop, so we can eliminate it.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/paging_tmpl.h | 5 +
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 134ea7b..95a64d1
the access permissions check after the walk is complete, rather
than after each walk step.
(the tricky case is SMEP: a zero in any pte's U bit makes the referenced
page a supervisor page, so we can't fault on a one bit during the walk
itself).
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm
but PTE.U=0.
Noted by Xiao Guangrong.
The result is short, branch-free code.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/include/asm/kvm_host.h | 7 +++
arch/x86/kvm/mmu.c | 38 ++
arch/x86/kvm/mmu.h | 19
vmx.c has an lto-unfriendly bit, fix it up.
While there, clean up our asm code.
v2: add missing .global in case vmx_return and vmx_set_constant_host_state()
become
separated by lto
Avi Kivity (3):
KVM: VMX: Make lto-friendly
KVM: VMX: Make use of asm.h
KVM: SVM: Make use of asm.h
Use macros for bitness-insensitive register names, instead of
rolling our own.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/vmx.c | 69 --
1 file changed, 30 insertions(+), 39 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch
LTO (link-time optimization) doesn't like local labels to be referred to
from a different function, since the two functions may be built in separate
compilation units. Use an external variable instead.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/vmx.c | 17 +++--
1
Use macros for bitness-insensitive register names, instead of
rolling our own.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/svm.c | 46 --
1 file changed, 20 insertions(+), 26 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm
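The bitness-insensitive register macros work roughly like this. This is a simplified userspace rendition of the asm.h approach; the real header pastes these strings directly into inline asm templates:

```c
#include <assert.h>
#include <string.h>

/* Select the 32- or 64-bit spelling at preprocessing time, so the same
 * asm template serves both builds. */
#ifdef __x86_64__
# define __ASM_SEL(a, b) b
#else
# define __ASM_SEL(a, b) a
#endif

#define __ASM_FORM(x) #x
#define __ASM_REG(reg) __ASM_SEL(__ASM_FORM(e##reg), __ASM_FORM(r##reg))

#define _ASM_AX __ASM_REG(ax)	/* "eax" on 32-bit, "rax" on 64-bit */
#define _ASM_SP __ASM_REG(sp)	/* "esp" on 32-bit, "rsp" on 64-bit */
```

In the kernel these expand inside asm strings, e.g. `"mov %%" _ASM_SP ", ..."`, which is what lets vmx.c and svm.c drop their hand-rolled per-bitness register names.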
On 09/13/2012 01:20 AM, Marcelo Tosatti wrote:
On Wed, Sep 12, 2012 at 05:29:49PM +0300, Avi Kivity wrote:
(resend due to mail server malfunction)
The page table walk has gotten crufty over the years and is threatening to
become
even more crufty when SMAP is introduced. Clean it up
On 09/12/2012 10:17 PM, Andi Kleen wrote:
On Wed, Sep 12, 2012 at 05:50:41PM +0300, Avi Kivity wrote:
vmx.c has an lto-unfriendly bit, fix it up.
While there, clean up our asm code.
Avi Kivity (3):
KVM: VMX: Make lto-friendly
KVM: VMX: Make use of asm.h
KVM: SVM: Make use of asm.h
On 09/13/2012 12:00 PM, Gleb Natapov wrote:
Most interrupts are delivered to only one vcpu. Use pre-built tables to
find the interrupt destination instead of looping through all vcpus. In
the case of logical mode, loop only through the vcpus in the logical
cluster the irq is sent to.
Looks good.
On 09/12/2012 09:03 PM, Avi Kivity wrote:
On 09/12/2012 08:49 PM, Avi Kivity wrote:
Instead of branchy code depending on level, gpte.ps, and mmu configuration,
prepare everything in a bitmap during mode changes and look it up during
runtime.
6/5 is buggy, sorry, will update it tomorrow
On 09/12/2012 09:11 PM, Jiri Slaby wrote:
On 09/12/2012 10:18 AM, Avi Kivity wrote:
On 09/12/2012 11:13 AM, Jiri Slaby wrote:
Please provide the output of vmxcap
(http://goo.gl/c5lUO),
Unrestricted guest no
The big real mode fixes.
and a snapshot
On 09/13/2012 02:48 PM, Xiao Guangrong wrote:
On 09/12/2012 10:29 PM, Avi Kivity wrote:
static bool FNAME(is_last_gpte)(struct guest_walker *walker,
struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
pt_element_t gpte)
@@ -217,7 +206,7
On 09/11/2012 09:27 PM, Andrew Theurer wrote:
So, having both is probably not a good idea. However, I feel like
there's more work to be done. With no over-commit (10 VMs), total
throughput is 23427 +/- 2.76%. A 2x over-commit will no doubt have some
overhead, but a reduction to ~4500 is
On 09/13/2012 03:09 PM, Xiao Guangrong wrote:
The result is short, branch-free code.
Signed-off-by: Avi Kivity a...@redhat.com
+static void update_permission_bitmask(struct kvm_vcpu *vcpu, struct kvm_mmu
*mmu)
+{
+unsigned bit, byte, pfec;
+u8 map;
+bool fault, x, w, u
On 09/13/2012 03:41 PM, Xiao Guangrong wrote:
On 09/12/2012 10:29 PM, Avi Kivity wrote:
+		pte_access = pt_access & gpte_access(vcpu, pte);
+		eperm |= (mmu->permissions[access >> 1] >> pte_access) & 1;
last_gpte = FNAME(is_last_gpte)(walker, vcpu, mmu, pte
On 09/12/2012 11:10 AM, Xudong Hao wrote:
Enable KVM FPU fully eager restore if there is other FPU state which isn't
tracked by the CR0.TS bit.
v3 changes from v2:
- Make fpu active explicitly when guest xsave is enabled and a non-lazy
xstate bit
exists.
v2 changes from v1:
- Expand
On 09/13/2012 07:29 PM, Marcelo Tosatti wrote:
On Thu, Sep 13, 2012 at 01:26:36PM -0300, Marcelo Tosatti wrote:
On Wed, Sep 12, 2012 at 04:10:24PM +0800, Xudong Hao wrote:
Enable KVM FPU fully eager restore if there is other FPU state which isn't
tracked by the CR0.TS bit.
v3 changes from
On 09/12/2012 09:07 AM, Fengguang Wu wrote:
On Wed, Sep 12, 2012 at 01:58:22PM +0800, Amos Kong wrote:
On 11/09/12 22:31, Fengguang Wu wrote:
Hi Avi,
In the kvm/next branch, sparse warns about
arch/x86/kvm/emulate.c:232 writeback_registers() error: buffer overflow
'ctxt->_regs' 9 <= 15
On 09/12/2012 01:39 AM, Michael S. Tsirkin wrote:
On Tue, Sep 11, 2012 at 11:04:59PM +0300, Avi Kivity wrote:
On 09/11/2012 08:13 PM, Paul E. McKenney wrote:
Is there a risk of DOS if RCU is delayed while
lots of memory is queued up in this way?
If yes is this a generic problem
On 09/12/2012 04:03 AM, Paul E. McKenney wrote:
Paul, I'd like to check something with you here:
this function can be triggered by userspace,
any number of times; we allocate
a 2K chunk of memory that is later freed by
kfree_rcu.
Is there a risk of DOS if RCU is delayed while
On 09/10/2012 04:29 AM, Matthew Ogilvie wrote:
Intel's definition of edge triggered means: asserted with a
low-to-high transition at the time an interrupt is registered
and then kept high until the interrupt is served via one of the
EOI mechanisms or goes away unhandled.
So the only
On 09/11/2012 10:41 PM, Jiri Slaby wrote:
On 09/11/2012 09:03 PM, Marcelo Tosatti wrote:
On Tue, Sep 11, 2012 at 08:11:36PM +0200, Jiri Slaby wrote:
Hi,
it looks like an update from next-20120824 to next-20120910 makes kvm
defunct. When I try to run qemu, it loops forever without printing
On 09/12/2012 07:40 AM, Fengguang Wu wrote:
Hi,
3 of my test boxes running the v3.5 kernel became inaccessible, and I found
two of them kept emitting this dmesg:
vmx_handle_exit: unexpected, valid vectoring info (0x8b0e) and exit
reason is 0x31
The other one has frozen and the above
On 09/12/2012 11:13 AM, Jiri Slaby wrote:
Please provide the output of vmxcap
(http://goo.gl/c5lUO),
Unrestricted guest no
The big real mode fixes.
and a snapshot of kvm_stat while the guest is hung.
kvm statistics
exits
On 09/12/2012 11:48 AM, Jan Kiszka wrote:
On 2012-09-12 10:01, Avi Kivity wrote:
On 09/10/2012 04:29 AM, Matthew Ogilvie wrote:
Intel's definition of edge triggered means: asserted with a
low-to-high transition at the time an interrupt is registered
and then kept high until the interrupt
On 09/12/2012 11:57 AM, Jan Kiszka wrote:
On 2012-09-12 10:51, Avi Kivity wrote:
On 09/12/2012 11:48 AM, Jan Kiszka wrote:
On 2012-09-12 10:01, Avi Kivity wrote:
On 09/10/2012 04:29 AM, Matthew Ogilvie wrote:
Intel's definition of edge triggered means: asserted with a
low-to-high transition
want to trap a following write in order to
set the dirty bit).
It doesn't seem to hurt in practice, but in order to make the code
readable, push the write protection out of gpte_access() and into
a new protect_clean_gpte() which is called explicitly when needed.
Signed-off-by: Avi Kivity
The page table walk has gotten crufty over the years and is threatening to
become
even more crufty when SMAP is introduced. Clean it up (and optimize it)
somewhat.
Avi Kivity (5):
KVM: MMU: Push clean gpte write protection out of gpte_access()
KVM: MMU: Optimize gpte_access() slightly
We no longer rely on paging_tmpl.h defines; so we can move the function
to mmu.c.
Rely on zero extension to 64 bits to get the correct nx behaviour.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/mmu.c | 10 ++
arch/x86/kvm/paging_tmpl.h | 21 +
2
is recalculated when rarely-changing variables change
(cr0, cr4) and is indexed by the often-changing variables (page fault error
code, pte access permissions).
The result is short, branch-free code.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/include/asm/kvm_host.h | 7 +++
arch
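The recalculate-rarely, index-often scheme can be sketched as follows. The bit encodings are illustrative; the real update path also handles fetch/NX and SMEP, which this sketch omits:

```c
#include <assert.h>

#define PFERR_WRITE 0x2
#define PFERR_USER  0x4

/* illustrative pte access bits: user-accessible, writable */
enum { ACC_USER = 1, ACC_WRITE = 2 };

/* one byte per (pfec >> 1), one bit per pte access combination;
 * rebuilt only when cr0/cr4 change */
static unsigned char permissions[8];

static void update_permission_bitmap(void)
{
	for (int byte = 0; byte < 8; byte++) {
		unsigned pfec = (unsigned)byte << 1;
		unsigned char map = 0;

		for (int access = 0; access < 4; access++) {
			int u = access & ACC_USER;
			int w = access & ACC_WRITE;
			int fault = 0;

			if ((pfec & PFERR_WRITE) && !w)
				fault = 1;	/* write to read-only pte */
			if ((pfec & PFERR_USER) && !u)
				fault = 1;	/* user access, supervisor pte */
			map |= (unsigned char)(fault << access);
		}
		permissions[byte] = map;
	}
}

/* the hot-path check collapses to a single shift-and-mask: branch-free */
static int permission_fault(unsigned pfec, unsigned pte_access)
{
	return (permissions[pfec >> 1] >> pte_access) & 1;
}
```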
If nx is disabled, then if gpte[63] is set we will hit a reserved
bit set fault before checking permissions; so we can ignore the
setting of efer.nxe.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/paging_tmpl.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
diff --git
The page table walk is coded as an infinite loop, with a special
case on the last pte.
Code it as an ordinary loop with a termination condition on the last
pte (large page or walk length exhausted), and put the last pte handling
code after the loop where it belongs.
Signed-off-by: Avi Kivity
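The restructuring can be illustrated with a toy walker. `is_last_gpte` mirrors the shape of the kernel helper, but the table layout here is made up for demonstration:

```c
#include <assert.h>
#include <stdint.h>

#define PT_PAGE_SIZE_MASK (1u << 7)

/* terminal pte: either a 4K leaf (level 1) or a large page (PS bit) */
static int is_last_gpte(uint64_t gpte, int level)
{
	return level == 1 || (gpte & PT_PAGE_SIZE_MASK);
}

/* Walk a toy table (one entry per level, index level-1); instead of an
 * infinite loop with a special-cased exit, the termination condition
 * lives in the do/while, and leaf handling follows the loop.
 * Returns the level at which the walk terminated. */
static int walk(const uint64_t *table, int start_level)
{
	int level = start_level + 1;
	uint64_t gpte;

	do {
		--level;
		gpte = table[level - 1];	/* fetch this level's entry */
	} while (!is_last_gpte(gpte, level));

	/* last-pte handling goes here, after the loop, where it belongs */
	return level;
}
```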
Use macros for bitness-insensitive register names, instead of
rolling our own.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/vmx.c | 69 --
1 file changed, 30 insertions(+), 39 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch
vmx.c has an lto-unfriendly bit, fix it up.
While there, clean up our asm code.
Avi Kivity (3):
KVM: VMX: Make lto-friendly
KVM: VMX: Make use of asm.h
KVM: SVM: Make use of asm.h
arch/x86/kvm/svm.c | 46 +
arch/x86/kvm/vmx.c | 85
LTO (link-time optimization) doesn't like local labels to be referred to
from a different function, since the two functions may be built in separate
compilation units. Use an external variable instead.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/vmx.c | 16 ++--
1
Use macros for bitness-insensitive register names, instead of
rolling our own.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/kvm/svm.c | 46 --
1 file changed, 20 insertions(+), 26 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm
want to trap a following write in order to
set the dirty bit).
It doesn't seem to hurt in practice, but in order to make the code
readable, push the write protection out of gpte_access() and into
a new protect_clean_gpte() which is called explicitly when needed.
Signed-off-by: Avi Kivity
(resend due to mail server malfunction)
The page table walk has gotten crufty over the years and is threatening to
become
even more crufty when SMAP is introduced. Clean it up (and optimize it)
somewhat.
Avi Kivity (5):
KVM: MMU: Push clean gpte write protection out of gpte_access()
KVM
On 09/11/2012 05:39 PM, Marcelo Tosatti wrote:
On Tue, Sep 11, 2012 at 12:18:22PM +0300, Avi Kivity wrote:
The same can happen with slot deletion, for example.
Userspace (which performed the modification which can result in faults
to non-existant/read-only/.../new-tag memslot), must
On 09/12/2012 06:44 PM, Marcelo Tosatti wrote:
On Wed, Sep 12, 2012 at 06:34:33PM +0300, Avi Kivity wrote:
On 09/11/2012 05:39 PM, Marcelo Tosatti wrote:
On Tue, Sep 11, 2012 at 12:18:22PM +0300, Avi Kivity wrote:
The same can happen with slot deletion, for example.
Userspace (which
Instead of branchy code depending on level, gpte.ps, and mmu configuration,
prepare everything in a bitmap during mode changes and look it up during
runtime.
Signed-off-by: Avi Kivity a...@redhat.com
---
arch/x86/include/asm/kvm_host.h | 7 +++
arch/x86/kvm/mmu.c | 20
On 09/12/2012 08:49 PM, Avi Kivity wrote:
Instead of branchy code depending on level, gpte.ps, and mmu configuration,
prepare everything in a bitmap during mode changes and look it up during
runtime.
6/5 is buggy, sorry, will update it tomorrow.
On 09/11/2012 09:43 AM, Hao, Xudong wrote:
-Original Message-
From: Avi Kivity [mailto:a...@redhat.com]
Sent: Monday, September 10, 2012 4:07 PM
To: Hao, Xudong
Cc: kvm@vger.kernel.org; Zhang, Xiantao; joerg.roe...@amd.com
Subject: Re: [PATCH v2] kvm/fpu: Enable fully eager restore
On 09/11/2012 02:03 AM, Anthony Liguori wrote:
Avi Kivity a...@redhat.com writes:
Please pull from:
git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git uq/master
to merge some kvm updates, most notably a port of qemu-kvm's pre-vfio device
assignment. With this there are no significant
On 09/11/2012 01:31 AM, Marcelo Tosatti wrote:
On Fri, Sep 07, 2012 at 05:56:39PM +0800, Xiao Guangrong wrote:
On 09/06/2012 10:09 PM, Avi Kivity wrote:
On 08/22/2012 03:47 PM, Xiao Guangrong wrote:
On 08/22/2012 08:06 PM, Avi Kivity wrote:
On 08/21/2012 06:03 AM, Xiao Guangrong wrote
On 09/10/2012 08:05 PM, Gleb Natapov wrote:
On Mon, Sep 10, 2012 at 07:17:54PM +0300, Gleb Natapov wrote:
+return 0;
+}
+
+static inline int kvm_apic_set_id(struct kvm_lapic *apic, u8 id)
+{
+	apic_set_reg(apic, APIC_ID, id << 24);
+return
On 09/11/2012 12:35 PM, Gleb Natapov wrote:
On Tue, Sep 11, 2012 at 12:29:06PM +0300, Avi Kivity wrote:
On 09/10/2012 08:05 PM, Gleb Natapov wrote:
On Mon, Sep 10, 2012 at 07:17:54PM +0300, Gleb Natapov wrote:
+ return 0;
+}
+
+static inline int kvm_apic_set_id(struct
On 09/10/2012 09:33 PM, Daniel Tschritter wrote:
Hi everybody,
I got a server with CentOS 6.3 and KVM as a host and a windows 2k8
guest.
The windows machine's disk performance is very poor.
The windows guest uses VirtIO disk drivers, no cache and uses a LVM
partition on a Raid1.
On 09/11/2012 12:45 PM, Gleb Natapov wrote:
There is no userspace to return error to if error happens on guest MMIO
write. Unless you mean return it as a return value of ioctl(VM_RUN) in
which case it is equivalent of killing the guest.
That is what I meant.
And this is not fair