> -----Original Message-----
> From: Avi Kivity [mailto:a...@redhat.com]
> Sent: Friday, September 14, 2012 12:40 AM
> To: Marcelo Tosatti
> Cc: Hao, Xudong; kvm@vger.kernel.org; Zhang, Xiantao
> Subject: Re: [PATCH v3] kvm/fpu: Enable fully eager restore kvm FPU
>
> On 09/13/2012 07:29 PM, Marcelo Tosatti wrote:
Richard Davies wrote:
> Thank you for your latest patches. I attach my latest perf report for a slow
> boot with all of these applied.
For the avoidance of doubt, here is the combined diff against 3.6.0-rc5
which I tested:
diff --git a/fs/btrfs/qgroup.c b/fs/btrfs/qgroup.c
index 38b42e7..090405d
Use macros for bitness-insensitive register names, instead of
rolling our own.
Signed-off-by: Avi Kivity
---
arch/x86/kvm/svm.c | 46 ++++++++++++++++++++--------------------------
1 file changed, 20 insertions(+), 26 deletions(-)
diff --git a/arch/x86/kvm/svm.c b/arch/x86/kvm/svm.c
index 611c7
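For readers unfamiliar with the asm.h approach, here is a standalone,
x86-only sketch of the idea (my own minimal macros, not the kernel's
_ASM_* definitions): one macro picks the 32- or 64-bit register name,
so the asm does not have to be written twice.

#include <stdio.h>

#ifdef __x86_64__
# define ASM_REG(reg)  "%%r" #reg     /* rax, rbx, ... */
# define ASM_SIZE(op)  #op "q"        /* incq, movq, ... */
#else
# define ASM_REG(reg)  "%%e" #reg     /* eax, ebx, ... */
# define ASM_SIZE(op)  #op "l"        /* incl, movl, ... */
#endif

int main(void)
{
    unsigned long v;

    /* emits "incq %rax" on 64-bit and "incl %eax" on 32-bit */
    __asm__("mov $41, " ASM_REG(ax) "\n\t"
            ASM_SIZE(inc) " " ASM_REG(ax) "\n\t"
            "mov " ASM_REG(ax) ", %0"
            : "=r"(v) : : "ax", "cc");

    printf("%lu\n", v);   /* prints 42 */
    return 0;
}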
LTO (link-time optimization) doesn't like local labels being referred to
from a different function, since the two functions may be built in separate
compilation units. Use an external variable instead.
Signed-off-by: Avi Kivity
---
arch/x86/kvm/vmx.c | 17 +++++++++++------
1 file changed, 11 insertions(+), 6 deletions(-)
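A hedged sketch of the pattern the fix uses (simplified, not the actual
vmx.c diff): before, one function's asm referenced a local label defined
in another function's asm block, which LTO may place in a different
compilation unit; exporting the landing pad as a proper global symbol
keeps the reference resolvable at link time.

extern const char vmx_return[];   /* defined by the asm below */

__asm__(
"    .globl vmx_return\n"
"vmx_return:\n"
"    ret\n"                       /* stand-in for the real exit path */
);

/* any translation unit can now take the symbol's address */
unsigned long host_rip_value(void)
{
    return (unsigned long)vmx_return;
}

int main(void)
{
    return host_rip_value() ? 0 : 1;
}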
Use macros for bitness-insensitive register names, instead of
rolling our own.
Signed-off-by: Avi Kivity
---
arch/x86/kvm/vmx.c | 69 ++++++++++++++++++++++++++++++---------------------------------------
1 file changed, 30 insertions(+), 39 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
ind
vmx.c has an lto-unfriendly bit, fix it up.
While there, clean up our asm code.
v2: add missing .global in case vmx_return and vmx_set_constant_host_state()
become separated by lto
Avi Kivity (3):
KVM: VMX: Make lto-friendly
KVM: VMX: Make use of asm.h
KVM: SVM: Make use of asm.h
ar
walk_addr_generic()'s permission checks are a maze of branchy code that is
performed four times per lookup. The outcome depends on the type of access,
efer.nxe, cr0.wp, cr4.smep, and, in the near future, cr4.smap.
Optimize this away by precalculating all variants and storing them in a
bitmap. The bitmap i
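To make the idea concrete, a standalone sketch (all names and the bitmap
layout here are hypothetical, not the patch's): the allow/deny answer for
every (access type, pte bits) combination is computed once when the control
registers change, so the walker's check collapses to a table lookup.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PTE_WRITE 1u   /* gpte allows write */
#define PTE_USER  2u   /* gpte allows user  */
#define PTE_NX    4u   /* gpte forbids exec */

#define ACC_WRITE 1u   /* access is a write  */
#define ACC_USER  2u   /* access from user   */
#define ACC_FETCH 4u   /* access is a fetch  */

static uint8_t permit[8];   /* [access type] -> bitmap over pte combos */

static void update_permission_bitmap(bool wp, bool nxe)
{
    for (unsigned acc = 0; acc < 8; acc++) {
        permit[acc] = 0;
        for (unsigned pte = 0; pte < 8; pte++) {
            bool ok = true;

            if ((acc & ACC_USER) && !(pte & PTE_USER))
                ok = false;                 /* user access to kernel page */
            if ((acc & ACC_WRITE) && !(pte & PTE_WRITE))
                /* cr0.wp=0 lets the kernel write read-only pages */
                ok = ok && !wp && !(acc & ACC_USER);
            if ((acc & ACC_FETCH) && nxe && (pte & PTE_NX))
                ok = false;                 /* efer.nxe enforces NX */
            if (ok)
                permit[acc] |= 1u << pte;
        }
    }
}

static bool walk_check(unsigned acc, unsigned pte_bits)
{
    return permit[acc] & (1u << pte_bits);  /* one load, no branches */
}

int main(void)
{
    update_permission_bitmap(true /* cr0.wp */, true /* efer.nxe */);
    printf("user write to read-only pte: %d\n",
           walk_check(ACC_WRITE | ACC_USER, PTE_USER));           /* 0 */
    printf("user read of user pte:       %d\n",
           walk_check(ACC_USER, PTE_USER | PTE_WRITE));           /* 1 */
    printf("fetch from NX pte:           %d\n",
           walk_check(ACC_FETCH | ACC_USER, PTE_USER | PTE_NX));  /* 0 */
    return 0;
}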
While unspecified, the behaviour of Intel processors is to first
perform the page table walk, then, if the walk was successful, to
atomically update the accessed and dirty bits of walked paging elements.
While we are not required to follow this exactly, doing so will allow us
to perform the access
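A minimal user-space sketch of the resulting update step, assuming a
cmpxchg-style retry loop like the one the real walker uses against guest
memory (simplified; not the kvm code):

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define PT_ACCESSED (1ull << 5)
#define PT_DIRTY    (1ull << 6)

/* Called only after the walk succeeded; the compare-and-swap retries if
 * the pte changed under us, so a racing writer invalidates rather than
 * corrupts the update. */
static void set_accessed_dirty(_Atomic uint64_t *ptep, bool write)
{
    uint64_t old = atomic_load(ptep);
    uint64_t new;

    do {
        new = old | PT_ACCESSED | (write ? PT_DIRTY : 0);
        if (new == old)
            return;                 /* bits already set, nothing to do */
    } while (!atomic_compare_exchange_weak(ptep, &old, new));
}

int main(void)
{
    _Atomic uint64_t pte = 0x1000;
    set_accessed_dirty(&pte, true);
    return (atomic_load(&pte) & PT_DIRTY) ? 0 : 1;
}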
'eperm' is no longer used in the walker loop, so we can eliminate it.
Signed-off-by: Avi Kivity
---
arch/x86/kvm/paging_tmpl.h | 5 +----
1 file changed, 1 insertion(+), 4 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 134ea7b..95a64d1 100644
--- a/arch/
Keep track of accessed/dirty bits; if they are all set, do not
enter the accessed/dirty update loop.
Signed-off-by: Avi Kivity
---
arch/x86/kvm/paging_tmpl.h | 24 ++++++++++++++++++------
1 file changed, 18 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/pa
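A sketch of the bookkeeping, under the assumption that the walker simply
ANDs together the A/D bits of every pte it visits (names are mine, not
the patch's):

#include <stdint.h>
#include <stdio.h>

#define PT_ACCESSED (1ull << 5)
#define PT_DIRTY    (1ull << 6)

/* AND the A/D bits seen at every level; only fall into the (expensive,
 * cmpxchg-based) update pass when some level lacked them. */
static int walk_needs_ad_update(const uint64_t *ptes, int levels)
{
    uint64_t accessed_dirty = PT_ACCESSED | PT_DIRTY;

    for (int i = 0; i < levels; i++)
        accessed_dirty &= ptes[i];

    return accessed_dirty != (PT_ACCESSED | PT_DIRTY);
}

int main(void)
{
    uint64_t hot[]  = { 0x1000 | PT_ACCESSED | PT_DIRTY,
                        0x2000 | PT_ACCESSED | PT_DIRTY };
    uint64_t cold[] = { 0x1000 | PT_ACCESSED, 0x2000 };

    printf("%d %d\n", walk_needs_ad_update(hot, 2),
                      walk_needs_ad_update(cold, 2));   /* prints: 0 1 */
    return 0;
}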
Instead of branchy code depending on level, gpte.ps, and mmu configuration,
prepare everything in a bitmap during mode changes and look it up during
runtime.
Reviewed-by: Xiao Guangrong
Signed-off-by: Avi Kivity
---
arch/x86/include/asm/kvm_host.h | 7 +++
arch/x86/kvm/mmu.c |
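A standalone sketch of such a bitmap (simplified; the index layout and
names are hypothetical, not the patch's): the "is this the last pte?"
answer depends only on the level and gpte.ps, so it fits in one byte
computed at mode-change time.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PT_PAGE_SIZE (1u << 7)   /* PS bit in a paging entry */

static uint8_t last_pte_bitmap;

/* called on mode changes, e.g. when paging mode or PSE support flips */
static void update_last_pte_bitmap(int root_level, bool large_pages_ok)
{
    last_pte_bitmap = 0;
    for (int level = 1; level <= root_level; level++)
        for (int ps = 0; ps <= 1; ps++) {
            bool last = (level == 1) ||
                        (ps && large_pages_ok && level <= 2);
            if (last)
                last_pte_bitmap |= 1u << (((level - 1) << 1) | ps);
        }
}

/* called in the hot walk loop: one shift, one AND, no branches */
static bool is_last_gpte(int level, uint64_t gpte)
{
    unsigned index = ((level - 1) << 1) | !!(gpte & PT_PAGE_SIZE);
    return last_pte_bitmap & (1u << index);
}

int main(void)
{
    update_last_pte_bitmap(4 /* root level */, true /* 2M pages ok */);
    printf("%d %d %d\n",
           is_last_gpte(1, 0),            /* 1: level 1 is always last */
           is_last_gpte(2, PT_PAGE_SIZE), /* 1: large page at level 2  */
           is_last_gpte(3, 0));           /* 0: interior entry         */
    return 0;
}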
The page table walk is coded as an infinite loop, with a special
case on the last pte.
Code it as an ordinary loop with a termination condition on the last
pte (large page or walk length exhausted), and put the last pte handling
code after the loop where it belongs.
Signed-off-by: Avi Kivity
---
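Schematically, as a runnable toy (not the kvm walker): the loop now has
an explicit termination condition, and the leaf handling follows the loop.

#include <stdint.h>
#include <stdio.h>

#define PTE_LEAF 1u   /* stand-in for "large page or level 1" */

int main(void)
{
    /* fake guest page table: one pte per level, leaf at level 1 */
    uint32_t ptes[] = { 0 /* unused */, 0x3000 | PTE_LEAF, 0x2000, 0x1000 };
    int level = 3;
    uint32_t pte;

    /* ordinary loop: stop on the last pte or when the walk is exhausted */
    do {
        pte = ptes[level];
        level--;
    } while (!(pte & PTE_LEAF) && level >= 1);

    /* last-pte handling lives after the loop, where it belongs */
    printf("leaf pte = %#x at level %d\n", pte, level + 1);
    return 0;
}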
We no longer rely on paging_tmpl.h defines, so we can move the function
to mmu.c.
Rely on zero extension to 64 bits to get the correct nx behaviour.
Reviewed-by: Xiao Guangrong
Signed-off-by: Avi Kivity
---
arch/x86/kvm/mmu.c | 10 ++
arch/x86/kvm/paging_tmpl.h | 21 +--
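The zero-extension point, illustrated (a sketch, not the kernel code):
a 32-bit gpte has no NX bit, and converting it to a 64-bit value
zero-extends, so bit 63 reads as zero and the shared 64-bit check just
works.

#include <stdint.h>
#include <stdio.h>

static int gpte_nx(uint64_t gpte)
{
    return (int)(gpte >> 63);   /* NX lives in bit 63 */
}

int main(void)
{
    uint32_t legacy_gpte = 0xfffff067u;   /* every 32-bit field set */
    /* zero extension guarantees bit 63 is clear: prints "nx = 0" */
    printf("nx = %d\n", gpte_nx(legacy_gpte));
    return 0;
}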
If nx is disabled, then if gpte[63] is set we will hit a reserved
bit set fault before checking permissions; so we can ignore the
setting of efer.nxe.
Reviewed-by: Xiao Guangrong
Signed-off-by: Avi Kivity
---
arch/x86/kvm/paging_tmpl.h | 4 +---
1 file changed, 1 insertion(+), 3 deletions(-)
d
gpte_access() computes the access permissions of a guest pte and also
write-protects clean gptes. This is wrong when we are servicing a
write fault (since we'll be setting the dirty bit momentarily) but
correct when instantiating a speculative spte, or when servicing a
read fault (since we'll want
The page table walk has gotten crufty over the years and is threatening
to become even more crufty when SMAP is introduced. Clean it up (and
optimize it) somewhat.
v2:
fix SMEP false positive by moving checks to the end of the walk
fix last_pte_bitmap documentation
fix incorrect SMEP fault
On 09/13/2012 04:39 PM, Xiao Guangrong wrote:
>> diff --git a/arch/x86/include/asm/kvm_host.h
>> b/arch/x86/include/asm/kvm_host.h
>> index 3318bde..f9a48cf 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -298,6 +298,13 @@ struct kvm_mmu {
>> u6
Hi,
I found the kvmclock documentation to be rather unhelpful. This patch
should fix it.
Cheers,
Stefan
Author: Stefan Fritsch
Date: Sun Sep 16 12:30:46 2012 +0200
kvm: Fix kvmclock documentation to match reality
- mention that system time needs to be added to wallclock time
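In other words (illustrative arithmetic only; the values below are made
up, not from the documentation): the wallclock MSR gives wall time at one
instant near boot, and current wall time is that value plus the system
time read via the system-time MSR.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* values a guest might read via the kvmclock MSRs (invented here) */
    uint64_t wall_clock_ns  = 1347791446ull * 1000000000ull; /* at boot */
    uint64_t system_time_ns = 42ull * 1000000000ull;  /* since boot */

    /* the documentation fix: wall time = wall clock + system time */
    printf("now = %llu ns\n",
           (unsigned long long)(wall_clock_ns + system_time_ns));
    return 0;
}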
On 09/16/2012 01:56 PM, Michael S. Tsirkin wrote:
> On Thu, Sep 13, 2012 at 12:36:30PM +0200, Jan Kiszka wrote:
>> On 2012-09-13 12:33, Gleb Natapov wrote:
>> >>
>> >> So, this can be the foundation for direct MSI delivery as well, right?
>> >>
>> >> > What do you mean by "direct MSI delivery"? kvm_irq_delivery_to_apic() is
On Thu, Sep 13, 2012 at 12:36:30PM +0200, Jan Kiszka wrote:
> On 2012-09-13 12:33, Gleb Natapov wrote:
> >>
> >> So, this can be the foundation for direct MSI delivery as well, right?
> >>
> > What do you mean by "direct MSI delivery"? kvm_irq_delivery_to_apic() is
> > called by MSI. If you mean de
Hi,
I remember that this was broken some time ago, and with qemu-kvm 1.2.0 I am
still not able to use block migration plus XBZRLE. The migration fails if
both are used together; XBZRLE without block migration works.
Can someone please advise on what the current expected behaviour is?
Thanks
Hi,
I have seen some recent threads about running Xen as a guest. For me it is
still not working, but I have read that Avi is working on some fixes. I have
seen in the logs that the following MSRs are missing. Maybe this is related:
cpu0 unhandled rdmsr: 0xce
cpu0 disabled perfctr wrmsr: 0xc1
Hi,
when trying to block migrate a VM from one node to another, the source
VM crashed with the following assertion:
block.c:3829: bdrv_set_in_use: Assertion `bs->in_use != in_use' failed.
Is this something already addressed/known?
Thanks,
Peter
On 09/14/2012 08:59 AM, Xiao Guangrong wrote:
> On 09/10/2012 05:26 PM, Xiao Guangrong wrote:
>> On 09/10/2012 05:09 PM, Avi Kivity wrote:
>>> On 09/07/2012 09:16 AM, Xiao Guangrong wrote:
mmu_notifier is the interface for broadcasting mm events to KVM; the
tracepoints introduced in this
On 09/14/2012 05:19 PM, Li, Jiongxi wrote:
> Sorry for the late response
>
>> -----Original Message-----
>> From: Avi Kivity [mailto:a...@redhat.com]
>> Sent: Friday, September 07, 2012 12:38 AM
>> To: Li, Jiongxi
>> Cc: kvm@vger.kernel.org
>> Subject: Re: [PATCH 4/5]KVM:x86, apicv: add interface
On 09/14/2012 05:17 PM, Li, Jiongxi wrote:
>> >
>> > -static void apic_send_ipi(struct kvm_lapic *apic)
>> > +/*
>> > + * this interface assumes a trap-like exit, which has already finished
>> > + * desired side effect including vISR and vPPR update.
>> > + */
>> > +void kvm_apic_set_eoi(stru
On 09/14/2012 05:14 PM, Li, Jiongxi wrote:
> Sorry for the late response
>
>> -----Original Message-----
>> From: Avi Kivity [mailto:a...@redhat.com]
>> Sent: Friday, September 07, 2012 12:02 AM
>> To: Li, Jiongxi
>> Cc: kvm@vger.kernel.org
>> Subject: Re: [PATCH 1/5]KVM: x86, apicv: add APICv reg
On 09/14/2012 05:14 PM, Li, Jiongxi wrote:
>
>>
>> I don't see patches for enabling posted interrupts? This can improve both
>> assigned and virtual interrupt delivery.
> We will have a separate patch for posted interrupts after cleaning up this
> patch. Meanwhile it is not ready.
Please post
On 09/14/2012 12:30 AM, Andrew Theurer wrote:
> The concern I have is that even though we have gone through changes to
> help reduce the candidate vcpus we yield to, we still have a very poor
> idea of which vcpu really needs to run. The result is high cpu usage in
> the get_pid_task and still so
vcpu mutex can be held for unlimited time, so taking it with mutex_lock
in an ioctl is wrong: one process could be passed a vcpu fd and call
this ioctl on a vcpu used by another process; it would then be
unkillable until the owner exits.
Call mutex_lock_killable instead and return the status.
Note: mu
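The shape of the fix, as a kernel-style sketch (vcpu_ioctl_example is a
hypothetical stand-in for the real ioctl handlers, not the patch itself):
mutex_lock_killable() returns nonzero if the sleeping task receives a
fatal signal, so the ioctl can bail out instead of blocking forever.

#include <linux/mutex.h>
#include <linux/kvm_host.h>

static long vcpu_ioctl_example(struct kvm_vcpu *vcpu)
{
	long r;

	if (mutex_lock_killable(&vcpu->mutex))
		return -EINTR;	/* caller was killed while waiting */

	r = 0;			/* ... the actual ioctl work ... */

	mutex_unlock(&vcpu->mutex);
	return r;
}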