On Fri, 13 Apr 2012 18:05:29 +0800
Xiao Guangrong wrote:
> Thanks for Avi's and Marcelo's review; I have simplified the whole thing
> in this version:
> - it only fixes the page fault with PFEC.P = 1 && PFEC.W = 0, which means
> the unlocked set_spte path can be dropped.
>
> - it only fixes the page fault caused by dirty-log
On Fri, 13 Apr 2012 18:14:26 +0800
Xiao Guangrong wrote:
> Using bit 1 (PTE_LIST_WP_BIT) in rmap to store the write-protect status
> to avoid unnecessary shadow page walking
>
> Signed-off-by: Xiao Guangrong
> ---
> arch/x86/kvm/mmu.c | 40 ++--
> 1 files changed, 34 insertions(+), 6 deletions(-)
On Fri, 13 Apr 2012 18:12:41 +0800
Xiao Guangrong wrote:
> It is used to walk all the sptes of the specified pte_list; after
> this, the code of pte_list_walk can be removed
>
> And it can restart the walk automatically if the spte is zapped
Well, I want to ask two questions:
- why
On Fri, 13 Apr 2012 18:11:45 +0800
Xiao Guangrong wrote:
> +/* Return true if the spte is dropped. */
The return value does not correspond to the function name, so it is confusing.
People may think that true means write protection has been done.
> +static bool spte_write_protect(struct kvm *kvm,
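A minimal sketch of the point being raised, with invented names and simplified logic rather than the actual arch/x86/kvm/mmu.c code: the boolean reports that the spte was dropped (e.g. a large spte that must be zapped instead of write-protected), so a more specific name makes the contract explicit.

#include <stdint.h>
#include <stdbool.h>

/*
 * Hypothetical illustration only, not the real KVM function. The
 * return value answers "was the spte dropped?", not "was write
 * protection done?", so the name should say so.
 */
static bool spte_write_protect_check_dropped(uint64_t *sptep, bool is_large)
{
        if (is_large) {
                /* a large spte cannot simply lose W; it must be zapped */
                *sptep = 0;
                return true;            /* dropped */
        }

        *sptep &= ~(1ull << 1);         /* clear the W bit */
        return false;                   /* write-protected, not dropped */
}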
On Fri, 13 Apr 2012 18:10:45 +0800
Xiao Guangrong wrote:
> static u64 *rmap_get_next(struct rmap_iterator *iter)
> {
> + u64 *sptep = NULL;
> +
> if (iter->desc) {
> if (iter->pos < PTE_LIST_EXT - 1) {
> - u64 *sptep;
> -
> ++iter->pos;
On Fri, 13 Apr 2012 18:11:13 +0800
Xiao Guangrong wrote:
> The return value of __rmap_write_protect is either 1 or 0; use
> true/false instead
...
> @@ -1689,7 +1690,7 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
>
> kvm_mmu_pages_init(parent, &parents, &pages);
>
On Wed, Apr 11, 2012 at 06:49:55PM +0300, Michael S. Tsirkin wrote:
> Intel spec says that TMR needs to be set/cleared
> when IRR is set, but kvm also clears it on EOI.
>
> I did some tests on a real (AMD based) system,
> and I see same TMR values both before
> and after EOI, so I think it's a mi
Hi,
On Wed, 11 Apr 2012 11:11:07 +0800
Xiao Guangrong wrote:
> > restart:
> > - list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link)
> > - if (kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list))
> > - goto restart;
> > + zapped = 0;
> > + list_fo
On Fri, 13 Apr 2012 18:33:39 -0300
Marcelo Tosatti wrote:
> > kvm_arch_commit_memory_region(kvm, mem, old, user_alloc);
> >
> > - /*
> > -  * If the new memory slot is created, we need to clear all
> > -  * mmio sptes.
> > -  */
> > - if (npages && old.base_gfn != mem->guest_phys_
On Thu, Apr 12, 2012 at 08:24:57PM +0900, Kuniyasu Suzaki wrote:
>
> Dear,
>
> I made a presentation which measures OS security functions (ASLR,
> Memory Sanitization, and Cache Page Flushing) on memory deduplication
> "KSM with KVM" at EuroSec 2012.
>
> The title is "Effects of Memory Randomizat
On Thu, 12 Apr 2012 19:56:45 -0300
Marcelo Tosatti wrote:
> Other than potential performance improvement, the worst case scenario
> of holding mmu_lock for hundreds of milliseconds at the beginning
> of migration of huge guests must be fixed.
Write protection in kvm_arch_commit_memory_region()
From: David Gibson
On target-ppc, our table of CPU types and features encodes the features as
found on the hardware, regardless of whether these features are actually
usable under TCG or KVM. We already have cases where the information from
the cpu table must be fixed up to account for limitatio
On Tue, Apr 10, 2012 at 10:05:03PM +0900, Takuya Yoshikawa wrote:
> From: Takuya Yoshikawa
>
> We do not need to zap all shadow pages of the guest when we create or
> destroy a slot in this function.
>
> To change this, we make kvm_mmu_zap_all()/kvm_arch_flush_shadow()
> zap only those which hav
On Mon, Apr 09, 2012 at 06:39:58PM +0300, Avi Kivity wrote:
> This patchset implements MMX registers, SSE data alignment, and three
> instructions:
>
> MOVQ (MMX)
> MOVNTPS (SSE)
> MOVDQA (SSE)
>
> all used in accessing framebuffers from guests.
>
> Avi Kivity (4):
> KVM: x86 emulator: a
For guest-accessible SPRGs 4-7, save/restore must be handled differently
for the 64-bit and non-64-bit cases. The registers are maintained as
64-bit copies by KVM. While saving/restoring for the non-64-bit case we
should always take the lower 4 bytes.
Signed-off-by: Varun Sethi
---
arch/powerpc/kvm/
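A user-space sketch of the rule described above (illustrative only, not the actual arch/powerpc/kvm code): KVM keeps the register as a 64-bit copy, and the non-64-bit path must expose only the lower 4 bytes.

#include <stdint.h>
#include <stdio.h>

/* The guest-visible 32-bit view is just the low word of the 64-bit copy. */
static uint32_t sprg_read_32(uint64_t sprg64)
{
        return (uint32_t)sprg64;        /* take the lower 4 bytes */
}

int main(void)
{
        uint64_t sprg4 = 0x1122334455667788ull;    /* KVM's 64-bit copy */
        printf("32-bit view: 0x%08x\n", sprg_read_32(sprg4));  /* 0x55667788 */
        return 0;
}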
From: "Michael S. Tsirkin"
Date: Mon, 9 Apr 2012 13:24:02 +0300
> The skb struct ubuf_info callback gets passed struct ubuf_info
> itself, not the arg value as the field name and the function signature
> seem to imply. Rename the arg field to ctx to match usage,
> add documentation and change the
On 04/12/2012 08:32 PM, Marcelo Tosatti wrote:
The following changes since commit dadc1064c348545695b8a14d9dc72ccaa2983be7:
target-microblaze: added PetaLogix copyright (2012-04-12 09:56:51 +0200)
are available in the git repository at:
git://git.kernel.org/pub/scm/virt/kvm/qemu-kvm.git u
On Apr 11, 2012, at 3:27 PM, Scott Wood wrote:
> e6500 support (commit 10241842fbe900276634fee8d37ec48a7d8a762f,
> "powerpc: Add initial e6500 cpu support" and the introduction of
> CPU_FTR_EMB_HV (commit 73196cd364a2d972d73fa08da9d81ca3215bed68,
> "KVM: PPC: e500mc support") collided during mer
Xiao,
Takuya Yoshikawa wrote:
> > What did you really want to say that I missed?
>
> How to improve and what we should pay for that.
>
> Note that I am not objecting to O(1) itself.
>
I forgot to say one important thing -- I might give you wrong impression.
I am perfectly fine with your lock
On 2012-04-13 00:38, Marcelo Tosatti wrote:
>>> Have you checked that direct MSI injection does not make use of
>>> IRQ routing data structures, such as for acking?
>>
>> See kvm_set_msi: The routing structure is only read in the context of
>> that function, no reference is kept.
>
> I was think
On 04/11/2012 02:03 PM, Steven wrote:
> Hi, Guangrong,
> I read your very nice slides at LCJ 2011, "KVM MMU virtualization".
> However, I have some confusion about nested paging,
> which you gave a simplified example to illustrate in slide 11.
> The very first step is to use gCR3 as the input to t
On 04/13/2012 07:08 AM, Marcelo Tosatti wrote:
>> Yes, it is used as a cache for mmu_need_write_protect.
>>
>> When the gfn is protected by a sync sp or a read-only host page we set this bit,
>> and it is cleared when the sp becomes unsync or the host page becomes writable.
>
> Wouldn't dropping suppor
The P bit of the page fault error code is missing from this tracepoint;
fix it by passing the full error code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmutrace.h    |    7 +++
arch/x86/kvm/paging_tmpl.h |    3 +--
2 files changed, 4 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmutr
To see what happens on this path and help us optimize it
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |2 ++
arch/x86/kvm/mmutrace.h | 41 +
2 files changed, 43 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/
If the present bit of the page fault error code is set, it indicates that
the shadow pages are populated on all levels; it means all we need to do is
modify the access bits, which can be done outside of mmu-lock
Currently, in order to simplify the code, we only fix the page fault
caused by write-protect on the f
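A rough sketch of the lockless idea, assuming a simplified spte encoding (the real series operates on KVM's sptes with cmpxchg): since every level is already populated, only the access bits change, so an atomic compare-and-exchange can restore W for the dirty-log case without taking mmu_lock.

#include <stdint.h>
#include <stdbool.h>

static bool fast_pf_fix_spte(uint64_t *sptep, uint64_t old_spte)
{
        uint64_t new_spte = old_spte | (1ull << 1);    /* set the W bit */

        /* if the spte changed under us, fall back to the locked slow path */
        return __sync_bool_compare_and_swap(sptep, old_spte, new_spte);
}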
Make all sptes writable if the gfn becomes write-free, to reduce
later page faults
The idea is from Avi
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 34 +-
1 files changed, 33 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arc
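A sketch of the idea, with assumed bit positions (the real patch walks the rmap of the gfn inside KVM): once the gfn is write-free, restore W on every spte that is allowed to be writable, so those sptes never take a write fault again.

#include <stdint.h>
#include <stddef.h>

#define SPTE_W           (1ull << 1)
#define SPTE_ALLOW_WRITE (1ull << 61)   /* assumed free software bit */

static void make_all_sptes_writable(uint64_t *sptes[], size_t n)
{
        for (size_t i = 0; i < n; i++)
                if (*sptes[i] & SPTE_ALLOW_WRITE)
                        *sptes[i] |= SPTE_W;   /* avoid a later write fault */
}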
If this bit is set, it means the W bit of the spte was cleared due
to shadow page table protection
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 55 +++-
1 files changed, 37 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c
This bit indicates whether the spte is allowed to be writable, which
means the gpte of this spte is writable and the pfn pointed to by
this spte is writable on the host
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 13 ++---
1 files changed, 6 insertions(+), 7 deletions(-)
diff --git a/
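Taken together, the two software bits introduced by this patch and the previous one can be modeled as below (bit positions are assumptions for the example; the real code picks unused spte bits). A write fault is fast-fixable only when the spte is allowed to be writable and W was not cleared for shadow page protection.

#include <stdint.h>
#include <stdbool.h>

#define SPTE_W               (1ull << 1)
#define SPTE_ALLOW_WRITE     (1ull << 61)   /* gpte and host pfn writable */
#define SPTE_WRITE_PROTECTED (1ull << 62)   /* W cleared for sp protection */

static bool fast_fixable(uint64_t spte)
{
        return (spte & SPTE_ALLOW_WRITE) && !(spte & SPTE_WRITE_PROTECTED);
}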
Using bit 1 (PTE_LIST_WP_BIT) in rmap to store the write-protect status
to avoid unnecessary shadow page walking
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 40 ++--
1 files changed, 34 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
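A user-space model of the trick (assumed semantics, not the actual rmap code): because an rmap entry points to storage that is at least 4-byte aligned, the low bits of the pointer are free, and bit 1 can cache the write-protect status so no shadow page walk is needed to query it.

#include <stdint.h>
#include <stdio.h>

#define PTE_LIST_WP_BIT (1ul << 1)

static unsigned long rmap_set_wp(unsigned long rmap)
{
        return rmap | PTE_LIST_WP_BIT;
}

static int rmap_test_wp(unsigned long rmap)
{
        return !!(rmap & PTE_LIST_WP_BIT);
}

static void *rmap_real_ptr(unsigned long rmap)
{
        return (void *)(rmap & ~3ul);   /* mask off the status bits */
}

int main(void)
{
        uint64_t spte = 0;
        unsigned long rmap = (unsigned long)&spte;

        rmap = rmap_set_wp(rmap);
        printf("wp=%d ptr-intact=%d\n", rmap_test_wp(rmap),
               rmap_real_ptr(rmap) == (void *)&spte);
        return 0;
}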
With a direct mmu and no nesting, no page is write-protected by
shadow page table protection
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 53e92de..0c6e92d 100644
In the current code, only one bit (bit 0) is used in rmap; this patch
exports more bits from rmap. During spte add/remove, only bit 0 is
touched and the other bits are kept
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 138
1 files changed,
It is used to walk all the sptes of the specified pte_list; after
this, the code of pte_list_walk can be removed
And it can restart the walk automatically if the spte is zapped
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 233 +++---
a
Export the present bit of the page fault error code; a later patch
will use it
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/vmx.c |9 -
1 files changed, 8 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b/arch/x86/kvm/vmx.c
index 52f6856..2c98057 100644
--- a/arch/x86/
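For reference, the low page-fault error-code bits are architectural (Intel SDM), which is what makes the P bit worth forwarding; a small self-contained illustration:

#include <stdint.h>
#include <stdio.h>

#define PFERR_PRESENT_MASK (1u << 0)    /* P: fault on a present page */
#define PFERR_WRITE_MASK   (1u << 1)    /* W: fault caused by a write */
#define PFERR_USER_MASK    (1u << 2)    /* U: fault in user mode */

int main(void)
{
        uint32_t pfec = PFERR_PRESENT_MASK | PFERR_WRITE_MASK;
        printf("P=%u W=%u U=%u\n", !!(pfec & PFERR_PRESENT_MASK),
               !!(pfec & PFERR_WRITE_MASK), !!(pfec & PFERR_USER_MASK));
        return 0;
}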
Introduce a common function to abstract spte write-protect to
clean up the code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 60 ++-
1 files changed, 35 insertions(+), 25 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
The return value of __rmap_write_protect is either 1 or 0; use
true/false instead
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 13 +++--
1 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 91518b6..7589e56 100644
Testing only the present bit is not enough since mmio sptes also set
this bit; use is_shadow_present_pte() instead
Also move the BUG_ONs to the common function to clean up the code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 38 --
1 files change
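A simplified model of why the check matters (the mmio marker bit here is invented for the example; KVM uses its own mmio spte encoding): an mmio spte also has bit 0 set, so testing bit 0 alone misclassifies it as a normal present spte.

#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define SPTE_P        (1ull << 0)
#define SPTE_MMIO_TAG (1ull << 62)      /* invented marker for the example */

static bool is_mmio_spte(uint64_t spte)
{
        return spte & SPTE_MMIO_TAG;
}

static bool is_shadow_present_pte(uint64_t spte)
{
        return (spte & SPTE_P) && !is_mmio_spte(spte);
}

int main(void)
{
        uint64_t mmio = SPTE_P | SPTE_MMIO_TAG;
        printf("naive=%d correct=%d\n",
               (int)(mmio & SPTE_P), (int)is_shadow_present_pte(mmio));
        return 0;
}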
It is used to establish the spte if it is not present, to clean up the
code. It also marks the spte present before linking it to the sp's
parent_list; then we can integrate the code between rmap walking and
parent_list walking in a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c
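A sketch of the helper's shape as described above (hypothetical types and signature; the real link_shadow_page lives in arch/x86/kvm/mmu.c). The point is the ordering: the spte is made present first, then linked into the parent list, so a later patch can walk both lists with common code.

#include <stdint.h>

#define SPTE_PRESENT (1ull << 0)        /* assumed present bit */

struct parent_link {
        uint64_t *sptep;
        struct parent_link *next;
};

static void link_shadow_page(uint64_t *sptep, uint64_t sp_pa,
                             struct parent_link *link,
                             struct parent_link **parent_list)
{
        *sptep = sp_pa | SPTE_PRESENT;  /* mark present first ... */
        link->sptep = sptep;            /* ... then add to the parent list */
        link->next = *parent_list;
        *parent_list = link;
}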
Use link_shadow_page to link the sp to the spte in __direct_map
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 12
1 files changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 29ad6f9..151fcddc 100644
--- a/arch/x86/kvm/mmu.c
Thanks for Avi's and Marcelo's review; I have simplified the whole thing
in this version:
- it only fixes the page fault with PFEC.P = 1 && PFEC.W = 0, which means
the unlocked set_spte path can be dropped.
- it only fixes the page fault caused by dirty-log
In this version, all the information we need is