Show sp->mmu_valid_gen
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmutrace.h | 22 --
1 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86/kvm/mmutrace.h
index b8f6172..697f466 100644
--- a/arch/x86/kvm/mmutrace.h
++
Replace kvm_mmu_zap_all by kvm_mmu_invalidate_zap_all_pages
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 15 ---
arch/x86/kvm/x86.c |4 ++--
2 files changed, 2 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 688e755..c010ace
Zap at least 10 pages before releasing mmu-lock to reduce the overhead
caused by repeatedly acquiring the lock
After the patch, kvm_zap_obsolete_pages can make forward progress anyway,
so update the comments
[ It improves kernel building by 0.6% ~ 1% ]
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 35
e is patched. This is no longer the case and
| mutex_lock(&vcpu->kvm->lock); is gone from that code path a long time ago,
| so now kvm_mmu_zap_all() there is useless and the code is incorrect.
So we drop it and it will be fixed later
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm
It is the responsibility of kvm_mmu_zap_all to keep the mmu and
tlbs consistent. It is also unnecessary after zapping all mmio
sptes, since no mmio spte exists on a root shadow page and it
cannot be cached into the tlb
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |5 +
1 files
able according to current kvm's
generation-number. It ensures the old pages are not used any more.
Then the invalid-gen pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped by using a lock-break technique.
Gleb Natapov (1):
KVM: MMU: reduce KVM_REQ_MMU_RELOAD
flushes
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 46 +-
1 files changed, 41 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9b57faa..e676356 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm
The obsolete page will be zapped soon; do not reuse it, to
reduce future page faults
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3a3e6c5..9b57faa 100644
--- a/arch/x86
are zapped
Note: kvm_mmu_commit_zap_page is still needed before freeing
the pages, since other vcpus may be doing lockless shadow
page walking
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 32 ++--
1 files changed, 22 insertions(+), 10 deletions(-)
diff
d_remote_mmus()
| after incrementing mmu_valid_gen.
[ Xiao: add some comments and the check of is_obsolete_sp() ]
Signed-off-by: Gleb Natapov
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |8 +++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
It is good for debugging and development
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |1 +
arch/x86/kvm/mmutrace.h | 20
2 files changed, 21 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c010ace..3a3e6c5 100644
On 05/22/2013 09:17 PM, Gleb Natapov wrote:
> On Wed, May 22, 2013 at 05:41:10PM +0800, Xiao Guangrong wrote:
>> On 05/22/2013 04:54 PM, Gleb Natapov wrote:
>>> On Wed, May 22, 2013 at 04:46:04PM +0800, Xiao Guangrong wrote:
>>>> On 05/22/2013 02:34 PM, Gleb Natap
On 05/22/2013 04:54 PM, Gleb Natapov wrote:
> On Wed, May 22, 2013 at 04:46:04PM +0800, Xiao Guangrong wrote:
>> On 05/22/2013 02:34 PM, Gleb Natapov wrote:
>>> On Tue, May 21, 2013 at 10:33:30PM -0300, Marcelo Tosatti wrote:
>>>> On Tue, May 21, 2013 at 11:39:
On 05/22/2013 02:34 PM, Gleb Natapov wrote:
> On Tue, May 21, 2013 at 10:33:30PM -0300, Marcelo Tosatti wrote:
>> On Tue, May 21, 2013 at 11:39:03AM +0300, Gleb Natapov wrote:
Any pages with stale information will be zapped by kvm_mmu_zap_all().
When that happens, page faults will take pl
On 05/21/2013 04:40 AM, Marcelo Tosatti wrote:
> On Mon, May 20, 2013 at 11:15:45PM +0300, Gleb Natapov wrote:
>> On Mon, May 20, 2013 at 04:46:24PM -0300, Marcelo Tosatti wrote:
>>> On Fri, May 17, 2013 at 05:12:58AM +0800, Xiao Guangrong wrote:
>>>> The current
On 05/19/2013 06:47 PM, Gleb Natapov wrote:
> On Fri, May 17, 2013 at 05:12:57AM +0800, Xiao Guangrong wrote:
>> Move deletion shadow page from the hash list from kvm_mmu_commit_zap_page to
>> kvm_mmu_prepare_zap_page so that we can call kvm_mmu_commit_zap_page
>&
On 05/19/2013 06:04 PM, Gleb Natapov wrote:
>> +/*
>> + * Do not repeatedly zap a root page to avoid unnecessary
>> + * KVM_REQ_MMU_RELOAD, otherwise we may not be able to
>> + * progress:
>> + *vcpu 0vcpu 1
>>
Zap at least 10 pages before releasing mmu-lock to reduce the overhead
caused by repeatedly acquiring the lock
[ It improves kernel building by 0.6% ~ 1% ]
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 14 --
1 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm
It is good for debugging and development
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |1 +
arch/x86/kvm/mmutrace.h | 23 +++
2 files changed, 24 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 89b51dc..2c512e8 100644
Replace kvm_mmu_zap_all by kvm_mmu_invalidate_all_pages
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 15 ---
arch/x86/kvm/x86.c |6 +++---
2 files changed, 3 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7ad0e50..89b51dc
es (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped by using a lock-break technique
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 103 +++
arch/x86/kvm/mmu.h |1 +
3 fi
Move the deletion of shadow pages from the hash list from kvm_mmu_commit_zap_page to
kvm_mmu_prepare_zap_page, so that we can call kvm_mmu_commit_zap_page
once for multiple kvm_mmu_prepare_zap_page calls; this helps us avoid
unnecessary TLB flushes
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c
according to current kvm's
generation-number. It ensures the old pages are not used any more.
Then the invalid-gen pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped by using a lock-break technique.
Xiao Guangrong (7):
KVM: MMU: drop unnecessary kvm_reload_remote_mmus
KVM: M
On 05/17/2013 12:19 AM, Gleb Natapov wrote:
> On Thu, May 16, 2013 at 08:17:49PM +0800, Xiao Guangrong wrote:
>> Replace kvm_mmu_zap_all by kvm_mmu_invalidate_memslot_pages except on
>> the path of mmu_notifier->release() which will be fixed in
>> the later patch
>>
On 05/17/2013 12:18 AM, Gleb Natapov wrote:
>> +
>> +/*
>> + * Fast invalidate all shadow pages belong to @slot.
>> + *
>> + * @slot != NULL means the invalidation is caused the memslot specified
>> + * by @slot is being deleted, in this case, we should ensure that rmap
>> + * and lpage-info of th
On 05/16/2013 11:57 PM, Gleb Natapov wrote:
> One more thought. With current patch if zap_invalid_page() will be
> called second time while another zap_invalid_page() is still running
> (can that happen?) they will both run concurrently fighting for the
Currently, it cannot happen since zap_inva
On 05/16/2013 10:36 PM, Takuya Yoshikawa wrote:
> On Thu, 16 May 2013 20:17:45 +0800
> Xiao Guangrong wrote:
>
>> Bechmark result:
>> I have tested this patchset and the previous version that only zaps the
>> pages linked on invalid slot's rmap. The benchmark i
On 05/16/2013 08:45 PM, Paolo Bonzini wrote:
> On 16/05/2013 14:17, Xiao Guangrong wrote:
>> Zap at lease 10 pages before releasing mmu-lock to reduce the overload
>> caused by requiring lock
>>
>> [ It improves kernel building 0.6% ~ 1% ]
>>
>> Signed-
On 05/16/2013 08:43 PM, Gleb Natapov wrote:
> On Thu, May 16, 2013 at 08:17:48PM +0800, Xiao Guangrong wrote:
>> The current kvm_mmu_zap_all is really slow - it is holding mmu-lock to
>> walk and zap all shadow pages one by one, also it need to zap all guest
>> page's
simply increase the
global generation-number, then reload the root shadow pages on all vcpus.
Each vcpu will create a new shadow page table according to the current kvm
generation-number. It ensures the old pages are not used any more.
Then the invalid-gen pages (sp->mmu_valid_gen != kvm->arch
Attach the benchmark.
On 05/16/2013 08:17 PM, Xiao Guangrong wrote:
> Bechmark result:
> I have tested this patchset and the previous version that only zaps the
> pages linked on invalid slot's rmap. The benchmark is written by myself
> which has been attached, it writes large m
It is good for debugging and development
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |2 ++
arch/x86/kvm/mmutrace.h | 23 +++
2 files changed, 25 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 268b2ff..e12f431 100644
es (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped by using a lock-break technique
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 98 +++
arch/x86/kvm/mmu.h |2 +
3 fi
er mmu-notify handlers.)
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 11 ++-
1 files changed, 10 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9343fe..268b2ff 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4197,11 +4
Replace kvm_mmu_zap_all by kvm_mmu_invalidate_memslot_pages except on
the path of mmu_notifier->release(), which will be fixed in
a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.
Zap at least 10 pages before releasing mmu-lock to reduce the overhead
caused by repeatedly acquiring the lock
[ It improves kernel building by 0.6% ~ 1% ]
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 11 ---
1 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
CC kvm list.
On 05/09/2013 12:31 PM, David Ahern wrote:
> With the consolidation of the open counters code in December 2012
> (late to the party figuring that out) I think all of the past
> comments on the live mode for perf-kvm have been resolved.
Great work, David! I am playing it and glad to s
On 05/08/2013 05:43 PM, Robert Richter wrote:
> From: Robert Richter
>
> The tag of the perf version is wrongly determined, always the latest
> tag is taken regardless of the HEAD commit:
>
> $ perf --version
> perf version 3.9.rc8.gd7f5d3
> $ git describe d7f5d3
> v3.9-rc7-154-gd7f5d33
> $
On 05/07/2013 04:58 PM, Gleb Natapov wrote:
> On Tue, May 07, 2013 at 01:45:52AM +0800, Xiao Guangrong wrote:
>> On 05/07/2013 01:24 AM, Gleb Natapov wrote:
>>> On Mon, May 06, 2013 at 09:10:11PM +0800, Xiao Guangrong wrote:
>>>> On 05/06/2013 08:36 PM, Gleb Natapov w
On 05/07/2013 03:50 AM, Marcelo Tosatti wrote:
> On Mon, May 06, 2013 at 11:39:11AM +0800, Xiao Guangrong wrote:
>> On 05/04/2013 08:52 AM, Marcelo Tosatti wrote:
>>> On Sat, May 04, 2013 at 12:51:06AM +0800, Xiao Guangrong wrote:
>>>> On 05/03/2013 11:53 PM, Marce
On 05/07/2013 01:24 AM, Gleb Natapov wrote:
> On Mon, May 06, 2013 at 09:10:11PM +0800, Xiao Guangrong wrote:
>> On 05/06/2013 08:36 PM, Gleb Natapov wrote:
>>
>>>>> Step 1) Fix kvm_mmu_zap_all's behaviour: introduce lockbreak via
>>>>> spin
On 05/06/2013 08:36 PM, Gleb Natapov wrote:
>>> Step 1) Fix kvm_mmu_zap_all's behaviour: introduce lockbreak via
>>> spin_needbreak. Use generation numbers so that in case kvm_mmu_zap_all
>>> releases mmu_lock and reacquires it again, only shadow pages
>>> from the generation with which kvm_mmu_
oldest
version that has this commit is 3.0-stable.
Tested-by: Robin Holt
Cc:
Signed-off-by: Xiao Guangrong
---
Andrew, this patch has been tested by Robin and the test shows that the bug
of "NULL Pointer deref" has been fixed. However, there is still an argument about
whether the fix of "m
On 05/04/2013 08:52 AM, Marcelo Tosatti wrote:
> On Sat, May 04, 2013 at 12:51:06AM +0800, Xiao Guangrong wrote:
>> On 05/03/2013 11:53 PM, Marcelo Tosatti wrote:
>>> On Fri, May 03, 2013 at 01:52:07PM +0800, Xiao Guangrong wrote:
>>>> On 05/03/2013 0
On 05/03/2013 11:53 PM, Marcelo Tosatti wrote:
> On Fri, May 03, 2013 at 01:52:07PM +0800, Xiao Guangrong wrote:
>> On 05/03/2013 09:05 AM, Marcelo Tosatti wrote:
>>
>>>> +
>>>> +/*
>>>> + * Fast invalid all shadow pages belong to @slot.
>&g
On 05/01/2013 12:12 AM, Pavel Emelyanov wrote:
> +static inline void clear_soft_dirty(struct vm_area_struct *vma,
> + unsigned long addr, pte_t *pte)
> +{
> +#ifdef CONFIG_MEM_SOFT_DIRTY
> + /*
> + * The soft-dirty tracker uses #PF-s to catch writes
> + * to pages, so wri
On 05/03/2013 10:27 AM, Takuya Yoshikawa wrote:
> On Sat, 27 Apr 2013 11:13:20 +0800
> Xiao Guangrong wrote:
>
>> +/*
>> + * Fast invalid all shadow pages belong to @slot.
>> + *
>> + * @slot != NULL means the invalidation is caused the memslot specified
>>
On 05/03/2013 10:15 AM, Takuya Yoshikawa wrote:
> On Sat, 27 Apr 2013 11:13:19 +0800
> Xiao Guangrong wrote:
>
>> This function is used to reset the large page info of all guest pages
>> which will be used in later patch
>>
>> Signed-off-by: Xiao Guangrong
>
On 05/03/2013 10:10 AM, Takuya Yoshikawa wrote:
> On Sat, 27 Apr 2013 11:13:18 +0800
> Xiao Guangrong wrote:
>
>> It is used to set disallowed large page on the specified level, can be
>> used in later patch
>>
>> Signed-off-by: Xiao Guangrong
>
On 05/03/2013 09:05 AM, Marcelo Tosatti wrote:
>> +
>> +/*
>> + * Fast invalid all shadow pages belong to @slot.
>> + *
>> + * @slot != NULL means the invalidation is caused the memslot specified
>> + * by @slot is being deleted, in this case, we should ensure that rmap
>> + * and lpage-info of th
It is used to set the disallowed large page flag on the specified level;
it can be used in a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 53 ++-
1 files changed, 35 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch
e-info all memslots so
that rmap and lpage info can be safely freed.
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 77 ++-
arch/x86/kvm/mmu.h |2 +
3 files changed, 80 insertions(+),
er mmu-notify handlers.)
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 11 ++-
1 files changed, 10 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 63110c7..46d1d47 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4197,12 +4
This function is used to reset the large page info of all guest pages,
which will be used in a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 25 +
arch/x86/kvm/x86.h |2 ++
2 files changed, 27 insertions(+), 0 deletions(-)
diff --git a/arch/x86
Replace kvm_mmu_zap_all by kvm_mmu_invalid_all_pages except on
the path of mmu_notifier->release(), which will be fixed in
a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/a
ap, it is not very effective but good for the first step.
* TODO
Unmapping invalid rmaps out of mmu-lock in a clean way.
Xiao Guangrong (6):
KVM: MMU: drop unnecessary kvm_reload_remote_mmus
KVM: x86: introduce memslot_set_lpage_disallowed
KVM: MMU: introduce kvm_clear_all_lpage_info
KVM: M
On 04/24/2013 09:34 PM, Gleb Natapov wrote:
>> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
>> index 2adcbc2..6b4ba1e 100644
>> --- a/arch/x86/kvm/mmu.h
>> +++ b/arch/x86/kvm/mmu.h
>> @@ -52,6 +52,20 @@
>>
>> int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64
>> sptes[
On 04/24/2013 08:59 PM, Gleb Natapov wrote:
> On Mon, Apr 01, 2013 at 05:56:49PM +0800, Xiao Guangrong wrote:
>> Then it has chance to trigger mmio generation number wrap-around
>>
>> Signed-off-by: Xiao Guangrong
>> ---
>> arch/x86/include/asm/kvm_host.
On 04/23/2013 02:28 PM, Gleb Natapov wrote:
> On Tue, Apr 23, 2013 at 08:19:02AM +0800, Xiao Guangrong wrote:
>> On 04/22/2013 05:21 PM, Gleb Natapov wrote:
>>> On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
>>>> On 04/21/2013 09:03 PM, Gleb Natap
On 04/22/2013 05:21 PM, Gleb Natapov wrote:
> On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
>> On 04/21/2013 09:03 PM, Gleb Natapov wrote:
>>> On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
>>>> This patchset is based on my previous
On 04/21/2013 11:24 PM, Marcelo Tosatti wrote:
> On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
>> On 04/21/2013 09:03 PM, Gleb Natapov wrote:
>>> On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
>>>> This patchset is based on my previo
On 04/21/2013 09:03 PM, Gleb Natapov wrote:
> On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
>> This patchset is based on my previous two patchset:
>> [PATCH 0/2] KVM: x86: avoid potential soft lockup and unneeded mmu reload
>> (https://lkml.org/lkml/2013/4/1
On 04/21/2013 01:18 AM, Marcelo Tosatti wrote:
> On Thu, Apr 18, 2013 at 12:03:45PM +0800, Xiao Guangrong wrote:
>> On 04/18/2013 08:08 AM, Marcelo Tosatti wrote:
>>> On Tue, Apr 16, 2013 at 02:32:53PM +0800, Xiao Guangrong wrote:
>>>> Use kvm_mmu_invalid_all_pages in
On 04/18/2013 09:29 PM, Marcelo Tosatti wrote:
> On Thu, Apr 18, 2013 at 10:03:06AM -0300, Marcelo Tosatti wrote:
>> On Thu, Apr 18, 2013 at 12:00:16PM +0800, Xiao Guangrong wrote:
>>>>
>>>> What is the justification for this?
>>>
>>> We want the
On 04/18/2013 07:38 PM, Gleb Natapov wrote:
> On Thu, Apr 18, 2013 at 07:22:23PM +0800, Xiao Guangrong wrote:
>> On 04/18/2013 07:00 PM, Gleb Natapov wrote:
>>> On Tue, Apr 16, 2013 at 02:32:46PM +0800, Xiao Guangrong wrote:
>>>> pte_list_clear_concurrently all
On 04/18/2013 07:00 PM, Gleb Natapov wrote:
> On Tue, Apr 16, 2013 at 02:32:46PM +0800, Xiao Guangrong wrote:
>> pte_list_clear_concurrently allows us to reset pte-desc entry
>> out of mmu-lock. We can reset spte out of mmu-lock if we can protect the
>> lifecycle of sp, we us
On 04/18/2013 08:08 AM, Marcelo Tosatti wrote:
> On Tue, Apr 16, 2013 at 02:32:53PM +0800, Xiao Guangrong wrote:
>> Use kvm_mmu_invalid_all_pages in kvm_arch_flush_shadow_all and
>> rename kvm_zap_all to kvm_free_all which is used to free all
>> memmory used by kvm mmu when
On 04/18/2013 08:05 AM, Marcelo Tosatti wrote:
> On Tue, Apr 16, 2013 at 02:32:50PM +0800, Xiao Guangrong wrote:
>> The current kvm_mmu_zap_all is really slow - it is holding mmu-lock to
>> walk and zap all shadow pages one by one, also it need to zap all guest
>> page's
On 04/18/2013 07:38 AM, Marcelo Tosatti wrote:
> On Tue, Apr 16, 2013 at 02:32:45PM +0800, Xiao Guangrong wrote:
>> Invalid rmaps is the rmap of the invalid memslot which is being
>> deleted, especially, we can treat all rmaps are invalid when
>> kvm is being destroyed sinc
On 04/18/2013 02:45 AM, Robin Holt wrote:
>>> For the v3.10 release, we should work on making this more
>>> correct and completely documented.
>>
>> Better document is always welcomed.
>>
>> Double call ->release is not bad, like i mentioned it in the changelog:
>>
On 04/17/2013 10:10 PM, Robin Holt wrote:
> On Wed, Apr 17, 2013 at 10:55:26AM +0800, Xiao Guangrong wrote:
>> On 04/17/2013 02:08 AM, Robin Holt wrote:
>>> On Tue, Apr 16, 2013 at 09:07:20PM +0800, Xiao Guangrong wrote:
>>>> On 04/16/2013 07:43 PM, Robin Holt wro
On 04/17/2013 02:08 AM, Robin Holt wrote:
> On Tue, Apr 16, 2013 at 09:07:20PM +0800, Xiao Guangrong wrote:
>> On 04/16/2013 07:43 PM, Robin Holt wrote:
>>> Argh. Taking a step back helped clear my head.
>>>
>>> For the -stable releases, I agree we shou
On 04/16/2013 07:43 PM, Robin Holt wrote:
> Argh. Taking a step back helped clear my head.
>
> For the -stable releases, I agree we should just go with your
> revert-plus-hlist_del_init_rcu patch. I will give it a test
> when I am in the office.
Okay. Wait for your test report. Thank you in adv
On 04/16/2013 05:31 PM, Robin Holt wrote:
> On Tue, Apr 16, 2013 at 02:39:49PM +0800, Xiao Guangrong wrote:
>> The commit 751efd8610d3 (mmu_notifier_unregister NULL Pointer deref
>> and multiple ->release()) breaks the fix:
>> 3ad3d901bbcfb15a5e4690e55350db0899095a68
nce all the pages have already been released by the first call.
Signed-off-by: Xiao Guangrong
---
mm/mmu_notifier.c | 81 +++--
1 files changed, 41 insertions(+), 40 deletions(-)
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index be
Make kvm not reuse the rmap of a memslot which is being moved;
then the rmap of a moved or deleted memslot can only be unmapped, and
no new spte can be added to it.
This makes it possible for us to unmap the rmap out of mmu-lock in the later patches
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |2
-lock to unmap invalid rmap
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 80
1 files changed, 80 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 850eab5..2a7a5d0 100644
--- a/arch/x86/kvm/
It frees the pte-list-descs used by the memslot rmap after the
memslot update is completed
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 26 ++
arch/x86/kvm/mmu.h |1 +
2 files changed, 27 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 62 +++
1 files changed, 57 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 99ad2a4..850eab5 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
This function is used to reset the large page info of all guest pages,
which will be used in a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 25 +
arch/x86/kvm/x86.h |2 ++
2 files changed, 27 insertions(+), 0 deletions(-)
diff --git a/arch/x86
goto retry
(the wait is very rare and clearing one rmap is very fast, so it
is not bad even if a wait is needed)
Then, we can be sure the spte is always available when we do
unmap_memslot_rmap_nolock
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/
Replace kvm_mmu_zap_all by kvm_mmu_invalid_all_pages except on
the path of mmu_notifier->release(), which will be replaced in
a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/a
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |4
virt/kvm/kvm_main.c |3 ---
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6e7c85b..d3dd0d5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7056,7 +7056,11
e-info all memslots so
that rmap and lpage info can be safely freed.
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 85 +-
arch/x86/kvm/mmu.h |4 ++
arch/x86/kvm/x86.c
-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c | 15 ++-
arch/x86/kvm/x86.c |9 -
3 files changed, 7 insertions(+), 19 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm
It is used to set the disallowed large page flag on the specified level;
it can be used in a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 53 ++-
1 files changed, 35 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch
Introduce slot_rmap_* functions to abstract memslot rmap related
operations, which makes the later patches clearer
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 108 +-
arch/x86/kvm/mmu_audit.c | 10 +++--
2 files changed, 84
The memslot rmap and lpage-info are never partly reused, and nothing needs
to be freed when a new memslot is created
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 21 -
1 files changed, 12 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
nk them
together by desc->more.
* Performance
We observably reduce the contention of mmu-lock and make the invalidation
be preemptable.
Xiao Guangrong (15):
KVM: x86: clean up and optimize for kvm_arch_free_memslot
KVM: fold kvm_arch_create_memslot into kvm_arch_prepare_memory_region
KVM: x86
It removes an arch-specific interface and also removes unnecessary
empty functions on some architectures
Signed-off-by: Xiao Guangrong
---
arch/arm/kvm/arm.c |5 -
arch/ia64/kvm/kvm-ia64.c |5 -
arch/powerpc/kvm/powerpc.c |8 ++--
arch/s390/kvm/kvm-s390.c
Introduce rmap_operations to allow the rmap to have different operations;
then we are able to handle invalid rmaps specially
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 31 ---
arch/x86/kvm/mmu.h
Hi Marcelo,
On 04/16/2013 08:54 AM, Marcelo Tosatti wrote:
> On Mon, Apr 01, 2013 at 05:56:43PM +0800, Xiao Guangrong wrote:
>> Changelog in v2:
>> - rename kvm_mmu_invalid_mmio_spte to kvm_mmu_invalid_mmio_sptes
>> - use kvm->memslots->generation as kvm global g
ual the global generation-number,
it will go to the normal #PF handler to update the mmio spte
Since 19 bits are used to store generation-number on mmio spte, we zap all
mmio sptes when the number wraps around
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/
walks the shadow page table and get the mmio spte. If the
generation-number on the spte does not equal the global generation-number,
it will go to the normal #PF handler to update the mmio spte
Since 19 bits are used to store generation-number on mmio spte, we zap all
mmio sptes when the number is ro