Zap at least 10 pages before releasing mmu-lock to reduce the overhead
caused by repeatedly acquiring the lock.
After the patch, kvm_zap_obsolete_pages can make forward progress anyway,
so update the comments.
[ It improves kernel building by 0.6% ~ 1% ]
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 35
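The batching described above boils down to a loop of the following shape.
This is an illustrative sketch, not the committed patch: the helpers
kvm_mmu_prepare_zap_page(), kvm_mmu_commit_zap_page() and is_obsolete_sp()
are the names quoted elsewhere in this thread, and the list layout is an
assumption.

#define BATCH_ZAP_PAGES	10

/*
 * Sketch only. Assumes mmu-lock is held and that obsolete pages sit on
 * kvm->arch.active_mmu_pages.
 */
static void zap_obsolete_pages_sketch(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);
	int batch = 0;

restart:
	list_for_each_entry_safe_reverse(sp, node,
	      &kvm->arch.active_mmu_pages, link) {
		if (!is_obsolete_sp(kvm, sp))
			continue;

		/*
		 * Zap at least BATCH_ZAP_PAGES pages before considering
		 * a lock break, so the cost of dropping and re-taking
		 * mmu-lock is amortized over many pages.
		 */
		if (batch >= BATCH_ZAP_PAGES &&
		      cond_resched_lock(&kvm->mmu_lock)) {
			batch = 0;
			goto restart;
		}

		batch += kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
	}

	kvm_mmu_commit_zap_page(kvm, &invalid_list);
}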
no longer the case and
| mutex_lock(&vcpu->kvm->lock); is gone from that code path a long time ago,
| so kvm_mmu_zap_all() there is now useless and the code is incorrect.
So we drop it; it will be fixed by a later patch.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |7 ---
1 files cha
It is the responsibility of kvm_mmu_zap_all to keep the mmu and tlbs
consistent. It is also unnecessary after zapping all mmio sptes, since no
mmio spte exists on a root shadow page and it cannot be cached into the tlb.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |5 +
1 files
It ensures the old pages are not used any more.
Then the invalid-gen pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped by using the lock-break technique.
Gleb Natapov (1):
KVM: MMU: reduce KVM_REQ_MMU_RELOAD when root page is zapped
Xiao Guangrong (10):
KVM: x86: drop calling kvm_mmu_zap_all in emulator_fix_hypercall
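Assembled from the cover letter's description, the whole invalidation path
presumably has this shape. A sketch under the series' own terms: only
mmu_valid_gen, kvm_reload_remote_mmus() and the lock-break zap come from
the text; the function name and body are illustrative.

static void invalidate_all_pages_sketch(struct kvm *kvm)
{
	spin_lock(&kvm->mmu_lock);

	/*
	 * Step 1: a single increment makes every existing shadow page
	 * obsolete, since each page records the generation it was
	 * created in (sp->mmu_valid_gen).
	 */
	kvm->arch.mmu_valid_gen++;

	/*
	 * Step 2: force every vcpu to reload its root, so new faults
	 * build fresh pages tagged with the new generation and the old
	 * pages are not used any more.
	 */
	kvm_reload_remote_mmus(kvm);

	/*
	 * Step 3: reclaim the old-generation pages with lock breaks,
	 * as in the batched loop sketched earlier.
	 */
	zap_obsolete_pages_sketch(kvm);

	spin_unlock(&kvm->mmu_lock);
}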
flushes
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 46 +-
1 files changed, 41 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 9b57faa..e676356 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm
The obsolete page will be zapped soon, so do not reuse it; reusing it would
only cause extra page faults when it is zapped.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |3 +++
1 files changed, 3 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 3a3e6c5..9b57faa 100644
--- a/arch/x86
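The change presumably amounts to skipping obsolete pages in the reuse path
of kvm_mmu_get_page(). A sketch, assuming for_each_gfn_sp() is the existing
hash-walk helper and is_obsolete_sp() the generation check quoted in this
series; the wrapper function is hypothetical.

static struct kvm_mmu_page *find_reusable_sp_sketch(struct kvm_vcpu *vcpu,
						    gfn_t gfn)
{
	struct kvm_mmu_page *sp;

	for_each_gfn_sp(vcpu->kvm, sp, gfn) {
		/* About to be zapped: do not hand it out again. */
		if (is_obsolete_sp(vcpu->kvm, sp))
			continue;

		return sp;
	}

	return NULL;
}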
are zapped
Note: kvm_mmu_commit_zap_page is still needed before freeing the pages,
since other vcpus may be doing lockless shadow page walking.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 32 ++--
1 files changed, 22 insertions(+), 10 deletions(-)
diff
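The note above describes a two-phase zap. As a sketch (not the committed
code; the two helpers are the ones named in the snippet, and mmu-lock is
assumed held):

static void zap_one_page_sketch(struct kvm *kvm, struct kvm_mmu_page *sp)
{
	LIST_HEAD(invalid_list);

	/* Phase 1: unlink the page and queue it; nothing is freed yet. */
	kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);

	/*
	 * Phase 2: flush remote TLBs first, which also waits until no
	 * vcpu can still be walking the queued pages locklessly, and
	 * only then free them.
	 */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
}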
kvm_reload_remote_mmus()
| after incrementing mmu_valid_gen.
[ Xiao: add some comments and the check of is_obsolete_sp() ]
Signed-off-by: Gleb Natapov
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |8 +++-
1 files changed, 7 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/
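Given the condition quoted in the cover letter, the is_obsolete_sp() check
added here is presumably just a generation comparison:

static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
{
	return unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
}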
It is useful for debugging and development.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |1 +
arch/x86/kvm/mmutrace.h | 20
2 files changed, 21 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c010ace..3a3e6c5 100644
On 05/22/2013 09:17 PM, Gleb Natapov wrote:
> On Wed, May 22, 2013 at 05:41:10PM +0800, Xiao Guangrong wrote:
>> On 05/22/2013 04:54 PM, Gleb Natapov wrote:
>>> On Wed, May 22, 2013 at 04:46:04PM +0800, Xiao Guangrong wrote:
>>>> On 05/22/2013 02:34 PM, Gleb Natap
On 05/22/2013 04:54 PM, Gleb Natapov wrote:
> On Wed, May 22, 2013 at 04:46:04PM +0800, Xiao Guangrong wrote:
>> On 05/22/2013 02:34 PM, Gleb Natapov wrote:
>>> On Tue, May 21, 2013 at 10:33:30PM -0300, Marcelo Tosatti wrote:
>>>> On Tue, May 21, 2013 at 11:39:
On 05/22/2013 02:34 PM, Gleb Natapov wrote:
> On Tue, May 21, 2013 at 10:33:30PM -0300, Marcelo Tosatti wrote:
>> On Tue, May 21, 2013 at 11:39:03AM +0300, Gleb Natapov wrote:
Any pages with stale information will be zapped by kvm_mmu_zap_all().
When that happens, page faults will take
Show sp->mmu_valid_gen
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmutrace.h | 22 --
1 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86/kvm/mmutrace.h
index b8f6172..697f466 100644
Replace kvm_mmu_zap_all by kvm_mmu_invalidate_zap_all_pages
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 15 ---
arch/x86/kvm/x86.c |4 ++--
2 files changed, 2 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86
(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped by using the lock-break technique
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 84 +++
arch/x86/kvm/mmu.h |1 +
3 files
On 05/21/2013 04:40 AM, Marcelo Tosatti wrote:
> On Mon, May 20, 2013 at 11:15:45PM +0300, Gleb Natapov wrote:
>> On Mon, May 20, 2013 at 04:46:24PM -0300, Marcelo Tosatti wrote:
>>> On Fri, May 17, 2013 at 05:12:58AM +0800, Xiao Guangrong wrote:
>>>> The current
On 05/19/2013 06:47 PM, Gleb Natapov wrote:
> On Fri, May 17, 2013 at 05:12:57AM +0800, Xiao Guangrong wrote:
>> Move the deletion of a shadow page from the hash list from kvm_mmu_commit_zap_page to
>> kvm_mmu_prepare_zap_page, so that we can call kvm_mmu_commit_zap_page
>&
On 05/19/2013 06:04 PM, Gleb Natapov wrote:
>> +/*
>> + * Do not repeatedly zap a root page to avoid unnecessary
>> + * KVM_REQ_MMU_RELOAD, otherwise we may not be able to
>> + * progress:
> + *    vcpu 0                        vcpu 1
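The guard under discussion can be sketched as follows (hypothetical helper;
per the quoted comment, the real logic lives in kvm_mmu_prepare_zap_page()):

static void invalidate_root_sketch(struct kvm *kvm, struct kvm_mmu_page *sp)
{
	/*
	 * A root page stays alive until its last reference is dropped,
	 * so request a remote mmu reload only on its first
	 * invalidation; repeated zap attempts must not keep kicking
	 * vcpus with KVM_REQ_MMU_RELOAD.
	 */
	if (sp->root_count && !sp->role.invalid)
		kvm_reload_remote_mmus(kvm);

	sp->role.invalid = 1;
}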
It is the responsibility of kvm_mmu_zap_all to keep the mmu and tlbs
consistent. It is also unnecessary after zapping all mmio sptes, since no
mmio spte exists on a root shadow page and it cannot be cached into the tlb.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |5 +
1 files
Zap at least 10 pages before releasing mmu-lock to reduce the overhead
caused by repeatedly acquiring the lock.
[ It improves kernel building by 0.6% ~ 1% ]
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 14 --
1 files changed, 12 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm
It is useful for debugging and development.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |1 +
arch/x86/kvm/mmutrace.h | 23 +++
2 files changed, 24 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 89b51dc..2c512e8 100644
Show sp->mmu_valid_gen
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmutrace.h | 22 --
1 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86/kvm/mmutrace.h
index b8f6172..697f466 100644
--- a/arch/x86/kvm/mmutrace.h
++
Replace kvm_mmu_zap_all by kvm_mmu_invalidate_all_pages
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 15 ---
arch/x86/kvm/x86.c |6 +++---
2 files changed, 3 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7ad0e50..89b51dc
(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped by using the lock-break technique
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 103 +++
arch/x86/kvm/mmu.h |1 +
3 files changed, 10
Move the deletion of a shadow page from the hash list from
kvm_mmu_commit_zap_page to kvm_mmu_prepare_zap_page, so that we can call
kvm_mmu_commit_zap_page once for multiple kvm_mmu_prepare_zap_page calls;
this helps us avoid unnecessary TLB flushes.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c
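With the hash-list deletion moved into the prepare phase, many pages can be
queued and a single commit, hence a single remote TLB flush, covers all of
them. A sketch, assuming mmu-lock is held (the driver function is
hypothetical; the two helpers are named in the snippet above):

static void zap_many_sketch(struct kvm *kvm)
{
	struct kvm_mmu_page *sp, *node;
	LIST_HEAD(invalid_list);

	/* Queue any number of pages under one mmu-lock hold... */
	list_for_each_entry_safe(sp, node, &kvm->arch.active_mmu_pages, link)
		kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);

	/* ...and pay for one remote TLB flush when committing. */
	kvm_mmu_commit_zap_page(kvm, &invalid_list);
}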
It ensures the old pages are not used any more.
Then the invalid-gen pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped by using the lock-break technique.
Xiao Guangrong (7):
KVM: MMU: drop unnecessary kvm_reload_remote_mmus
KVM: MMU: delete shadow page from hash list in
kvm_mmu_prepare_zap_page
On 05/17/2013 12:19 AM, Gleb Natapov wrote:
> On Thu, May 16, 2013 at 08:17:49PM +0800, Xiao Guangrong wrote:
>> Replace kvm_mmu_zap_all by kvm_mmu_invalidate_memslot_pages, except on
>> the path of mmu_notifier->release(), which will be fixed in
>> a later patch
>>
> Why ->release() cannot use
On 05/17/2013 12:18 AM, Gleb Natapov wrote:
>> +
>> +/*
>> + * Fast invalidate all shadow pages belong to @slot.
>> + *
>> + * @slot != NULL means the invalidation is caused the memslot specified
>> + * by @slot is being deleted, in this case, we should ensure that rmap
>> + * and lpage-info of
On 05/16/2013 11:57 PM, Gleb Natapov wrote:
> One more thought. With current patch if zap_invalid_page() will be
> called second time while another zap_invalid_page() is still running
> (can that happen?) they will both run concurrently fighting for the
Currently, it cannot happen since
On 05/16/2013 10:36 PM, Takuya Yoshikawa wrote:
> On Thu, 16 May 2013 20:17:45 +0800
> Xiao Guangrong wrote:
>
>> Benchmark result:
>> I have tested this patchset and the previous version that only zaps the
>> pages linked on invalid slot's rmap. The benchmark is wri
On 05/16/2013 08:45 PM, Paolo Bonzini wrote:
> Il 16/05/2013 14:17, Xiao Guangrong ha scritto:
>> Zap at least 10 pages before releasing mmu-lock to reduce the overhead
>> caused by repeatedly acquiring the lock
>>
>> [ It improves kernel building by 0.6% ~ 1% ]
>>
>> Signed-
On 05/16/2013 08:43 PM, Gleb Natapov wrote:
> On Thu, May 16, 2013 at 08:17:48PM +0800, Xiao Guangrong wrote:
>> The current kvm_mmu_zap_all is really slow - it is holding mmu-lock to
>> walk and zap all shadow pages one by one, and it also needs to zap all guest
>> page's rmap
then reload root shadow pages on all vcpus.
Vcpu will create a new shadow page table according to current kvm's
generation-number. It ensures the old pages are not used any more.
Then the invalid-gen pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped by using the lock-break technique.
Attach the benchmark.
On 05/16/2013 08:17 PM, Xiao Guangrong wrote:
> Benchmark result:
> I have tested this patchset and the previous version that only zaps the
> pages linked on invalid slot's rmap. The benchmark is written by myself
> which has been attached, it writes large memory
It is useful for debugging and development.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |2 ++
arch/x86/kvm/mmutrace.h | 23 +++
2 files changed, 25 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 268b2ff..e12f431 100644
(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped by using the lock-break technique
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 98 +++
arch/x86/kvm/mmu.h |2 +
3 files changed, 10
er mmu-notify handlers.)
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 11 ++-
1 files changed, 10 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9343fe..268b2ff 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4197,11 +4
Replace kvm_mmu_zap_all by kvm_mmu_invalidate_memslot_pages, except on
the path of mmu_notifier->release(), which will be fixed in
a later patch.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.
It is the responsibility of kvm_mmu_zap_all to keep the mmu and tlbs
consistent. It is also unnecessary after zapping all mmio sptes, since no
mmio spte exists on a root shadow page and it cannot be cached into the tlb.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |5 +
1 files
Zap at least 10 pages before releasing mmu-lock to reduce the overhead
caused by repeatedly acquiring the lock.
[ It improves kernel building by 0.6% ~ 1% ]
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 11 ---
1 files changed, 8 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
Show sp->mmu_valid_gen
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmutrace.h | 22 --
1 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86/kvm/mmutrace.h
index b8f6172..697f466 100644
--- a/arch/x86/kvm/mmutrace.h
++
Move the deletion of a shadow page from the hash list from
kvm_mmu_commit_zap_page to kvm_mmu_prepare_zap_page, so that we can call
kvm_mmu_commit_zap_page once for multiple kvm_mmu_prepare_zap_page calls;
this helps us avoid unnecessary TLB flushes.
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c
CC kvm list.
On 05/09/2013 12:31 PM, David Ahern wrote:
> With the consolidation of the open counters code in December 2012
> (late to the party figuring that out) I think all of the past
> comments on the live mode for perf-kvm have been resolved.
Great work, David! I am playing it and glad to
On 05/08/2013 05:43 PM, Robert Richter wrote:
> From: Robert Richter
>
> The tag of the perf version is wrongly determined: the latest
> tag is always taken regardless of the HEAD commit:
>
> $ perf --version
> perf version 3.9.rc8.gd7f5d3
> $ git describe d7f5d3
> v3.9-rc7-154-gd7f5d33
>
On 05/07/2013 04:58 PM, Gleb Natapov wrote:
> On Tue, May 07, 2013 at 01:45:52AM +0800, Xiao Guangrong wrote:
>> On 05/07/2013 01:24 AM, Gleb Natapov wrote:
>>> On Mon, May 06, 2013 at 09:10:11PM +0800, Xiao Guangrong wrote:
>>>> On 05/06/2013 08:36 PM, Gleb Natapov w
On 05/07/2013 03:50 AM, Marcelo Tosatti wrote:
> On Mon, May 06, 2013 at 11:39:11AM +0800, Xiao Guangrong wrote:
>> On 05/04/2013 08:52 AM, Marcelo Tosatti wrote:
>>> On Sat, May 04, 2013 at 12:51:06AM +0800, Xiao Guangrong wrote:
>>>> On 05/03/2013 11:53 PM, Marce
On 05/07/2013 01:24 AM, Gleb Natapov wrote:
> On Mon, May 06, 2013 at 09:10:11PM +0800, Xiao Guangrong wrote:
>> On 05/06/2013 08:36 PM, Gleb Natapov wrote:
>>
>>>>> Step 1) Fix kvm_mmu_zap_all's behaviour: introduce lockbreak via
>>>>> spin_need
On 05/06/2013 08:36 PM, Gleb Natapov wrote:
>>> Step 1) Fix kvm_mmu_zap_all's behaviour: introduce lockbreak via
>>> spin_needbreak. Use generation numbers so that in case kvm_mmu_zap_all
>>> releases mmu_lock and reacquires it again, only shadow pages
>>> from the generation with which
t
version has this commit is 3.0-stable.
Tested-by: Robin Holt
Cc:
Signed-off-by: Xiao Guangrong
---
Andrew, this patch has been tested by Robin and the test shows that the bug
of "NULL Pointer deref" bas been fixed. However, we have the argument that
whether the fix of &q
On 05/04/2013 08:52 AM, Marcelo Tosatti wrote:
> On Sat, May 04, 2013 at 12:51:06AM +0800, Xiao Guangrong wrote:
>> On 05/03/2013 11:53 PM, Marcelo Tosatti wrote:
>>> On Fri, May 03, 2013 at 01:52:07PM +0800, Xiao Guangrong wrote:
>>>> On 05/03/2013 0
On 05/03/2013 11:53 PM, Marcelo Tosatti wrote:
> On Fri, May 03, 2013 at 01:52:07PM +0800, Xiao Guangrong wrote:
>> On 05/03/2013 09:05 AM, Marcelo Tosatti wrote:
>>
>>>> +
>>>> +/*
>>>> + * Fast invalid all shadow pages belong to @slot.
>
On 05/01/2013 12:12 AM, Pavel Emelyanov wrote:
> +static inline void clear_soft_dirty(struct vm_area_struct *vma,
> + unsigned long addr, pte_t *pte)
> +{
> +#ifdef CONFIG_MEM_SOFT_DIRTY
> + /*
> + * The soft-dirty tracker uses #PF-s to catch writes
> + * to pages, so
On 05/03/2013 10:27 AM, Takuya Yoshikawa wrote:
> On Sat, 27 Apr 2013 11:13:20 +0800
> Xiao Guangrong wrote:
>
>> +/*
>> + * Fast invalid all shadow pages belong to @slot.
>> + *
>> + * @slot != NULL means the invalidation is caused the memslot specified