Zap at least 10 pages before releasing mmu-lock to reduce the overhead
caused by re-acquiring the lock
[ It improves kernel building 0.6% ~ 1% ]
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 11 ++++++++---
1 files changed, 8 insertions(+), 3 deletions(-)
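The shape of the change is roughly the following (a minimal sketch based on the description above, not the actual hunk; the constant BATCH_ZAP_PAGES and the helpers is_obsolete_sp()/kvm_zap_obsolete_pages() are assumed names):

#define BATCH_ZAP_PAGES 10

static void kvm_zap_obsolete_pages(struct kvm *kvm)
{
        struct kvm_mmu_page *sp, *node;
        LIST_HEAD(invalid_list);
        int batch = 0;

restart:
        list_for_each_entry_safe_reverse(sp, node,
                        &kvm->arch.active_mmu_pages, link) {
                if (!is_obsolete_sp(kvm, sp))
                        continue;

                /*
                 * Only consider releasing mmu-lock after at least
                 * BATCH_ZAP_PAGES pages have been zapped, so that a
                 * contending vcpu cannot force the lock to be retaken
                 * after every single page.
                 */
                if (batch >= BATCH_ZAP_PAGES &&
                    (need_resched() || spin_needbreak(&kvm->mmu_lock))) {
                        batch = 0;
                        kvm_mmu_commit_zap_page(kvm, &invalid_list);
                        cond_resched_lock(&kvm->mmu_lock);
                        goto restart;
                }

                batch += kvm_mmu_prepare_zap_page(kvm, sp, &invalid_list);
        }

        kvm_mmu_commit_zap_page(kvm, &invalid_list);
}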
Show sp->mmu_valid_gen
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmutrace.h | 22 ++++++++++++----------
1 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86/kvm/mmutrace.h
index b8f6172..697f466 100644
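The idea, sketched against the KVM_MMU_PAGE_* helper macros that mmutrace.h already uses (field order and the exact macro bodies here are illustrative, not the real hunk):

#define KVM_MMU_PAGE_FIELDS                        \
        __field(unsigned long, mmu_valid_gen)      \
        __field(__u64, gfn)                        \
        __field(__u32, role)                       \
        __field(__u32, root_count)                 \
        __field(bool, unsync)

#define KVM_MMU_PAGE_ASSIGN(sp)                            \
        __entry->mmu_valid_gen = sp->mmu_valid_gen;        \
        __entry->gfn = sp->gfn;                            \
        __entry->role = sp->role.word;                     \
        __entry->root_count = sp->root_count;              \
        __entry->unsync = sp->unsync;

With the extra field, every existing kvm_mmu_page tracepoint prints the page's generation-number, which is what makes the invalid-gen zapping visible for debugging.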
Attach the benchmark.
On 05/16/2013 08:17 PM, Xiao Guangrong wrote:
Benchmark result:
I have tested this patchset and the previous version that only zaps the
pages linked on the invalid slot's rmap. The benchmark, which is attached,
was written by myself; it writes large memory when doing pci rom
root shadow pages on all vcpus.
Vcpu will create a new shadow page table according to kvm's current
generation-number; this ensures the old pages are not used any more.
Then the invalid-gen pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped using the lock-break technique.
Xiao Guangrong (8
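The core of the scheme can be sketched as follows (a sketch assembled from the cover letter, not the patch itself; is_obsolete_sp() and the body of kvm_mmu_invalidate_all_pages() are assumptions):

static bool is_obsolete_sp(struct kvm *kvm, struct kvm_mmu_page *sp)
{
        /* pages created before the last generation bump are stale */
        return unlikely(sp->mmu_valid_gen != kvm->arch.mmu_valid_gen);
}

static void kvm_mmu_invalidate_all_pages(struct kvm *kvm)
{
        spin_lock(&kvm->mmu_lock);
        kvm->arch.mmu_valid_gen++;

        /*
         * Force every vcpu to reload: each one rebuilds its root
         * shadow page with the new generation-number, so the old
         * pages are guaranteed not to be used any more.
         */
        kvm_reload_remote_mmus(kvm);

        /* the obsolete pages can now be zapped with lock-breaking */
        kvm_zap_obsolete_pages(kvm);
        spin_unlock(&kvm->mmu_lock);
}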
On 05/16/2013 08:43 PM, Gleb Natapov wrote:
On Thu, May 16, 2013 at 08:17:48PM +0800, Xiao Guangrong wrote:
The current kvm_mmu_zap_all is really slow - it holds mmu-lock to
walk and zap all shadow pages one by one, and it also needs to zap all guest
pages' rmaps and all shadow pages' parent
On 05/16/2013 08:45 PM, Paolo Bonzini wrote:
Il 16/05/2013 14:17, Xiao Guangrong ha scritto:
Zap at least 10 pages before releasing mmu-lock to reduce the overhead
caused by re-acquiring the lock
[ It improves kernel building 0.6% ~ 1% ]
Signed-off-by: Xiao Guangrong xiaoguangr
On 05/16/2013 10:36 PM, Takuya Yoshikawa wrote:
On Thu, 16 May 2013 20:17:45 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
Benchmark result:
I have tested this patchset and the previous version that only zaps the
pages linked on the invalid slot's rmap. The benchmark is written
On 05/16/2013 11:57 PM, Gleb Natapov wrote:
One more thought. With the current patch, if zap_invalid_page() is
called a second time while another zap_invalid_page() is still running
(can that happen?), they will both run concurrently, fighting for the
Currently, it cannot happen since
On 05/17/2013 12:18 AM, Gleb Natapov wrote:
+
+/*
+ * Fast invalidate all shadow pages belonging to @slot.
+ *
+ * @slot != NULL means the invalidation is caused by the memslot specified
+ * by @slot being deleted; in this case, we should ensure that the rmap
+ * and lpage-info of the @slot can
On 05/17/2013 12:19 AM, Gleb Natapov wrote:
On Thu, May 16, 2013 at 08:17:49PM +0800, Xiao Guangrong wrote:
Replace kvm_mmu_zap_all by kvm_mmu_invalidate_memslot_pages except on
the path of mmu_notifier->release(), which will be fixed in
a later patch
Why ->release() cannot use
are not used any more.
Then the invalid-gen pages (sp->mmu_valid_gen != kvm->arch.mmu_valid_gen)
are zapped using the lock-break technique.
Xiao Guangrong (7):
KVM: MMU: drop unnecessary kvm_reload_remote_mmus
KVM: MMU: delete shadow page from hash list in
kvm_mmu_prepare_zap_page
KVM: MMU
->arch.mmu_valid_gen)
are zapped using the lock-break technique
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 103 +++
arch/x86/kvm/mmu.h |1 +
3
Replace kvm_mmu_zap_all by kvm_mmu_invalidate_all_pages
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 15 ---------------
arch/x86/kvm/x86.c |6 +++---
2 files changed, 3 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86
It is good for debugging and development
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c |1 +
arch/x86/kvm/mmutrace.h | 23 +++++++++++++++++++++++
2 files changed, 24 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
Show sp->mmu_valid_gen
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmutrace.h | 22 ++++++++++++----------
1 files changed, 12 insertions(+), 10 deletions(-)
diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86/kvm/mmutrace.h
index b8f6172..697f466 100644
Zap at least 10 pages before releasing mmu-lock to reduce the overhead
caused by re-acquiring the lock
[ It improves kernel building 0.6% ~ 1% ]
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 14 ++++++++++++--
1 files changed, 12 insertions(+), 2 deletions(-)
It is the responsibility of kvm_mmu_zap_all to keep the
mmu and TLBs consistent. It is also unnecessary after
zapping all mmio sptes, since no mmio spte exists on a root shadow
page and it cannot be cached into the TLB
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86
On 05/13/2013 07:24 PM, Gleb Natapov wrote:
On Fri, May 10, 2013 at 09:43:50AM +0800, Xiao Guangrong wrote:
On 05/10/2013 09:05 AM, Takuya Yoshikawa wrote:
On Thu, 09 May 2013 18:11:31 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote
On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
By making the last three statements common to both if/else cases, the
symmetry between the locking and unlocking becomes clearer. One note
here is that VCPU's root_hpa does not need to be protected by mmu_lock.
Signed-off-by: Takuya Yoshikawa
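Schematically, the cleanup has this shape (an illustrative sketch only; the branch condition and the elided bodies stand in for the real function):

        if (shadow_root_level == PT64_ROOT_LEVEL) {
                spin_lock(&vcpu->kvm->mmu_lock);
                /* ... zap the single 4-level root ... */
        } else {
                spin_lock(&vcpu->kvm->mmu_lock);
                /* ... zap the four PAE roots ... */
        }
        /* the last three statements, now written once for both branches */
        kvm_mmu_commit_zap_page(vcpu->kvm, &invalid_list);
        spin_unlock(&vcpu->kvm->mmu_lock);
        vcpu->arch.mmu.root_hpa = INVALID_PAGE; /* needs no mmu_lock */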
On 05/09/2013 02:44 PM, Takuya Yoshikawa wrote:
Rather than clearing the ACC_WRITE_MASK bit of pte_access in the
if (mmu_need_write_protect()) block just to avoid calling mark_page_dirty()
in the following if statement, it is better to simply move the call into
the appropriate else block.
Signed-off-by:
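In other words, the structure becomes roughly the following (a sketch of the idea against set_spte(), not the exact hunk):

        if (pte_access & ACC_WRITE_MASK) {
                if (mmu_need_write_protect(vcpu, gfn, can_unsync)) {
                        /* write access is dropped: nothing becomes dirty */
                        ret = 1;
                        pte_access &= ~ACC_WRITE_MASK;
                        spte &= ~PT_WRITABLE_MASK;
                } else {
                        /* the spte really becomes writable only here */
                        mark_page_dirty(vcpu->kvm, gfn);
                }
        }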
);
- spin_unlock(&vcpu->kvm->mmu_lock);
+ kvm_mmu_sync_roots(vcpu);
Reviewed-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
On 05/09/2013 07:18 PM, Gleb Natapov wrote:
On Thu, May 09, 2013 at 06:16:55PM +0800, Xiao Guangrong wrote:
On 05/09/2013 02:44 PM, Takuya Yoshikawa wrote:
Rather than clearing the ACC_WRITE_MASK bit of pte_access in the
if (mmu_need_write_protect()) block just to avoid calling mark_page_dirty
CC kvm list.
On 05/09/2013 12:31 PM, David Ahern wrote:
With the consolidation of the open counters code in December 2012
(late to the party figuring that out) I think all of the past
comments on the live mode for perf-kvm have been resolved.
Great work, David! I am playing with it and glad to see
On 05/10/2013 09:05 AM, Takuya Yoshikawa wrote:
On Thu, 09 May 2013 18:11:31 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
On 05/09/2013 02:46 PM, Takuya Yoshikawa wrote:
By making the last three statements common to both if/else cases, the
symmetry between the locking
On 05/07/2013 04:58 PM, Gleb Natapov wrote:
On Tue, May 07, 2013 at 01:45:52AM +0800, Xiao Guangrong wrote:
On 05/07/2013 01:24 AM, Gleb Natapov wrote:
On Mon, May 06, 2013 at 09:10:11PM +0800, Xiao Guangrong wrote:
On 05/06/2013 08:36 PM, Gleb Natapov wrote:
Step 1) Fix kvm_mmu_zap_all's
On 05/06/2013 08:36 PM, Gleb Natapov wrote:
Step 1) Fix kvm_mmu_zap_all's behaviour: introduce lockbreak via
spin_needbreak. Use generation numbers so that in case kvm_mmu_zap_all
releases mmu_lock and reacquires it, only shadow pages
from the generation with which kvm_mmu_zap_all
On 05/07/2013 01:24 AM, Gleb Natapov wrote:
On Mon, May 06, 2013 at 09:10:11PM +0800, Xiao Guangrong wrote:
On 05/06/2013 08:36 PM, Gleb Natapov wrote:
Step 1) Fix kvm_mmu_zap_all's behaviour: introduce lockbreak via
spin_needbreak. Use generation numbers so that in case kvm_mmu_zap_all
On 05/07/2013 03:50 AM, Marcelo Tosatti wrote:
On Mon, May 06, 2013 at 11:39:11AM +0800, Xiao Guangrong wrote:
On 05/04/2013 08:52 AM, Marcelo Tosatti wrote:
On Sat, May 04, 2013 at 12:51:06AM +0800, Xiao Guangrong wrote:
On 05/03/2013 11:53 PM, Marcelo Tosatti wrote:
On Fri, May 03, 2013
On 05/04/2013 08:52 AM, Marcelo Tosatti wrote:
On Sat, May 04, 2013 at 12:51:06AM +0800, Xiao Guangrong wrote:
On 05/03/2013 11:53 PM, Marcelo Tosatti wrote:
On Fri, May 03, 2013 at 01:52:07PM +0800, Xiao Guangrong wrote:
On 05/03/2013 09:05 AM, Marcelo Tosatti wrote:
+
+/*
+ * Fast
On 05/03/2013 10:27 AM, Takuya Yoshikawa wrote:
On Sat, 27 Apr 2013 11:13:20 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
+/*
+ * Fast invalidate all shadow pages belonging to @slot.
+ *
+ * @slot != NULL means the invalidation is caused by the memslot specified
+ * by @slot
On 05/01/2013 01:38 PM, Jordan Justen wrote:
Don't use #ifdef __KVM_HAVE_READONLY_MEM when defining
KVM_CAP_READONLY_MEM.
Signed-off-by: Jordan Justen jordan.l.jus...@intel.com
Cc: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
Cc: Jan Kiszka jan.kis...@siemens.com
---
include/uapi
On 05/03/2013 02:26 PM, Jan Kiszka wrote:
On 2013-05-03 08:19, Xiao Guangrong wrote:
On 05/01/2013 01:38 PM, Jordan Justen wrote:
Don't use #ifdef __KVM_HAVE_READONLY_MEM when defining
KVM_CAP_READONLY_MEM.
Signed-off-by: Jordan Justen jordan.l.jus...@intel.com
Cc: Xiao Guangrong xiaoguangr
On 05/03/2013 11:53 PM, Marcelo Tosatti wrote:
On Fri, May 03, 2013 at 01:52:07PM +0800, Xiao Guangrong wrote:
On 05/03/2013 09:05 AM, Marcelo Tosatti wrote:
+
+/*
+ * Fast invalidate all shadow pages belonging to @slot.
+ *
+ * @slot != NULL means the invalidation is caused by the memslot
On 05/03/2013 09:05 AM, Marcelo Tosatti wrote:
+
+/*
+ * Fast invalidate all shadow pages belonging to @slot.
+ *
+ * @slot != NULL means the invalidation is caused by the memslot specified
+ * by @slot being deleted; in this case, we should ensure that the rmap
+ * and lpage-info of the @slot can
On 05/03/2013 10:10 AM, Takuya Yoshikawa wrote:
On Sat, 27 Apr 2013 11:13:18 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
It is used to set the disallowed large page info on the specified level,
and can be used in a later patch
Signed-off-by: Xiao Guangrong xiaoguangr
On 05/03/2013 10:15 AM, Takuya Yoshikawa wrote:
On Sat, 27 Apr 2013 11:13:19 +0800
Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com wrote:
This function is used to reset the large page info of all guest pages,
and will be used in a later patch
Signed-off-by: Xiao Guangrong xiaoguangr
invalid rmap out of mmu-lock in a clean way.
Xiao Guangrong (6):
KVM: MMU: drop unnecessary kvm_reload_remote_mmus
KVM: x86: introduce memslot_set_lpage_disallowed
KVM: MMU: introduce kvm_clear_all_lpage_info
KVM: MMU: fast invalid all shadow pages
KVM: x86: use the fast way to invalid
It is the responsibility of kvm_mmu_zap_all to keep the
mmu and TLBs consistent. It is also unnecessary after
zapping all mmio sptes, since no mmio spte exists on a root shadow
page and it cannot be cached into the TLB
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86
It is used to set the disallowed large page info on the specified level,
and can be used in a later patch
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c | 53 +++++++++++++++++++++++++++++++++++------------------
1 files changed, 35 insertions(+), 18 deletions(-)
diff
This function is used to reset the large page info of all guest pages,
and will be used in a later patch
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c | 25 +++++++++++++++++++++++++
arch/x86/kvm/x86.h | 2 ++
2 files changed, 27 insertions(+), 0 deletions(-)
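A sketch of what such a reset can look like (assuming the 2013-era layout where each memslot's arch data keeps one kvm_lpage_info array per large-page level; lpage_info_len() is a made-up helper for the sketch):

static void kvm_clear_all_lpage_info(struct kvm *kvm)
{
        struct kvm_memory_slot *slot;
        int i;

        kvm_for_each_memslot(slot, kvm->memslots) {
                for (i = 0; i < KVM_NR_PAGE_SIZES - 1; i++) {
                        /* number of level-(i+2) chunks covered by the slot */
                        unsigned long len = lpage_info_len(slot, i);

                        memset(slot->arch.lpage_info[i], 0,
                               len * sizeof(*slot->arch.lpage_info[i]));
                }
        }
}

After such a wholesale clear, the edge pages that must never be mapped by a large page have to be marked disallowed again, which is what memslot_set_lpage_disallowed from the previous patch provides.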
and lpage info can be safely freed.
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 77 ++-
arch/x86/kvm/mmu.h |2 +
3 files changed, 80 insertions
Replace kvm_mmu_zap_all by kvm_mmu_invalid_all_pages except on
the path of mmu_notifier->release(), which will be fixed in
a later patch
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git
-notify handlers.)
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 11 ++++++++++-
1 files changed, 10 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 63110c7..46d1d47 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm
On 04/24/2013 08:59 PM, Gleb Natapov wrote:
On Mon, Apr 01, 2013 at 05:56:49PM +0800, Xiao Guangrong wrote:
Then it has a chance to trigger mmio generation-number wrap-around
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86
On 04/24/2013 09:34 PM, Gleb Natapov wrote:
diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
index 2adcbc2..6b4ba1e 100644
--- a/arch/x86/kvm/mmu.h
+++ b/arch/x86/kvm/mmu.h
@@ -52,6 +52,20 @@
int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64
sptes[4]);
void
On 04/23/2013 02:28 PM, Gleb Natapov wrote:
On Tue, Apr 23, 2013 at 08:19:02AM +0800, Xiao Guangrong wrote:
On 04/22/2013 05:21 PM, Gleb Natapov wrote:
On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
On 04/21/2013 09:03 PM, Gleb Natapov wrote:
On Tue, Apr 16, 2013 at 02:32
On 04/22/2013 05:21 PM, Gleb Natapov wrote:
On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
On 04/21/2013 09:03 PM, Gleb Natapov wrote:
On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
This patchset is based on my previous two patchset:
[PATCH 0/2] KVM: x86
On 04/21/2013 01:18 AM, Marcelo Tosatti wrote:
On Thu, Apr 18, 2013 at 12:03:45PM +0800, Xiao Guangrong wrote:
On 04/18/2013 08:08 AM, Marcelo Tosatti wrote:
On Tue, Apr 16, 2013 at 02:32:53PM +0800, Xiao Guangrong wrote:
Use kvm_mmu_invalid_all_pages in kvm_arch_flush_shadow_all and
rename
On 04/21/2013 09:03 PM, Gleb Natapov wrote:
On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
This patchset is based on my previous two patchset:
[PATCH 0/2] KVM: x86: avoid potential soft lockup and unneeded mmu reload
(https://lkml.org/lkml/2013/4/1/2)
[PATCH v2 0/6] KVM: MMU
On 04/21/2013 11:24 PM, Marcelo Tosatti wrote:
On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
On 04/21/2013 09:03 PM, Gleb Natapov wrote:
On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
This patchset is based on my previous two patchset:
[PATCH 0/2] KVM: x86
On 04/18/2013 07:00 PM, Gleb Natapov wrote:
On Tue, Apr 16, 2013 at 02:32:46PM +0800, Xiao Guangrong wrote:
pte_list_clear_concurrently allows us to reset a pte-desc entry
out of mmu-lock. We can reset an spte out of mmu-lock if we can protect the
lifecycle of the sp; we use this way to achieve
On 04/18/2013 07:38 PM, Gleb Natapov wrote:
On Thu, Apr 18, 2013 at 07:22:23PM +0800, Xiao Guangrong wrote:
On 04/18/2013 07:00 PM, Gleb Natapov wrote:
On Tue, Apr 16, 2013 at 02:32:46PM +0800, Xiao Guangrong wrote:
pte_list_clear_concurrently allows us to reset a pte-desc entry
out of mmu-lock
On 04/18/2013 09:29 PM, Marcelo Tosatti wrote:
On Thu, Apr 18, 2013 at 10:03:06AM -0300, Marcelo Tosatti wrote:
On Thu, Apr 18, 2013 at 12:00:16PM +0800, Xiao Guangrong wrote:
What is the justification for this?
We want the rmap of the memslot being deleted to be removed-only; that is
needed
On 04/17/2013 10:10 PM, Robin Holt wrote:
On Wed, Apr 17, 2013 at 10:55:26AM +0800, Xiao Guangrong wrote:
On 04/17/2013 02:08 AM, Robin Holt wrote:
On Tue, Apr 16, 2013 at 09:07:20PM +0800, Xiao Guangrong wrote:
On 04/16/2013 07:43 PM, Robin Holt wrote:
Argh. Taking a step back helped clear
On 04/18/2013 02:45 AM, Robin Holt wrote:
For the v3.10 release, we should work on making this more
correct and completely documented.
Better documentation is always welcome.
A double call of ->release() is not bad; as I mentioned in the changelog,
it is really rare (e.g., it cannot happen on kvm
On 04/18/2013 07:38 AM, Marcelo Tosatti wrote:
On Tue, Apr 16, 2013 at 02:32:45PM +0800, Xiao Guangrong wrote:
An invalid rmap is the rmap of an invalid memslot which is being
deleted; in particular, we can treat all rmaps as invalid when
kvm is being destroyed, since all memslots will be deleted
On 04/18/2013 08:05 AM, Marcelo Tosatti wrote:
On Tue, Apr 16, 2013 at 02:32:50PM +0800, Xiao Guangrong wrote:
The current kvm_mmu_zap_all is really slow - it holds mmu-lock to
walk and zap all shadow pages one by one, and it also needs to zap all guest
pages' rmaps and all shadow pages' parent
On 04/18/2013 08:08 AM, Marcelo Tosatti wrote:
On Tue, Apr 16, 2013 at 02:32:53PM +0800, Xiao Guangrong wrote:
Use kvm_mmu_invalid_all_pages in kvm_arch_flush_shadow_all and
rename kvm_zap_all to kvm_free_all, which is used to free all
memory used by the kvm mmu when the vm is being destroyed
reduce the contention on mmu-lock and make the invalidation
preemptible.
Xiao Guangrong (15):
KVM: x86: clean up and optimize for kvm_arch_free_memslot
KVM: fold kvm_arch_create_memslot into kvm_arch_prepare_memory_region
KVM: x86: do not reuse rmap when memslot is moved
KVM: MMU: abstract
It removes an arch-specific interface and also removes unnecessary
empty functions on some architectures
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/arm/kvm/arm.c |5 -
arch/ia64/kvm/kvm-ia64.c |5 -
arch/powerpc/kvm/powerpc.c |8
The memslot rmap and lpage-info are never partly reused, and nothing
needs to be freed when a new memslot is created
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c | 21 ++++++++++++---------
1 files changed, 12 insertions(+), 9 deletions(-)
diff --git a/arch/x86
Introduce rmap_operations to allow rmaps to have different operations;
then we are able to handle invalid rmaps specially
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 31
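A sketch of the idea (the struct layout and the invalid-slot callback below are illustrative assumptions, not the actual patch):

struct rmap_operations {
        int (*rmap_add)(struct kvm_vcpu *vcpu, u64 *spte,
                        unsigned long *rmapp);
        void (*rmap_remove)(struct kvm *kvm, u64 *spte,
                            unsigned long *rmapp);
};

static int invalid_rmap_add(struct kvm_vcpu *vcpu, u64 *spte,
                            unsigned long *rmapp)
{
        /* a memslot being deleted must not gain new sptes */
        WARN_ON(1);
        return 0;
}

An invalid (being-deleted) memslot can then be given operations whose add hook refuses new sptes, making its rmap removed-only, which matches the requirement discussed earlier in the thread.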
Introduce slot_rmap_* functions to abstract memslot-rmap-related
operations, which makes the later patch clearer
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 108 +-
arch/x86/kvm/mmu_audit.c | 10
It is used to set the disallowed large page info on the specified level,
and can be used in a later patch
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c | 53 +++++++++++++++++++++++++++++++++++------------------
1 files changed, 35 insertions(+), 18 deletions(-)
diff
-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c | 15 ++-
arch/x86/kvm/x86.c |9 -
3 files changed, 7 insertions(+), 19 deletions(-)
diff --git a/arch/x86/include/asm
and lpage info can be safely freed.
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 85 +-
arch/x86/kvm/mmu.h |4 ++
arch/x86/kvm/x86.c
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c | 4 ++++
virt/kvm/kvm_main.c |3 ---
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6e7c85b..d3dd0d5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86
Replace kvm_mmu_zap_all by kvm_mmu_invalid_all_pages except on
the path of mmu_notifier->release(), which will be replaced in
a later patch
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff
This function is used to reset the large page info of all guest pages,
and will be used in a later patch
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c | 25 +++++++++++++++++++++++++
arch/x86/kvm/x86.h | 2 ++
2 files changed, 27 insertions(+), 0 deletions(-)
retry
(the wait is very rare and clearing one rmap is very fast; it
is not bad even if a wait is needed)
Then, we can be sure the spte is always available when we do
unmap_memslot_rmap_nolock
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/include/asm/kvm_host.h |2 +
arch
It frees the pte-list-descs used by the memslot rmap after the
memslot update is completed
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 26 ++++++++++++++++++++++++++
arch/x86/kvm/mmu.h |1 +
2 files changed, 27 insertions(+), 0 deletions(-)
diff --git
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 62 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++-----
1 files changed, 57 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 99ad2a4..850eab5 100644
--- a/arch/x86/kvm
to unmap invalid rmap
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 80 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 850eab5..2a7a5d0
Let kvm not reuse the rmap of the memslot which is being moved;
then the rmap of a moved or deleted memslot can only be unmapped, and
no new spte can be added to it.
This makes it possible to unmap the rmap out of mmu-lock in the later patches
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
released by the first call.
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
mm/mmu_notifier.c | 81 +++++++++++++++++++++++++++++++++++++++++-----------------------------------
1 files changed, 41 insertions(+), 40 deletions(-)
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index be04122..606777a
On 04/16/2013 05:31 PM, Robin Holt wrote:
On Tue, Apr 16, 2013 at 02:39:49PM +0800, Xiao Guangrong wrote:
The commit 751efd8610d3 (mmu_notifier_unregister NULL Pointer deref
and multiple ->release()) breaks the fix:
3ad3d901bbcfb15a5e4690e55350db0899095a68
(mm: mmu_notifier: fix freed
On 04/16/2013 07:43 PM, Robin Holt wrote:
Argh. Taking a step back helped clear my head.
For the -stable releases, I agree we should just go with your
revert-plus-hlist_del_init_rcu patch. I will give it a test
when I am in the office.
Okay. I will wait for your test report. Thank you in
On 04/17/2013 02:08 AM, Robin Holt wrote:
On Tue, Apr 16, 2013 at 09:07:20PM +0800, Xiao Guangrong wrote:
On 04/16/2013 07:43 PM, Robin Holt wrote:
Argh. Taking a step back helped clear my head.
For the -stable releases, I agree we should just go with your
revert-plus-hlist_del_init_rcu
Hi Marcelo,
On 04/16/2013 08:54 AM, Marcelo Tosatti wrote:
On Mon, Apr 01, 2013 at 05:56:43PM +0800, Xiao Guangrong wrote:
Changelog in v2:
- rename kvm_mmu_invalid_mmio_spte to kvm_mmu_invalid_mmio_sptes
- use kvm->memslots->generation as kvm global generation-number
- fix comment
Let the mmio spte only use bit 62 and bit 63 of the upper 32 bits; then
bit 52 ~ bit 61 can be used for other purposes
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/vmx.c | 4 ++--
arch/x86/kvm/x86.c | 8 +++++++-
2 files changed, 9 insertions(+), 3 deletions(-)
Store the generation-number in bit 3 ~ bit 11 and bit 52 ~ bit 61; in total
19 bits can be used, which should be enough for nearly all common cases.
In this patch, the generation-number is always 0; it will be changed in
a later patch
Signed-off-by: Xiao Guangrong xiaoguangr
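The encoding described above can be sketched like this (the helper names follow the series' style but are written from the description, not copied from the patch):

#define MMIO_SPTE_GEN_LOW_SHIFT         3       /* low part:  bits 3..11  */
#define MMIO_SPTE_GEN_HIGH_SHIFT        52      /* high part: bits 52..61 */

#define MMIO_GEN_LOW_MASK       ((1ull << 9) - 1)
#define MMIO_GEN_HIGH_MASK      ((1ull << 10) - 1)

/* build the spte bits that carry a 19-bit generation-number */
static u64 generation_mmio_spte_mask(unsigned int gen)
{
        u64 mask;

        mask  = ((u64)gen & MMIO_GEN_LOW_MASK) << MMIO_SPTE_GEN_LOW_SHIFT;
        mask |= (((u64)gen >> 9) & MMIO_GEN_HIGH_MASK) << MMIO_SPTE_GEN_HIGH_SHIFT;
        return mask;
}

/* recover the generation-number from an mmio spte */
static unsigned int get_mmio_spte_generation(u64 spte)
{
        unsigned int gen;

        gen  = (spte >> MMIO_SPTE_GEN_LOW_SHIFT) & MMIO_GEN_LOW_MASK;
        gen |= ((spte >> MMIO_SPTE_GEN_HIGH_SHIFT) & MMIO_GEN_HIGH_MASK) << 9;
        return gen;
}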
Then it has a chance to trigger mmio generation-number wrap-around
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 8 ++++++++
virt/kvm/kvm_main.c | 6 ++++++
3 files changed, 15
It is useful for debugging mmio spte invalidation
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 9 +++++++--
arch/x86/kvm/mmutrace.h | 24 ++++++++++++++++++++++++
2 files changed, 31 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
page table and get the mmio spte. If the
generation-number on the spte does not equal the global generation-number,
it will go to the normal #PF handler to update the mmio spte.
Since 19 bits are used to store the generation-number in the mmio spte,
we zap all mmio sptes when the number wraps around
Xiao
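A sketch of the resulting check on the mmio page-fault fast path (kvm_current_mmio_generation() is assumed to read the global number, e.g. from kvm->memslots->generation as mentioned in the v2 changelog):

static bool check_mmio_spte(struct kvm *kvm, u64 spte)
{
        /*
         * A stale generation-number means the memslots changed after
         * this mmio spte was created; go back to the normal #PF
         * handler, which rebuilds the spte with the current number.
         */
        return get_mmio_spte_generation(spte) ==
               kvm_current_mmio_generation(kvm);
}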
Define some meaningful names instead of raw code
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 15 +++++----------
arch/x86/kvm/mmu.h | 14 ++++++++++++++
arch/x86/kvm/vmx.c |4 ++--
3 files changed, 21 insertions(+), 12 deletions(-)
diff --git
the global generation-number,
it will go to the normal #PF handler to update the mmio spte.
Since 19 bits are used to store the generation-number in the mmio spte,
we zap all mmio sptes when the number wraps around
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/include/asm
This patch makes kvm_mmu_zap_all preemptible since it is a slow path,
breaking the mmu-lock if needed to avoid a potential soft lockup. It also
drops an unnecessary kvm_reload_remote_mmus.
This is preparatory work for kvm_mmu_zap_all; the fast approach is being
developed
Xiao Guangrong (2):
KVM
It is the responsibility of kvm_mmu_zap_all to keep the
mmu and TLBs consistent. It is also unnecessary after
zapping all mmio sptes, since no mmio spte exists on a root shadow
page and it cannot be cached into the TLB
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86
kvm_mmu_zap_all is a slow path; break the mmu-lock if needed to
avoid a potential soft lockup
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 11 ++++++++++-
1 files changed, 10 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
On 03/22/2013 10:11 AM, Xiao Guangrong wrote:
The modifications should be contained to kvm_mmu_get_page() mostly,
correct? (would also have to keep counters to increase SLAB freeing
ratio, relative to number of outdated shadow pages).
Yes.
And then have codepaths that nuke shadow pages
On 03/22/2013 06:54 PM, Marcelo Tosatti wrote:
And then have codepaths that nuke shadow pages break from the spinlock,
I think this is not needed any more. We can let mmu_notify use the generation
number to invalidate all shadow pages; then we only need to free them after
all vcpus are down and
On 03/22/2013 07:28 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 07:10:44PM +0800, Xiao Guangrong wrote:
On 03/22/2013 06:54 PM, Marcelo Tosatti wrote:
And then have codepaths that nuke shadow pages break from the spinlock,
I think this is not needed any more. We can let mmu_notify use
On 03/22/2013 07:47 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 07:39:24PM +0800, Xiao Guangrong wrote:
On 03/22/2013 07:28 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 07:10:44PM +0800, Xiao Guangrong wrote:
On 03/22/2013 06:54 PM, Marcelo Tosatti wrote:
And then have codepaths
On 03/22/2013 08:12 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 08:03:04PM +0800, Xiao Guangrong wrote:
On 03/22/2013 07:47 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 07:39:24PM +0800, Xiao Guangrong wrote:
On 03/22/2013 07:28 PM, Gleb Natapov wrote:
On Fri, Mar 22, 2013 at 07:10
On 03/22/2013 06:21 AM, Marcelo Tosatti wrote:
On Wed, Mar 20, 2013 at 04:30:20PM +0800, Xiao Guangrong wrote:
Changelog:
V2:
- do not reset n_requested_mmu_pages and n_max_mmu_pages
- batch free root shadow pages to reduce vcpu notification and mmu-lock
contention
- remove
On 03/21/2013 09:14 PM, Gleb Natapov wrote:
On Wed, Mar 20, 2013 at 04:30:24PM +0800, Xiao Guangrong wrote:
Move the deletion of a shadow page from the hash list from kvm_mmu_commit_zap_page to
kvm_mmu_prepare_zap_page, so that we can free the shadow page out of
mmu-lock.
Also, delete the invalid
On 03/21/2013 10:29 PM, Marcelo Tosatti wrote:
On Thu, Mar 21, 2013 at 01:41:59PM +0800, Xiao Guangrong wrote:
On 03/21/2013 04:14 AM, Marcelo Tosatti wrote:
kvm_mmu_calculate_mmu_pages numbers,
maximum number of shadow pages = 2% of mapped guest pages
Does not make sense for TDP guests
beautiful if other vcpus and mmu notification need to hold the mmu-lock.
Guest VCPU:6, Mem:2048M
before: Run 10 times, Avg time:46078825 ns.
after: Run 10 times, Avg time:21558774 ns. (+ 113%)
Xiao Guangrong (7):
KVM: MMU: introduce mmu_cache->pte_list_descs
KVM: x86: introduce
It is the responsibility of kvm_mmu_zap_all to keep the mmu and TLBs
consistent
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/x86.c |1 -
1 files changed, 0 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index
- we do not need
to care about all hash entries after resetting the mmu-cache
Signed-off-by: Xiao Guangrong xiaoguangr...@linux.vnet.ibm.com
---
arch/x86/kvm/mmu.c | 8 ++++++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index dc37512..5578c91 100644