On 05/01/2013 12:12 AM, Pavel Emelyanov wrote:
+static inline void clear_soft_dirty(struct vm_area_struct *vma,
+ unsigned long addr, pte_t *pte)
+{
+#ifdef CONFIG_MEM_SOFT_DIRTY
+ /*
+ * The soft-dirty tracker uses #PF-s to catch writes
+ * to pages, so
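The truncated comment above explains that the soft-dirty tracker relies on write faults (#PF), so clearing the soft-dirty bit must also write-protect the pte; the next write then faults and re-marks the page. A minimal user-space sketch of that idea (bit positions and the `TOY_` names are illustrative, not the real x86 PTE layout):

```c
#include <stdint.h>

/* Illustrative flag bits; the real x86 PTE layout differs. */
#define TOY_PTE_RW         (1ull << 1)
#define TOY_PTE_SOFT_DIRTY (1ull << 2)

static void toy_clear_soft_dirty(uint64_t *pte)
{
    /* Drop write access so the next store triggers a fault ... */
    *pte &= ~TOY_PTE_RW;
    /* ... and clear the tracking bit itself. */
    *pte &= ~TOY_PTE_SOFT_DIRTY;
}
```

Without the write-protect step, a later write would not fault, and the page would never be reported dirty again.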
On 05/03/2013 11:53 PM, Marcelo Tosatti wrote:
On Fri, May 03, 2013 at 01:52:07PM +0800, Xiao Guangrong wrote:
On 05/03/2013 09:05 AM, Marcelo Tosatti wrote:
+
+/*
+ * Fast invalidate all shadow pages belonging to @slot.
+ *
+ * @slot != NULL means the invalidation is caused by the memslot
On 05/03/2013 10:15 AM, Takuya Yoshikawa wrote:
> On Sat, 27 Apr 2013 11:13:19 +0800
> Xiao Guangrong wrote:
>
>> This function is used to reset the large page info of all guest pages
>> which will be used in later patch
>>
>> Signed-off-by: Xiao Guangrong
>
On 05/03/2013 10:10 AM, Takuya Yoshikawa wrote:
> On Sat, 27 Apr 2013 11:13:18 +0800
> Xiao Guangrong wrote:
>
>> It is used to set disallowed large page on the specified level, can be
>> used in later patch
>>
>> Signed-off-by: Xiao Guangrong
>
It is used to set a disallowed large page at the specified level; it can be
used in a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 53 ++-
1 files changed, 35 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch
memslots so
that rmap and lpage info can be safely freed.
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 77 ++-
arch/x86/kvm/mmu.h |2 +
3 files changed, 80 insertions(+), 1 deletions(-)
er mmu-notify handlers.)
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 11 ++-
1 files changed, 10 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 63110c7..46d1d47 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -4197,12 +4
This function is used to reset the large page info of all guest pages;
it will be used in a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 25 +
arch/x86/kvm/x86.h |2 ++
2 files changed, 27 insertions(+), 0 deletions(-)
diff --git a/arch/x86
Replace kvm_mmu_zap_all with kvm_mmu_invalid_all_pages except on
the path of mmu_notifier->release(), which will be fixed in
a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/a
It is the responsibility of kvm_mmu_zap_all to keep the
mmu and tlbs consistent. It is also unnecessary after
zapping all mmio sptes, since no mmio spte exists on a root shadow
page and cannot be cached into the tlb
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |5 +
1 files
t step.
* TODO
Unmapping invalid rmap out of mmu-lock with a clear way.
Xiao Guangrong (6):
KVM: MMU: drop unnecessary kvm_reload_remote_mmus
KVM: x86: introduce memslot_set_lpage_disallowed
KVM: MMU: introduce kvm_clear_all_lpage_info
KVM: MMU: fast invalid all shadow pages
KVM: x86: use the fast way to invalid
On 04/24/2013 09:34 PM, Gleb Natapov wrote:
>> diff --git a/arch/x86/kvm/mmu.h b/arch/x86/kvm/mmu.h
>> index 2adcbc2..6b4ba1e 100644
>> --- a/arch/x86/kvm/mmu.h
>> +++ b/arch/x86/kvm/mmu.h
>> @@ -52,6 +52,20 @@
>>
>> int kvm_mmu_get_spte_hierarchy(struct kvm_vcpu *vcpu, u64 addr, u64
>>
On 04/24/2013 08:59 PM, Gleb Natapov wrote:
> On Mon, Apr 01, 2013 at 05:56:49PM +0800, Xiao Guangrong wrote:
>> Then it has a chance to trigger mmio generation-number wrap-around
>>
>> Signed-off-by: Xiao Guangrong
>> ---
>> arch/x86/include/asm/kvm_host.h | 1 +
On 04/23/2013 02:28 PM, Gleb Natapov wrote:
> On Tue, Apr 23, 2013 at 08:19:02AM +0800, Xiao Guangrong wrote:
>> On 04/22/2013 05:21 PM, Gleb Natapov wrote:
>>> On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
>>>> On 04/21/2013 09:03 PM, Gleb Natap
On 04/22/2013 05:21 PM, Gleb Natapov wrote:
> On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
>> On 04/21/2013 09:03 PM, Gleb Natapov wrote:
>>> On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
>>>> This patchset is based on my previous
On 04/21/2013 11:24 PM, Marcelo Tosatti wrote:
> On Sun, Apr 21, 2013 at 10:09:29PM +0800, Xiao Guangrong wrote:
>> On 04/21/2013 09:03 PM, Gleb Natapov wrote:
>>> On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
>>>> This patchset is based on my previo
On 04/21/2013 09:03 PM, Gleb Natapov wrote:
> On Tue, Apr 16, 2013 at 02:32:38PM +0800, Xiao Guangrong wrote:
>> This patchset is based on my previous two patchset:
>> [PATCH 0/2] KVM: x86: avoid potential soft lockup and unneeded mmu reload
>> (https://lkml.org/lkml/2013/4/1
On 04/21/2013 01:18 AM, Marcelo Tosatti wrote:
> On Thu, Apr 18, 2013 at 12:03:45PM +0800, Xiao Guangrong wrote:
>> On 04/18/2013 08:08 AM, Marcelo Tosatti wrote:
>>> On Tue, Apr 16, 2013 at 02:32:53PM +0800, Xiao Guangrong wrote:
>>>> Use kvm_mmu_invalid_all_page
On 04/18/2013 09:29 PM, Marcelo Tosatti wrote:
> On Thu, Apr 18, 2013 at 10:03:06AM -0300, Marcelo Tosatti wrote:
>> On Thu, Apr 18, 2013 at 12:00:16PM +0800, Xiao Guangrong wrote:
>>>>
>>>> What is the justification for this?
>>>
>>> We want the rmap of the memslot being deleted to be removed-only, which is needed
On 04/18/2013 07:38 PM, Gleb Natapov wrote:
> On Thu, Apr 18, 2013 at 07:22:23PM +0800, Xiao Guangrong wrote:
>> On 04/18/2013 07:00 PM, Gleb Natapov wrote:
>>> On Tue, Apr 16, 2013 at 02:32:46PM +0800, Xiao Guangrong wrote:
>>>> pte_list_clear_concurrently all
On 04/18/2013 07:00 PM, Gleb Natapov wrote:
> On Tue, Apr 16, 2013 at 02:32:46PM +0800, Xiao Guangrong wrote:
>> pte_list_clear_concurrently allows us to reset pte-desc entry
>> out of mmu-lock. We can reset spte out of mmu-lock if we can protect the
>> lifecycle of sp, we use this way to achieve
On 04/18/2013 08:08 AM, Marcelo Tosatti wrote:
> On Tue, Apr 16, 2013 at 02:32:53PM +0800, Xiao Guangrong wrote:
>> Use kvm_mmu_invalid_all_pages in kvm_arch_flush_shadow_all and
>> rename kvm_zap_all to kvm_free_all which is used to free all
>> memory used by kvm mmu when the vm is being destroyed
On 04/18/2013 08:05 AM, Marcelo Tosatti wrote:
> On Tue, Apr 16, 2013 at 02:32:50PM +0800, Xiao Guangrong wrote:
>> The current kvm_mmu_zap_all is really slow - it is holding mmu-lock to
>> walk and zap all shadow pages one by one; it also needs to zap all guest
>> pages' rmaps and all shadow pages' parent
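The slow path criticized above walks and zaps every shadow page under mmu-lock. The "fast invalidate" approach discussed throughout these threads can be modeled in a few lines: stamp each shadow page with a generation number at creation, and invalidate everything at once by bumping a global counter; stale pages are then reclaimed lazily. This is a toy model with made-up names, not KVM's actual structures:

```c
#include <stdbool.h>

/* Illustrative structures; KVM's real kvm / kvm_mmu_page differ. */
struct toy_kvm { unsigned long mmu_valid_gen; };
struct toy_shadow_page { unsigned long mmu_valid_gen; };

static void toy_link_page(struct toy_kvm *kvm, struct toy_shadow_page *sp)
{
    sp->mmu_valid_gen = kvm->mmu_valid_gen;   /* stamp at creation */
}

/* O(1): every existing shadow page becomes obsolete at once. */
static void toy_fast_invalidate_all(struct toy_kvm *kvm)
{
    kvm->mmu_valid_gen++;
}

static bool toy_is_obsolete(struct toy_kvm *kvm, struct toy_shadow_page *sp)
{
    return sp->mmu_valid_gen != kvm->mmu_valid_gen;
}
```

The design point is that invalidation no longer holds mmu-lock for the whole walk; obsolete pages can be zapped incrementally, which reduces contention and makes the work preemptible.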
On 04/18/2013 07:38 AM, Marcelo Tosatti wrote:
> On Tue, Apr 16, 2013 at 02:32:45PM +0800, Xiao Guangrong wrote:
>> Invalid rmaps are the rmaps of the invalid memslot which is being
>> deleted; especially, we can treat all rmaps as invalid when
>> kvm is being destroyed since all memslots will be deleted
On 04/18/2013 02:45 AM, Robin Holt wrote:
>>> For the v3.10 release, we should work on making this more
>>> correct and completely documented.
>>
>> Better document is always welcomed.
>>
>> Double call ->release is not bad, like i mentioned it in the changelog:
>> it is really rare (e.g., it cannot happen on kvm
On 04/17/2013 10:10 PM, Robin Holt wrote:
> On Wed, Apr 17, 2013 at 10:55:26AM +0800, Xiao Guangrong wrote:
>> On 04/17/2013 02:08 AM, Robin Holt wrote:
>>> On Tue, Apr 16, 2013 at 09:07:20PM +0800, Xiao Guangrong wrote:
>>>> On 04/16/2013 07:43 PM, Robin Holt wro
On 04/17/2013 02:08 AM, Robin Holt wrote:
> On Tue, Apr 16, 2013 at 09:07:20PM +0800, Xiao Guangrong wrote:
>> On 04/16/2013 07:43 PM, Robin Holt wrote:
>>> Argh. Taking a step back helped clear my head.
>>>
>>> For the -stable releases, I agree we should just go with your
>>> revert-plus-hlist_del_init_rcu patch.
On 04/16/2013 07:43 PM, Robin Holt wrote:
> Argh. Taking a step back helped clear my head.
>
> For the -stable releases, I agree we should just go with your
> revert-plus-hlist_del_init_rcu patch. I will give it a test
> when I am in the office.
Okay. Wait for your test report. Thank you in
On 04/16/2013 05:31 PM, Robin Holt wrote:
> On Tue, Apr 16, 2013 at 02:39:49PM +0800, Xiao Guangrong wrote:
>> The commit 751efd8610d3 (mmu_notifier_unregister NULL Pointer deref
>> and multiple ->release()) breaks the fix:
>> 3ad3d901bbcfb15a5e4690e55350db0899095a68
all the pages have already been released by the first call.
Signed-off-by: Xiao Guangrong
---
mm/mmu_notifier.c | 81 +++--
1 files changed, 41 insertions(+), 40 deletions(-)
diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c
index be04122
Do not let kvm reuse the rmap of the memslot which is being moved;
then the rmap of a moved or deleted memslot can only be unmapped, and no
new spte can be added to it.
This makes it possible to unmap rmaps outside of mmu-lock in the later patches
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |2
to unmap invalid rmap
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 80
1 files changed, 80 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 850eab5..2a7a5d0 100644
--- a/arch/x86/kvm/mmu.c
It frees the pte-list-descs used by the memslot rmap after the
memslot update is completed
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 26 ++
arch/x86/kvm/mmu.h |1 +
2 files changed, 27 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 62 +++
1 files changed, 57 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 99ad2a4..850eab5 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
goto retry
(the wait is very rare and clearing one rmap is very fast, so it
is not bad even if a wait is needed)
Then, we can be sure the spte is always available when we do
unmap_memslot_rmap_nolock
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/
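The "goto retry" scheme quoted above can be sketched as a compare-and-swap loop: clear the entry atomically, and on the rare lost race simply retry, since clearing one entry is cheap. This is a hypothetical single-entry model with invented names, not the kernel's rmap code:

```c
#include <stdatomic.h>
#include <stdint.h>

/* One fake rmap slot holding an spte value; 0 means empty. */
static _Atomic uint64_t rmap_entry;

/* Clear the entry without holding mmu-lock; retry on a concurrent update. */
static uint64_t clear_rmap_entry_nolock(void)
{
    uint64_t old;
retry:
    old = atomic_load(&rmap_entry);
    if (!atomic_compare_exchange_strong(&rmap_entry, &old, 0))
        goto retry;     /* lost a race; rare and cheap, so just retry */
    return old;         /* the caller now owns the old spte value */
}
```

Because the swap is atomic, exactly one thread observes the old value, so the spte is always available to whoever wins the race, matching the guarantee the mail describes for unmap_memslot_rmap_nolock.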
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c |4
virt/kvm/kvm_main.c |3 ---
2 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index 6e7c85b..d3dd0d5 100644
--- a/arch/x86/kvm/x86.c
+++ b/arch/x86/kvm/x86.c
@@ -7056,7 +7056,11
memslots so
that rmap and lpage info can be safely freed.
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +
arch/x86/kvm/mmu.c | 85 +-
arch/x86/kvm/mmu.h |4 ++
arch/x86/kvm/x86.c |6 +++
4 f
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/mmu.c | 15 ++-
arch/x86/kvm/x86.c |9 -
3 files changed, 7 insertions(+), 19 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm
It is used to set disallowed lage page on the specified level, can be
used in later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 53 ++-
1 files changed, 35 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch
Introduce slot_rmap_* functions to abstract memslot-rmap-related
operations, which makes the later patches clearer
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 108 +-
arch/x86/kvm/mmu_audit.c | 10 +++--
2 files changed, 84
memslot rmap and lpage-info are never partly reused, and nothing needs
to be freed when a new memslot is created
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 21 -
1 files changed, 12 insertions(+), 9 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>more.
* Performance
We measurably reduce mmu-lock contention and make the invalidation
preemptible.
Xiao Guangrong (15):
KVM: x86: clean up and optimize for kvm_arch_free_memslot
KVM: fold kvm_arch_create_memslot into kvm_arch_prepare_memory_region
KVM: x86: do not reuse rmap when memslot is moved
It removes an arch-specific interface and also removes unnecessary
empty functions on some architectures
Signed-off-by: Xiao Guangrong
---
arch/arm/kvm/arm.c |5 -
arch/ia64/kvm/kvm-ia64.c |5 -
arch/powerpc/kvm/powerpc.c |8 ++--
arch/s390/kvm/kvm-s390.c
Introduce rmap_operations to allow rmaps to have different operations;
then we are able to handle invalid rmaps specially
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 31 ---
arch/x86/kvm/mmu.h
Hi Marcelo,
On 04/16/2013 08:54 AM, Marcelo Tosatti wrote:
> On Mon, Apr 01, 2013 at 05:56:43PM +0800, Xiao Guangrong wrote:
>> Changelog in v2:
>> - rename kvm_mmu_invalid_mmio_spte to kvm_mmu_invalid_mmio_sptes
>> - use kvm->memslots->generation as kvm global generation-number
>> - fix comment
it walks the shadow page table and gets the mmio spte. If the
generation-number on the spte does not equal the global generation-number,
it will go to the normal #PF handler to update the mmio spte
Since 19 bits are used to store the generation-number on mmio sptes, we zap all
mmio sptes when the number wraps around
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/
It is useful for debugging mmio spte invalidation
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c |9 +++--
arch/x86/kvm/mmutrace.h | 24
2 files changed, 31 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index
Define some meaningful names instead of raw code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 15 +--
arch/x86/kvm/mmu.h | 14 ++
arch/x86/kvm/vmx.c |4 ++--
3 files changed, 21 insertions(+), 12 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86
Let mmio sptes only use bit 62 and bit 63 of the upper 32 bits, so that
bit 52 ~ bit 61 can be used for other purposes
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/vmx.c |4 ++--
arch/x86/kvm/x86.c |8 +++-
2 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx.c b
Store the generation-number into bit3 ~ bit11 and bit52 ~ bit61; in total
19 bits can be used, which should be enough for most common cases
In this patch the generation-number is always 0; it will be changed in
a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c
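The packing described above (a 19-bit generation number split across spte bits 3 ~ 11 and 52 ~ 61) can be sketched as follows; the masks and helper names follow the mail's description and are not taken from any particular kernel tree:

```c
#include <stdint.h>

#define GEN_LOW_SHIFT   3
#define GEN_LOW_MASK    0x1ffull          /* 9 low bits at spte bits 3..11   */
#define GEN_HIGH_SHIFT  52
#define GEN_HIGH_MASK   0x3ffull          /* 10 high bits at spte bits 52..61 */
#define GEN_MASK        0x7ffffull        /* 19 bits of generation in total   */

/* Write a generation number into the two spte bit ranges. */
static uint64_t set_spte_gen(uint64_t spte, uint64_t gen)
{
    spte &= ~((GEN_LOW_MASK << GEN_LOW_SHIFT) |
              (GEN_HIGH_MASK << GEN_HIGH_SHIFT));
    spte |= (gen & GEN_LOW_MASK) << GEN_LOW_SHIFT;
    spte |= ((gen >> 9) & GEN_HIGH_MASK) << GEN_HIGH_SHIFT;
    return spte;
}

/* Reassemble the 19-bit generation number from the spte. */
static uint64_t get_spte_gen(uint64_t spte)
{
    return ((spte >> GEN_LOW_SHIFT) & GEN_LOW_MASK) |
           (((spte >> GEN_HIGH_SHIFT) & GEN_HIGH_MASK) << 9);
}

/* A stale generation sends the fault to the slow #PF path to rebuild the spte. */
static int spte_gen_is_stale(uint64_t spte, uint64_t cur_gen)
{
    return get_spte_gen(spte) != (cur_gen & GEN_MASK);
}
```

On a mmio fault, comparing the spte's stored generation against the current global one is what lets all mmio sptes be invalidated at once by bumping the global number, with the 19-bit wrap-around handled by zapping everything, as the message above notes.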