This patch does:
- the 'sp' parameter of inspect_spte_fn() is not used, so remove it
- fix 'kvm' and 'slots' being undefined in count_rmaps()
- fix a bug in inspect_spte_has_rmap()
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 15 ---
1 files
The read-only spte mapping can't hurt the shadow page cache,
so there is no need to record it.
Use bit 9 to record whether the spte has been re-mapped
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 17 +++--
arch/x86/kvm/mmu.h |1 +
2 files changed, 16
Avi Kivity wrote:
We've considered this in the past; it makes sense. The big question is
whether any guests actually map the same page table through PDEs with
different permissions (mapping the same page table through multiple PDEs
is very common, but always with the same
kvm_mmu_page.oos_link is not used, so remove it
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |2 --
arch/x86/kvm/mmu.c |1 -
2 files changed, 0 insertions(+), 3 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b
After the is_rsvd_bits_set() checks, EFER.NXE must be enabled if the NX bit is set
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/paging_tmpl.h |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
Remove 'struct kvm_unsync_walk' since it is no longer used
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |5 -
1 files changed, 0 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b44380b..a23ca75 100644
--- a/arch
- calculate the zapped page count properly in mmu_zap_unsync_children()
- calculate the freed page count properly in kvm_mmu_change_mmu_pages()
- restart the list walk if any child pages were zapped
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |7 ---
1 files
- 'vcpu' is not used while marking parents unsync, so remove it
- if a page is already marked unsync, there is no need to walk its parents
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 69 +--
1 files changed, 23 insertions
Usually the OS toggles the CR4.PGE bit to flush all global pages; in this
case there is no need to reset the mmu, just flush the tlb
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/x86.c |9 +
1 files changed, 9 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/x86.c b
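The idea above can be sketched as a tiny helper: detect a CR4 write whose only changed bit is PGE, and in that case request only a TLB flush instead of a full MMU reset. This is my own illustration of the described logic; the helper name and shape are not the kernel's.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define X86_CR4_PGE (1ULL << 7)

/* Hypothetical helper modeled on the patch description: if the only
 * CR4 bit that changed is PGE, the guest is merely flushing global
 * pages, so a TLB flush suffices and the full mmu reset can be
 * skipped. */
static bool cr4_pge_only_change(uint64_t old_cr4, uint64_t new_cr4)
{
	return (old_cr4 ^ new_cr4) == X86_CR4_PGE;
}
```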
'multimapped' and 'unsync' in 'struct kvm_mmu_page' are just indication
fields; we can use flag bits instead of them
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |5 ++-
arch/x86/kvm/mmu.c | 65
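A minimal sketch of the change being proposed: fold the two booleans into bits of a single flags word. The flag names and struct here are illustrative stand-ins, not the patch's actual definitions.

```c
#include <assert.h>
#include <stdbool.h>

/* Illustrative flag bits replacing separate 'multimapped' and
 * 'unsync' boolean fields. */
enum {
	SP_FLAG_MULTIMAPPED = 1u << 0,
	SP_FLAG_UNSYNC      = 1u << 1,
};

struct shadow_page {
	unsigned int flags;	/* stand-in for kvm_mmu_page state */
};

static bool sp_unsync(const struct shadow_page *sp)
{
	return sp->flags & SP_FLAG_UNSYNC;
}

static void sp_set_unsync(struct shadow_page *sp)
{
	sp->flags |= SP_FLAG_UNSYNC;
}

static void sp_clear_unsync(struct shadow_page *sp)
{
	sp->flags &= ~SP_FLAG_UNSYNC;
}
```

Setting or clearing one flag leaves the other bits untouched, which is the point of packing them into one word.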
- chain all unsync shadow pages then we can fetch them quickly
- flush local/remote tlb after all shadow page synced
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |1 +
arch/x86/kvm/mmu.c | 82
Avi Kivity wrote:
kvm->arch.n_free_mmu_pages = 0;
@@ -1589,7 +1589,8 @@ static void mmu_unshadow(struct kvm *kvm, gfn_t gfn)
!sp->role.invalid) {
pgprintk("%s: zap %lx %x\n",
__func__, gfn, sp->role.word);
-
Avi Kivity wrote:
On 04/12/2010 11:02 AM, Xiao Guangrong wrote:
- 'vcpu' is not used while marking parents unsync, so remove it
- if a page is already marked unsync, there is no need to walk its parents
Please separate these two changes.
The optimization looks good. Perhaps it can be done even
Hi Avi,
Avi Kivity wrote:
hlist_for_each_entry_safe() is supposed to be safe against removal of
the element that is pointed to by the iteration cursor.
If we destroy the next element, hlist_for_each_entry_safe() is unsafe.
Here is hlist_for_each_entry_safe()'s code:
|#define
Hi Avi,
Thanks for your comments.
Avi Kivity wrote:
Later we have:
kvm_x86_ops->set_cr4(vcpu, cr4);
vcpu->arch.cr4 = cr4;
vcpu->arch.mmu.base_role.cr4_pge = (cr4 & X86_CR4_PGE)
&& !tdp_enabled;
All of which depend on cr4.
Oh, destroy_kvm_mmu() is not really
Avi Kivity wrote:
On 04/12/2010 11:05 AM, Xiao Guangrong wrote:
'multimapped' and 'unsync' in 'struct kvm_mmu_page' are just indication
field, we can use flag bits instand of them
@@ -202,9 +202,10 @@ struct kvm_mmu_page {
* in this shadow page.
*/
DECLARE_BITMAP
Avi Kivity wrote:
On 04/12/2010 11:06 AM, Xiao Guangrong wrote:
- chain all unsync shadow pages then we can fetch them quickly
- flush local/remote tlb after all shadow page synced
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |1
Avi Kivity wrote:
On 04/12/2010 12:22 PM, Xiao Guangrong wrote:
Hi Avi,
Avi Kivity wrote:
But kvm_mmu_zap_page() will only destroy sp == tpos == pos; n already points at
pos->next, so it's safe.
kvm_mmu_zap_page(sp) not only zaps sp but also zaps all of sp's unsync child
pages; if n
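To make the hazard being discussed concrete, here is a small user-space model (my own illustration, not kernel code) of the _safe iteration pattern: caching the next pointer before the loop body runs makes it safe to free the cursor itself, but not any other node the body might free (the way zapping a page can also zap its unsync children further down the list).

```c
#include <assert.h>
#include <stdlib.h>

struct node {
	int val;
	struct node *next;
};

static struct node *push(struct node *head, int val)
{
	struct node *n = malloc(sizeof(*n));

	n->val = val;
	n->next = head;
	return n;
}

/* Delete every even-valued node. Safe in the _safe-iterator sense:
 * 'n' is cached before the body runs, and the body only ever frees
 * the cursor 'cur'. If it also freed the node 'n' points at, the
 * next loop step would dereference freed memory. */
static struct node *drop_even(struct node *head)
{
	struct node **link = &head, *cur, *n;

	for (cur = head; cur; cur = n) {
		n = cur->next;		/* cached next, like the macro */
		if (cur->val % 2 == 0) {
			*link = n;
			free(cur);	/* freeing the cursor is fine */
		} else {
			link = &cur->next;
		}
	}
	return head;
}
```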
Marcelo Tosatti wrote:
Xiao,
Did you actually see this codepath as being performance sensitive?
Actually, I have not run benchmarks to compare the performance before
and after this patch.
I'd prefer to not touch it.
This patch avoids walking all parents, and I think this overhead
Avi Kivity wrote:
See 6364a3918cb. It was reverted later due to a problem with the
implementation. I'm not sure whether I want to fix the bug and restore
that patch, or to drop it altogether and give the guest ownership of
cr4.pge. See cr4_guest_owned_bits (currently only used on ept).
Marcelo Tosatti wrote:
I'd prefer to not touch it.
This patch avoids walking all parents, and I think this overhead is really
unnecessary.
Are there other tricks in this codepath that I haven't noticed? :-)
My point is that there is no point in optimizing something unless its
performance
Remove 'struct kvm_unsync_walk' since it is no longer used
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |5 -
1 files changed, 0 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b44380b..a23ca75 100644
--- a/arch/x86
- calculate the zapped page count properly in mmu_zap_unsync_children()
- calculate the freed page count properly in kvm_mmu_change_mmu_pages()
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 12
1 files changed, 8 insertions(+), 4 deletions(-)
diff
Quote from Avi:
|Just change the assignment to a 'goto restart;' please,
|I don't like playing with list_for_each internals.
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 15 ++-
1 files changed, 10 insertions(+), 5 deletions(-)
diff --git
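The shape Avi asks for can be modeled in a few lines of user-space C (my own sketch; an array of flags stands in for the mmu page hash list). Whenever zapping an entry may have removed other entries too, the walk gives up on its cursor and restarts from the top instead of touching the iterator's internals.

```c
#include <assert.h>

#define NPAGES 6

static int zapped[NPAGES];

/* Mimic kvm_mmu_zap_page(): zapping page i also zaps its "child"
 * i + 1, so the walk can't trust any cached position afterwards. */
static void zap_page(int i)
{
	zapped[i] = 1;
	if (i + 1 < NPAGES)
		zapped[i + 1] = 1;
}

/* The 'goto restart' pattern: after any zap, restart the walk from
 * the head rather than continuing with a possibly stale cursor. */
static int zap_all(void)
{
	int i, nzapped = 0;

restart:
	for (i = 0; i < NPAGES; i++) {
		if (!zapped[i]) {
			zap_page(i);
			nzapped++;
			goto restart;	/* list may have changed under us */
		}
	}
	return nzapped;
}
```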
define 'multimapped' as 'bool'
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 0c49c88..cace232
'vcpu' is unused, remove it
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 24 +++-
1 files changed, 11 insertions(+), 13 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a32c60c..2f8ae9e 100644
--- a/arch/x86/kvm
This patch fixes:
- calculate the zapped page count properly in mmu_zap_unsync_children()
- calculate the freed page count properly in kvm_mmu_change_mmu_pages()
- if child pages were zapped, the hlist walk should be restarted
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c
Quote from Avi:
|Just change the assignment to a 'goto restart;' please,
|I don't like playing with list_for_each internals.
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 15 ++-
1 files changed, 10 insertions(+), 5 deletions(-)
diff --git
In the current code, a shadow page can become unsync only if it is the
only shadow page for its gfn. This rule is too strict; in fact, we can
let all last-level mapping pages (i.e., the pte pages) become unsync
and sync them at invlpg or TLB-flush time.
Following this thinking, a gfn may have many shadow pages, for
If the guest is 32-bit, we should use 'quadrant' to adjust gpa
offset
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/paging_tmpl.h |7 ++-
1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
Convert mmu tracepoints by using DECLARE_EVENT_CLASS
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmutrace.h | 69 +-
1 files changed, 26 insertions(+), 43 deletions(-)
diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86
Move the unsync/sync tracepoints to the proper place; this lets us
observe unsync page lifetimes
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
Use '!sp->role.cr4_pae' to replace 'PTTYPE == 32' and use
'pte_size = sp->role.cr4_pae ? 8 : 4' to replace sizeof(pt_element_t).
Then there is no need to compile this code twice
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 60
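The runtime check described above boils down to a one-line selection of the guest PTE size. A self-contained sketch (the struct is an illustrative stand-in for the role word in kvm_mmu_page):

```c
#include <assert.h>
#include <stdbool.h>

struct sp_role {
	bool cr4_pae;	/* stand-in for kvm_mmu_page's role.cr4_pae bit */
};

/* A PAE/long-mode shadow page shadows 8-byte guest PTEs; a legacy
 * 32-bit one shadows 4-byte PTEs. Deciding this at runtime replaces
 * the PTTYPE == 32 compile-time split. */
static unsigned int guest_pte_size(const struct sp_role *role)
{
	return role->cr4_pae ? 8 : 4;
}
```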
Use is_last_spte() to clean up the invlpg code
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |4 +---
1 files changed, 1 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index fac7c09..fd027a6 100644
--- a/arch/x86/kvm/mmu.c
If there is a new mapping to an unsync page (i.e., a new parent is added), just
update the page from sp->gfn but do not write-protect gfn; and if we need to
create a new shadow page from sp->gfn, we should sync it
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 27
mapping time
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 81 +++
1 files changed, 37 insertions(+), 44 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8607a64..13378e7 100644
--- a/arch
Allow more pages to become unsync when getting an sp: if we need to create a new
shadow page for a gfn but it is not allowed to be unsync (level 0), we should
sync all of the gfn's unsync pages
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 22 --
1 files
Let invlpg not depend on the kvm_mmu_pte_write path; a later patch will need
this feature
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 40
1 files changed, 24 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kvm
unsync page; the unsync
page is only updated at invlpg/flush-TLB time
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 10 ++
1 files changed, 6 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f092e71..5bdcc17
Marcelo Tosatti wrote:
role = vcpu->arch.mmu.base_role;
@@ -1332,12 +1336,16 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
hlist_for_each_entry_safe(sp, node, tmp, bucket, hash_link)
if (sp->gfn == gfn) {
if (sp->unsync)
Gui Jianfeng wrote:
Currently, in kvm_mmu_change_mmu_pages(kvm, page), used_pages-- is
performed after calling kvm_mmu_zap_page(), regardless of whether the
page is actually reclaimed, but a root sp won't be reclaimed by
kvm_mmu_zap_page(). So making kvm_mmu_zap_page() return the total
Avi Kivity wrote:
On 04/22/2010 09:12 AM, Xiao Guangrong wrote:
If the guest is 32-bit, we should use 'quadrant' to adjust gpa
offset
Good catch. Only affects kvm_mmu_pte_write(), so I don't think this had
ill effects other than not prefetching the correct address?
Yes
Avi Kivity wrote:
On 04/23/2010 02:27 PM, Avi Kivity wrote:
On 04/22/2010 09:12 AM, Xiao Guangrong wrote:
Use '!sp->role.cr4_pae' to replace 'PTTYPE == 32' and use
'pte_size = sp->role.cr4_pae ? 8 : 4' to replace sizeof(pt_element_t).
Then there is no need to compile this code twice
I think we
Changelog v2:
- when the level is PT_DIRECTORY_LEVEL, the 'offset' should be
'role.quadrant << 8', thanks to Avi for pointing it out
- keep the invlpg code in paging_tmpl.h, addressing Avi's suggestion
- split kvm_sync_page() into kvm_sync_page() and kvm_sync_page_transient()
to clarify the code, addressing Avi's
If the guest is 32-bit, we should use 'quadrant' to adjust gpa
offset
Changelog v2:
- when the level is PT_DIRECTORY_LEVEL, the 'offset' should be
'role.quadrant << 8', thanks to Avi for pointing it out
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/paging_tmpl.h | 13
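The quadrant adjustment in the changelog can be sketched as follows, under my own assumption (not stated in the mail) that for a 32-bit guest a page-directory shadow page covers only 256 of the 1024 guest entries, so 'quadrant << 8' selects which quarter of the guest table this shadow page maps; all names here are illustrative, not the patch's code.

```c
#include <assert.h>
#include <stdint.h>

/* Compute the gpa of a guest pte given the table gfn, the index of
 * the spte within the shadow page, and the shadow page's quadrant.
 * The quadrant offsets the index by 256 guest entries per quadrant
 * before converting to a byte offset. */
static uint64_t guest_pte_gpa(uint64_t table_gfn, unsigned int index,
			      unsigned int quadrant, unsigned int pte_size)
{
	unsigned int offset = (quadrant << 8) + index;

	return (table_gfn << 12) + (uint64_t)offset * pte_size;
}
```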
Convert mmu tracepoints by using DECLARE_EVENT_CLASS
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmutrace.h | 69 +-
1 files changed, 26 insertions(+), 43 deletions(-)
diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86
Move the unsync/sync tracepoints to the proper place; this lets us
observe unsync page lifetimes
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
Use is_last_spte() to clean up the invlpg code
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/paging_tmpl.h |4 +---
1 files changed, 1 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 83cc72f..8eb98eb 100644
Split kvm_sync_page() into kvm_sync_page() and kvm_sync_page_transient()
to clarify the code, addressing Avi's suggestion.
kvm_sync_page_transient() only updates the shadow page; it does not mark
it sync and does not write-protect sp->gfn. It will be used by a later patch.
Signed-off-by: Xiao Guangrong
not allowed to become unsync (also, for the unsync
rule, the new rule is: allow all pte pages to become unsync)
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 14 +++---
1 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
mapping time
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 81 +++
1 files changed, 37 insertions(+), 44 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b946a5f..5198fc9 100644
--- a/arch
Allow more pages to become unsync when getting an sp: if we need to create a
new shadow page for a gfn but it is not allowed to be unsync (level 1), we
should sync all of the gfn's unsync pages
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 21 +++--
1 files changed
Let invlpg not depend on the kvm_mmu_pte_write path; a later patch will need
this feature
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 12 +++-
arch/x86/kvm/paging_tmpl.h | 33 ++---
2 files changed, 29 insertions
unsync page; the unsync
page is only updated at invlpg/flush-TLB time
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |3 ++-
arch/x86/kvm/paging_tmpl.h | 23 +++
2 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/arch
Avi Kivity wrote:
On 04/25/2010 10:00 AM, Xiao Guangrong wrote:
If the guest is 32-bit, we should use 'quadrant' to adjust gpa
offset
Changelog v2:
- when the level is PT_DIRECTORY_LEVEL, the 'offset' should be
'role.quadrant << 8', thanks to Avi for pointing it out
Signed-off-by: Xiao
Avi Kivity wrote:
This isn't a split; it duplicates the code.
Since there are some parts in the middle of kvm_sync_page() you don't
want in sync_page_transient(), you can put them into helpers so that
sync_page and sync_page_transient only call helpers.
Will fix it in v3, thanks
Avi Kivity wrote:
On 04/25/2010 10:02 AM, Xiao Guangrong wrote:
Let invlpg not depend on the kvm_mmu_pte_write path; a later patch will need
this feature
if (mmu_topup_memory_caches(vcpu))
return;
-kvm_mmu_pte_write(vcpu, pte_gpa, NULL, sizeof(pt_element_t), 0
Avi Kivity wrote:
On 04/25/2010 10:00 AM, Xiao Guangrong wrote:
Two cases may happen in the kvm_mmu_get_page() function:
- one case is that the goal sp is already in the cache; if the sp is unsync,
we only need to update it to ensure this mapping is valid, but not
mark it sync and not write
If the guest is 32-bit, we should use 'quadrant' to adjust gpa
offset
Changelog v3:
- use a smarter way to fix this bug, addressing Avi's suggestion
Changelog v2:
- when the level is PT_DIRECTORY_LEVEL, the 'offset' should be
'role.quadrant << 8', thanks to Avi for pointing it out
Signed-off-by: Xiao Guangrong
Use is_last_spte() to clean up the invlpg code
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/paging_tmpl.h |4 +---
1 files changed, 1 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 3464fdb..89d66ca 100644
Split kvm_sync_page() into kvm_sync_page() and kvm_sync_page_transient()
to clarify the code, addressing Avi's suggestion.
kvm_sync_page_transient() only updates the shadow page; it does not mark
it sync and does not write-protect sp->gfn. It will be used by a later patch.
Signed-off-by: Xiao Guangrong
mapping time
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 81 +++
1 files changed, 37 insertions(+), 44 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index fb0c33c..a60cd51 100644
--- a/arch
Convert mmu tracepoints by using DECLARE_EVENT_CLASS
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmutrace.h | 69 +-
1 files changed, 26 insertions(+), 43 deletions(-)
diff --git a/arch/x86/kvm/mmutrace.h b/arch/x86
Move the unsync/sync tracepoints to the proper place; this lets us
observe unsync page lifetimes
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
not allowed to become unsync (also, for the unsync
rule, the new rule is: allow all pte pages to become unsync)
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 14 +++---
1 files changed, 11 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
Let invlpg not depend on the kvm_mmu_pte_write path; a later patch will need
this feature
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 22 +-
arch/x86/kvm/paging_tmpl.h | 36 +++-
2 files changed
unsync page; the unsync
page is only updated at invlpg/flush-TLB time
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |3 ++-
arch/x86/kvm/paging_tmpl.h | 11 +++
2 files changed, 9 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b
Marcelo Tosatti wrote:
On Wed, Apr 28, 2010 at 11:55:49AM +0800, Xiao Guangrong wrote:
In the current code, a shadow page can become unsync only if it is the
only shadow page for its gfn; this rule is too strict. In fact, we can
let all last-level mapping pages (i.e., the pte pages) become unsync,
and sync
When mapping a new parent to an unsync shadow page, we should set the
parent's unsync_children bit
Reported-by: Marcelo Tosatti mtosa...@redhat.com
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)
diff
-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/paging_tmpl.h | 22 --
1 files changed, 20 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 624b38f..13ea675 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b
Avi Kivity wrote:
On 05/05/2010 03:19 PM, Xiao Guangrong wrote:
When mapping a new parent to an unsync shadow page, we should set the
parent's unsync_children bit
Reported-by: Marcelo Tosatti mtosa...@redhat.com
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c
Avi Kivity wrote:
spin_lock(&vcpu->kvm->mmu_lock);
+index = kvm_page_table_hashfn(gfn);
+bucket = &vcpu->kvm->arch.mmu_page_hash[index];
+hlist_for_each_entry_safe(s, node, tmp, bucket, hash_link)
+if (s == sp) {
+if (s->gfn == gfn && s->role.word == role.word)
Avi Kivity wrote:
On 04/30/2010 12:05 PM, Xiao Guangrong wrote:
If 'oos_shadow' == 0, intercepting the invlpg command is really
unnecessary.
It is also useful for us to compare the performance with 'oos_shadow'
enabled and disabled
@@ -74,8 +74,9 @@ static int dbg = 0
Changelog v4:
- fix the bug reported by Marcelo
- fix the race in the invlpg code
Changelog v3:
These changes all come from Avi's suggestions, thanks.
- use a smarter way to fix the bug in patch 1
- remove duplicated code in patch 5
- check the error code and fix a forgotten page release in patch 9
- sync shadow
Split kvm_sync_page() into kvm_sync_page() and kvm_sync_page_transient()
to clarify the code, addressing Avi's suggestion.
kvm_sync_page_transient() only updates the shadow page; it does not mark
it sync and does not write-protect sp->gfn. It will be used by a later patch.
Signed-off-by: Xiao Guangrong
not allowed to become unsync (also, for the unsync
rule, the new rule is: allow all pte pages to become unsync)
Changelog:
- fix forgetting to set the parent's unsync_children bit when mapping a new
parent to an unsync shadow page
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm
mapping time
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 81 +++
1 files changed, 37 insertions(+), 44 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 1dbb96e..ae8c43b 100644
--- a/arch
Allow more pages to become unsync when getting an sp: if we need to create a
new shadow page for a gfn but it is not allowed to be unsync (level 1), we
should sync all of the gfn's unsync pages
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 47
Rename 'root_count' to 'active_count' in kvm_mmu_page, since unsync pages
will also use it in a later patch
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |7 ++-
arch/x86/kvm/mmu.c | 14 +++---
arch/x86/kvm
() then we can
free the invalid unsync page by calling kvm_mmu_free_page directly.
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 15 +--
1 files changed, 9 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 58cf0f1
Let invlpg not depend on the kvm_mmu_pte_write path; a later patch will need
this feature
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 22 +-
arch/x86/kvm/paging_tmpl.h | 44 +++-
2 files
'invlpg_counter' is protected by 'kvm->mmu_lock'; no atomic
operations are needed anymore
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |2 +-
arch/x86/kvm/paging_tmpl.h |7 ---
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git
unsync page; the unsync
page is only updated at invlpg/flush-TLB time
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |3 ++-
arch/x86/kvm/paging_tmpl.h | 12
2 files changed, 10 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/mmu.c
Hi Avi, Marcelo,
patches 5 and 6 can't be applied to the current kvm tree; I'll
rebase those two patches.
Marcelo, does this patchset fix your issue? I have tested it with
Fedora 12/Ubuntu/CentOS 32/64 guests, and it works well.
Thanks,
Xiao
Rename 'root_count' to 'active_count' in kvm_mmu_page, since unsync pages
will also use it in a later patch
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/include/asm/kvm_host.h |8 +++-
arch/x86/kvm/mmu.c | 14 +++---
arch/x86/kvm
() then we can
free the invalid unsync page by calling kvm_mmu_free_page directly.
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 11 +++
1 files changed, 7 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 4077a9c
Where to alloc, where to free
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 18 ++
1 files changed, 10 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 604eb3f..67da751 100644
--- a/arch/x86/kvm
Remove the rmap before clearing the spte; otherwise it will trigger the
BUG_ON() in functions such as rmap_write_protect()
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86
sp->gfns[] are not the mapping gfns, since they have been cooked by unalias_gfn()
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/paging_tmpl.h |7 ---
1 files changed, 4 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
fix two typos in next branch
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c |4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a474d93..68f79b0 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86
Split kvm_sync_page() into kvm_sync_page() and kvm_sync_page_transient()
to clarify the code, addressing Avi's suggestion.
kvm_sync_page_transient() only updates the shadow page; it does not mark
it sync and does not write-protect sp->gfn. It will be used by a later patch.
Signed-off-by: Xiao Guangrong
not allowed to become unsync (also, for the unsync
rule, the new rule is: allow all pte pages to become unsync)
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 18 ++
1 files changed, 14 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c
Only unsync pages need to be updated at invlpg time, since other shadow
pages are write-protected
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/paging_tmpl.h |8 ++--
1 files changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch
mapping time
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 81 +++
1 files changed, 37 insertions(+), 44 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 97c5217..1c558ba 100644
--- a/arch
Allow more pages to become unsync when getting an sp: if we need to create a
new shadow page for a gfn but it is not allowed to be unsync (level 1), we
should sync all of the gfn's unsync pages
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 47
Avi Kivity wrote:
+if (need_unsync)
+kvm_unsync_pages(vcpu, gfn);
return 0;
}
Looks good, I'm just uncertain about role.invalid handling. What's the
reasoning here?
Avi,
Thanks for your reply.
We need not worry about 'role.invalid' here, since we only allow
Avi Kivity wrote:
On 05/23/2010 03:16 PM, Xiao Guangrong wrote:
Allow more pages to become unsync when getting an sp: if we need to create a
new shadow page for a gfn but it is not allowed to be unsync (level 1), we
should sync all of the gfn's unsync pages
+/* @gfn should be write-protected at the call site
Avi Kivity wrote:
On 05/24/2010 05:03 AM, Xiao Guangrong wrote:
Avi Kivity wrote:
+if (need_unsync)
+kvm_unsync_pages(vcpu, gfn);
return 0;
}
Looks good, I'm just uncertain about role.invalid handling. What's the
reasoning here?
Avi
mapping time
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 82
1 files changed, 38 insertions(+), 44 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 97c5217..170c8f7 100644
--- a/arch
Allow more pages to become unsync when getting an sp: if we need to create a
new shadow page for a gfn but it is not allowed to be unsync (level 1), we
should sync all of the gfn's unsync pages
Signed-off-by: Xiao Guangrong xiaoguangr...@cn.fujitsu.com
---
arch/x86/kvm/mmu.c | 47