On 2016/03/08 17:30, Paolo Bonzini wrote:
> On 08/03/2016 09:00, Takuya Yoshikawa wrote:
>>> KVM: MMU: introduce kvm_mmu_flush_or_zap
>>> KVM: MMU: move TLB flush out of __kvm_sync_page
>>> KVM: MMU: use kvm_sync_page in kvm_sync_pages
>>> KV
On 2016/03/07 23:15, Paolo Bonzini wrote:
> Having committed the ubsan fixes, these are the cleanups that are left.
>
> Compared to v1, I have fixed the patch to coalesce page zapping after
> mmu_sync_children (as requested by Takuya and Guangrong), and I have
> rewritten is_last_gpte again in an
On 2016/02/24 22:17, Paolo Bonzini wrote:
> Move the call to kvm_mmu_flush_or_zap outside the loop.
>
> Signed-off-by: Paolo Bonzini
> ---
> arch/x86/kvm/mmu.c | 9 ++---
> 1 file changed, 6 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/mmu.c
The end result is very similar to handle_ept_misconfig()'s corresponding code.
It may also be possible to change handle_ept_misconfig() not to call
handle_mmio_page_fault() separately from kvm_mmu_page_fault():
the only difference seems to be whether it checks for PFERR_RSVD_MASK.
Takuya
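The PFERR_RSVD_MASK difference noted above can be sketched in a few lines. PFERR_RSVD_MASK is the real x86 page-fault error-code bit; the helper around it is a hypothetical illustration, not kernel code:

```c
#include <assert.h>
#include <stdbool.h>

/* Bit 3 of the x86 page-fault error code, set when the fault was caused by
 * a reserved bit in a paging-structure entry. KVM uses reserved-bit faults
 * to recognize faults on MMIO sptes. */
#define PFERR_RSVD_MASK (1UL << 3)

/* Hypothetical helper showing the check in question. */
static bool error_code_indicates_mmio(unsigned long error_code)
{
    return (error_code & PFERR_RSVD_MASK) != 0;
}
```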
extra error_code check
- avoids returning both RET_MMIO_PF_* values and raw integer values
from vcpu->arch.mmu.page_fault()
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya...@lab.ntt.co.jp>
---
arch/x86/kvm/mmu.c | 39 ---
arch/x86/kv
for.
Signed-off-by: Takuya Yoshikawa <yoshikawa_takuya...@lab.ntt.co.jp>
---
arch/x86/kvm/mmu.c | 15 ---
1 file changed, 4 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 95a955d..a28b734 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x
Not just in order to clean up the code, but to make it faster by using
enhanced instructions: the initialization became 20-30% faster on our
testing machine.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 10 +-
1 file changed, 1 insertion(+), 9 deletions(-)
diff --git a/arch
As kvm_mmu_get_page() was changed so that every parent pointer would not
get into the sp->parent_ptes chain before the entry pointed to by it was
set properly, we can use the for_each_rmap_spte macro instead of
pte_list_walk().
Signed-off-by: Takuya Yoshikawa
Cc: Xiao Guangrong
---
arch/
off-by: Takuya Yoshikawa
Cc: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 23 +--
arch/x86/kvm/paging_tmpl.h | 6 ++
2 files changed, 11 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7f46e3e..ec61b22 100644
--- a/arch/x86/kvm/mm
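The iteration pattern being switched to can be sketched in userspace. The real for_each_rmap_spte in mmu.c takes a struct rmap_iterator and handles the single-spte encoding of the head; this simplified model keeps only the desc-chain case to show the shape of the walk (all names besides the macro are illustrative):

```c
#include <stddef.h>

/* Simplified model: each desc holds up to 3 spte pointers and links to the
 * next desc; NULL entries terminate a partially filled desc. */
struct pte_list_desc {
    unsigned long *sptes[3];
    struct pte_list_desc *more;
};

struct kvm_rmap_head {
    struct pte_list_desc *desc; /* simplified: always a desc chain */
};

/* Illustrative stand-in for the real macro: visit every non-NULL spte. */
#define for_each_rmap_spte(head, d, i, sptep)                       \
    for ((d) = (head)->desc; (d); (d) = (d)->more)                  \
        for ((i) = 0; (i) < 3 && ((sptep) = (d)->sptes[i]); (i)++)

/* Example walker: count the sptes chained off one rmap head. */
int count_sptes(struct kvm_rmap_head *head)
{
    struct pte_list_desc *d;
    unsigned long *sptep;
    int i, n = 0;

    for_each_rmap_spte(head, d, i, sptep)
        n += sptep != NULL;
    return n;
}
```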
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 20 +++-
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 204c7d4..a1a3d19 100644
--- a/arch/x86/kvm/mmu.c
+++ b
Guests worked normally in shadow paging mode (ept=0) on my test machine.
Please check if the first two patches reflect what you meant correctly.
Takuya Yoshikawa (3):
[1] KVM: x86: MMU: Move parent_pte handling from kvm_mmu_get_page() to
link_shadow_page()
[2] KVM: x86: MMU: Use
On 2015/11/26 1:32, Paolo Bonzini wrote:
On 20/11/2015 09:57, Xiao Guangrong wrote:
You can move this patch to the front of
[PATCH 08/10] KVM: x86: MMU: Use for_each_rmap_spte macro instead of
pte_list_walk()
By moving kvm_mmu_mark_parents_unsync() to the behind of mmu_spte_set()
(then the
On 2015/11/20 17:46, Xiao Guangrong wrote:
You just ignored my comment on the previous version...
I'm sorry but please read the explanation in patch 00.
I've read your comments and I'm not ignoring you.
Since this patch set has become larger than expected, I'm sending
this version so that
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 20 +++-
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index b020323..9baf884 100644
--- a/arch/x86/kvm/mmu.c
+++ b
to kvm_mmu_get_page() just for mark_unsync() and
mmu_page_add_parent_pte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 22 --
arch/x86/kvm/paging_tmpl.h | 6 ++
2 files changed, 10 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
is not set yet.
By calling mark_unsync() separately for the parent and adding the parent
pointer to the parent_ptes chain later in kvm_mmu_get_page(), the macro
works with no problem.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 +---
1 file changed, 13
Make kvm_mmu_alloc_page() do just what its name says, and remove
the extra allocation error check and zero-initialization of parent_ptes:
shadow page headers allocated by kmem_cache_zalloc() are always in the
per-VCPU pools.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 14
-by: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt | 4 ++--
arch/x86/kvm/mmu.c| 26 +-
2 files changed, 19 insertions(+), 11 deletions(-)
diff --git a/Documentation/virtual/kvm/mmu.txt
b/Documentation/virtual/kvm/mmu.txt
index 3a4d681..daf9c0f 100644
changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.
Since using both of them without clear distinction just makes the code
confusing, remove is_rmap_spte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 13 -
arch/x86/kvm/mmu_audi
value instead to clean up this complex interface. Prefetch functions
can just throw away the return value.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 27 ++-
arch/x86/kvm/paging_tmpl.h | 10 +-
2 files changed, 19 insertions(+), 18 deletions
-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 8a1593f..9832bc9 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1809,6 +1809,13 @@ static
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index d9a6801..8a1593f 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void
New struct kvm_rmap_head makes the code type-safe to some extent.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/include/asm/kvm_host.h | 8 +-
arch/x86/kvm/mmu.c | 196
arch/x86/kvm/mmu_audit.c| 13 +--
3 files changed, 113
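The type-safety gain can be seen from the struct itself: instead of passing around a bare unsigned long (which either stores one spte pointer directly or, with a low bit set, points to a pte_list_desc), the head gets its own type, so the compiler can tell an rmap-chain head apart from any other unsigned long. The struct below follows the patch; the helper is an illustrative sketch:

```c
#include <stdbool.h>

/* One-field wrapper giving rmap-chain heads a distinct type. */
struct kvm_rmap_head {
    unsigned long val;
};

/* Illustrative helper: with the wrapper, functions like this can only be
 * handed a real rmap head, not an arbitrary unsigned long. */
static bool rmap_head_empty(const struct kvm_rmap_head *head)
{
    return head->val == 0;
}
```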
three, I'm not sure what we should do now, still RFC?
We can also consider other approaches, e.g. moving link_shadow_page() in the
kvm_get_mmu_page() as Paolo suggested before.
Takuya
Takuya Yoshikawa (10):
[01] KVM: x86: MMU: Encapsulate the type of rmap-chain head in a new struct
[02] KV
On 2015/11/19 11:46, Xiao Guangrong wrote:
Actually, some people prefer to put braces when one of the
if/else-if/else cases has multiple lines. You can see
some examples in kernel/sched/core.c: see hrtick_start(),
sched_fork(), free_sched_domain().
In our case, I thought putting braces would
On 2015/11/18 18:09, Paolo Bonzini wrote:
On 18/11/2015 04:21, Xiao Guangrong wrote:
On 11/12/2015 07:55 PM, Takuya Yoshikawa wrote:
@@ -1720,7 +1724,7 @@ static struct kvm_mmu_page
*kvm_mmu_alloc_page(struct kvm_vcpu *vcpu,
* this feature. See the comments in kvm_zap_obsolete_pages
On 2015/11/18 11:44, Xiao Guangrong wrote:
On 11/12/2015 07:50 PM, Takuya Yoshikawa wrote:
+if (!ret) {
+clear_unsync_child_bit(sp, i);
+continue;
+} else if (ret > 0) {
nr_unsync_leaf += ret;
Just a single line h
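The branch being reviewed can be modeled in isolation. In the unsync walk, ret == 0 means the child subtree turned out to have no unsync pages (so its bit in unsync_child_bitmap is cleared), ret > 0 adds found leaves, and ret < 0 propagates an error. The one-word bitmap and function names below are illustrative, not the kernel's:

```c
/* Simplified shadow page: a single-word unsync child bitmap. */
struct sp_model {
    unsigned long unsync_child_bitmap;
};

static void clear_unsync_child_bit(struct sp_model *sp, int i)
{
    sp->unsync_child_bitmap &= ~(1UL << i);
}

/* Sketch of the if/else-if chain quoted above, pulled out as a helper. */
static int handle_child_result(struct sp_model *sp, int i, int ret,
                               int *nr_unsync_leaf)
{
    if (!ret) {
        clear_unsync_child_bit(sp, i);
        return 0;
    } else if (ret > 0) {
        *nr_unsync_leaf += ret;
        return 0;
    }
    return ret; /* error from the recursive walk */
}
```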
On 2015/11/14 7:08, Marcelo Tosatti wrote:
On Thu, Nov 12, 2015 at 08:53:43PM +0900, Takuya Yoshikawa wrote:
At some call sites of rmap_get_first() and rmap_get_next(), BUG_ON is
placed right after the call to detect unrelated sptes which must not be
found in the reverse-mapping list.
Move
On 2015/11/14 18:20, Marcelo Tosatti wrote:
The actual issue is this: a higher level page that had, under its children,
no out of sync pages now has, due to your addition, a child that is unsync:
initial state:
level1
final state:
level1 -x-> level2 -x-> level3
Where -x-> are
On 2015/11/12 23:27, Paolo Bonzini wrote:
On 12/11/2015 12:56, Takuya Yoshikawa wrote:
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 9d21b44..f414ca6 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -598,7 +598,7 @@ static int FNAME(fetch
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 20 +++-
arch/x86/kvm/paging_tmpl.h | 4 ++--
2 files changed, 9 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 33fe720..101e77d 100644
--- a/arch/x86/kvm/mmu.c
+++ b
to kvm_mmu_get_page() just for mark_unsync() and
mmu_page_add_parent_pte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 21 -
arch/x86/kvm/paging_tmpl.h | 6 ++
2 files changed, 10 insertions(+), 17 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm
Make kvm_mmu_alloc_page() do just what its name says, and remove
the extra error check at its call site since the allocation cannot fail.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 15 ---
1 file changed, 8 insertions(+), 7 deletions(-)
diff --git a/arch/x86
New struct kvm_rmap_head makes the code type-safe to some extent.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/include/asm/kvm_host.h | 8 +-
arch/x86/kvm/mmu.c | 169 +---
arch/x86/kvm/mmu_audit.c| 13 ++--
3 files changed, 100
sptes are present, at least until drop_parent_pte()
actually unlinks them, and not mmio-sptes.
Signed-off-by: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt | 4 ++--
arch/x86/kvm/mmu.c| 26 +-
2 files changed, 19 insertions(+), 11 deletions(-)
diff
is not set yet.
By calling mark_unsync() separately for the parent and adding the parent
pointer to the parent_ptes chain later in kvm_mmu_get_page(), the macro
works with no problem.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 +---
1 file changed, 13
changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.
Since using both of them with no clear distinction just makes the code
confusing, remove is_rmap_spte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 13 -
arch/x86/kvm/mmu_audi
-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c3bbc82..f3120aa 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1806,6 +1806,13 @@ static
value instead to clean up this complex interface. Prefetch functions
can just throw away the return value.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 27 ++-
arch/x86/kvm/paging_tmpl.h | 10 +-
2 files changed, 19 insertions(+), 18 deletions
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 12
1 file changed, 4 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e7c2c14..c3bbc82 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void
o alleviate the sadness.
Takuya
Takuya Yoshikawa (10):
01: KVM: x86: MMU: Remove unused parameter of __direct_map()
02: KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap
03: KVM: x86: MMU: Make mmu_set_spte() return emulate value
04: KVM: x86: MMU: Remove
On 2015/11/09 19:14, Paolo Bonzini wrote:
Can you also change kvm_mmu_mark_parents_unsync to use
for_each_rmap_spte instead of pte_list_walk? It is the last use of
pte_list_walk, and it's nice if we have two uses of for_each_rmap_spte
with parent_ptes as the argument.
No problem, I will do.
sptes are present, at least until drop_parent_pte()
actually unlinks them, and not mmio-sptes.
Signed-off-by: Takuya Yoshikawa
---
Documentation/virtual/kvm/mmu.txt | 4 ++--
arch/x86/kvm/mmu.c| 31 ++-
2 files changed, 24 insertions(+), 11 deletions
changed their roles somewhat, and is_rmap_spte()
just calls is_shadow_present_pte() now.
Since using both of them with no clear distinction just makes the
code confusing, remove is_rmap_spte().
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 13 -
arch/x86/kvm/mmu_audi
value instead to clean up this complex interface. Prefetch functions
can just throw away the return value.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 27 ++-
arch/x86/kvm/paging_tmpl.h | 10 +-
2 files changed, 19 insertions(+), 18 deletions
-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 36 ++--
1 file changed, 18 insertions(+), 18 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index a76bc04..a9622a2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1806,6 +1806,13 @@ static
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 11 ---
1 file changed, 4 insertions(+), 7 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7d85bca..a76bc04 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -2708,9 +2708,8 @@ static void
Patch 1/2/3 are easy ones.
Following two, patch 4/5, may not be ideal solutions, but at least
explain, or try to explain, the problems.
Takuya Yoshikawa (5):
KVM: x86: MMU: Remove unused parameter of __direct_map()
KVM: x86: MMU: Add helper function to clear a bit in unsync child bitmap
lot() when
!check_hugepage_cache_consistency() check in tdp_page_fault() forces
page table level mapping.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 7 ---
arch/x86/kvm/paging_tmpl.h | 2 +-
2 files changed, 5 insertions(+), 4 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/m
Calling kvm_vcpu_gfn_to_memslot() twice in mapping_level() should be
avoided since getting a slot by binary search may not be negligible,
especially for virtual machines with many memory slots.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 17 +++--
1 file changed, 11
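The cost being avoided is roughly this: resolving a gfn to its memslot is a binary search over the sorted slot array, so doing it once per fault and passing the result down beats calling the lookup twice. A minimal userspace sketch of such a lookup (slot layout and names are illustrative, not KVM's):

```c
#include <stddef.h>

struct memslot {
    unsigned long base_gfn;
    unsigned long npages;
};

/* Binary search over slots sorted by base_gfn, non-overlapping. Returns the
 * slot containing gfn, or NULL. Callers should resolve once and reuse. */
static const struct memslot *gfn_to_memslot(const struct memslot *slots,
                                            int nslots, unsigned long gfn)
{
    int lo = 0, hi = nslots;

    while (lo < hi) {
        int mid = (lo + hi) / 2;

        if (gfn < slots[mid].base_gfn)
            hi = mid;
        else if (gfn >= slots[mid].base_gfn + slots[mid].npages)
            lo = mid + 1;
        else
            return &slots[mid];
    }
    return NULL;
}
```

Under the patch's approach, a function like mapping_level() would take the already-resolved slot as a parameter rather than repeating this search.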
Now that it has only one caller, and its name is not so helpful for
readers, remove it. Instead, the new memslot_valid_for_gpte() function
makes it possible to share the common code.
Signed-off-by: Takuya Yoshikawa
---
arch/x86/kvm/mmu.c | 24
1 file changed, 16
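The shared predicate described above can be sketched as follows: a slot can back a guest PTE only if it exists and is not being deleted, and callers that cannot tolerate dirty logging additionally reject slots that have it enabled. Flag and field names here are illustrative stand-ins for the kernel's:

```c
#include <stdbool.h>
#include <stddef.h>

#define SLOT_INVALID (1u << 0) /* slot is being deleted/moved */

struct memslot {
    unsigned int flags;
    unsigned long *dirty_bitmap; /* non-NULL when dirty logging is on */
};

/* Hedged sketch of the common-code predicate from the patch. */
static bool memslot_valid_for_gpte(const struct memslot *slot,
                                   bool no_dirty_log)
{
    if (!slot || (slot->flags & SLOT_INVALID))
        return false;
    if (no_dirty_log && slot->dirty_bitmap)
        return false;
    return true;
}
```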