This table summarizes the information of the fixed MTRRs, and some APIs are
introduced to abstract their operation; this helps us clean up the code and will
be used in later patches
Signed-off-by: Xiao Guangrong guangrong.x...@linux.intel.com
---
arch/x86/kvm/mtrr.c | 191
Based on Intel's SDM, mapping a huge page that does not have a consistent
memory cache type for each 4k page will cause undefined behavior
In order to avoid this kind of undefined behavior, we force the use of
4k pages in this case
Signed-off-by: Xiao Guangrong guangrong.x...@linux.intel.com
---
arch
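The rule above can be sketched in a few lines. This is an illustrative model only, not the actual KVM code; the helper name and the flattened per-frame type array are assumptions made for the example:

```c
#include <assert.h>
#include <stdbool.h>

/* x86 memory type encodings (from the Intel SDM). */
#define MTRR_TYPE_UNCACHABLE 0
#define MTRR_TYPE_WRBACK     6

/*
 * Illustrative sketch (not the actual KVM code): a huge mapping is
 * safe only when every 4k frame inside it resolves to the same MTRR
 * memory type; otherwise the mapping must be forced down to 4k pages.
 */
static bool can_use_huge_page(const int *frame_types, int nframes)
{
	for (int i = 1; i < nframes; i++)
		if (frame_types[i] != frame_types[0])
			return false;	/* mixed cache types: use 4k pages */
	return true;
}
```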
vMTRR does not depend on any host MTRR feature and fixed MTRRs have always
been implemented, so drop this field
Signed-off-by: Xiao Guangrong guangrong.x...@linux.intel.com
---
arch/x86/include/asm/kvm_host.h | 9 -
arch/x86/kvm/mtrr.c | 7 +++
arch/x86/kvm/x86.c
Only KVM_NR_VAR_MTRR variable MTRRs are available in KVM guest
Signed-off-by: Xiao Guangrong guangrong.x...@linux.intel.com
---
arch/x86/include/asm/kvm_host.h | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
- kvm_mtrr_get_guest_memory_type() only checks a single page in the MTRRs, so
it is unnecessary to check whether the range is partially covered by an
MTRR
- optimize the check for overlapping memory types and add some comments to
explain the precedence
Signed-off-by: Xiao Guangrong guangrong.x
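The overlap precedence being commented is the one the Intel SDM defines for overlapping variable ranges. The sketch below is illustrative and not KVM's exact helper; the function name is an assumption:

```c
#include <assert.h>

/* x86 memory type encodings (Intel SDM). */
#define MTRR_TYPE_UNCACHABLE 0
#define MTRR_TYPE_WRCOMB     1
#define MTRR_TYPE_WRTHROUGH  4
#define MTRR_TYPE_WRBACK     6

/*
 * Variable-range overlap precedence per the Intel SDM (sketch only):
 * UC always wins, WT takes precedence over WB, and any other
 * combination of overlapping types is architecturally undefined
 * (modeled as -1 here).
 */
static int overlap_mem_type(int a, int b)
{
	if (a == b)
		return a;
	if (a == MTRR_TYPE_UNCACHABLE || b == MTRR_TYPE_UNCACHABLE)
		return MTRR_TYPE_UNCACHABLE;
	if ((a == MTRR_TYPE_WRTHROUGH && b == MTRR_TYPE_WRBACK) ||
	    (a == MTRR_TYPE_WRBACK && b == MTRR_TYPE_WRTHROUGH))
		return MTRR_TYPE_WRTHROUGH;
	return -1;	/* undefined per the SDM */
}
```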
It gets the range for the specified variable MTRR
Signed-off-by: Xiao Guangrong guangrong.x...@linux.intel.com
---
arch/x86/kvm/mtrr.c | 19 +--
1 file changed, 13 insertions(+), 6 deletions(-)
diff --git a/arch/x86/kvm/mtrr.c b/arch/x86/kvm/mtrr.c
index 888441e..aeb9767 100644
On 05/30/2015 06:59 PM, Xiao Guangrong wrote:
Currently guest MTRR is completely prohibited if cache snooping is supported on
the IOMMU (!noncoherent_dma), and the host does the emulation based on knowledge
from the host side; however, the host side is not a good place to know
the purpose of the guest
It walks all MTRRs and gets all the memory cache type settings for the
specified range; it also checks whether the range is fully covered by MTRRs
Signed-off-by: Xiao Guangrong guangrong.x...@linux.intel.com
---
arch/x86/kvm/mtrr.c | 183
1 file
The MTRR code is located in x86.c and mmu.c; move it to a separate file to
make the organization clearer, and it will be the place where we fully
implement vMTRR
Signed-off-by: Xiao Guangrong guangrong.x...@linux.intel.com
---
arch/x86/include/asm/kvm_host.h | 2 +-
arch/x86/kvm/Makefile
Variable MTRR MSRs are 64 bits and are directly accessed at full length;
there is no reason to split them into two 32-bit halves
Signed-off-by: Xiao Guangrong guangrong.x...@linux.intel.com
---
arch/x86/include/asm/kvm_host.h | 7 ++-
arch/x86/kvm/mtrr.c | 32
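The cleanup can be illustrated with a small union. This is a hypothetical layout, not the actual kvm_mtrr structures; it only shows the reassembly that split 32-bit halves force on every reader (little-endian, as on x86, assumed):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical layout, for illustration only: a variable MTRR
 * base/mask MSR is one 64-bit value; keeping it split into two
 * 32-bit halves forces every reader to reassemble it by hand.
 */
union mtrr_msr {
	uint64_t val;			/* direct full-length access */
	struct {
		uint32_t lo, hi;	/* the old split representation */
	};				/* little-endian (x86) assumed */
};

/* What code accessing the split form had to do on every read. */
static uint64_t assemble(uint32_t lo, uint32_t hi)
{
	return ((uint64_t)hi << 32) | lo;
}
```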
Sort all valid variable MTRRs by their base address; this will help us
check whether a range is fully contained in the variable MTRRs
Signed-off-by: Xiao Guangrong guangrong.x...@linux.intel.com
---
arch/x86/include/asm/kvm_host.h | 3 +++
arch/x86/kvm/mtrr.c | 39
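A comparator like the one below captures the intended ordering. The struct is a hypothetical flattened view of a variable MTRR, and qsort is used only to demonstrate the sort key; the patch itself keeps the list ordered as MSRs are written:

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical flattened view of a variable MTRR, for the sketch only. */
struct var_mtrr_range {
	uint64_t base;
	uint64_t mask;
};

/* Order valid variable MTRRs by base address so a later range check
 * can walk them front to back and stop early. */
static int cmp_by_base(const void *a, const void *b)
{
	const struct var_mtrr_range *x = a, *y = b;

	return (x->base > y->base) - (x->base < y->base);
}
```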
Two functions are introduced:
- fixed_mtrr_addr_to_seg() translates the address to the fixed
MTRR segment
- fixed_mtrr_addr_seg_to_range_index() translates the address to
the index of kvm_mtrr.fixed_ranges[]
They will be used in a later patch
Signed-off-by: Xiao Guangrong guangrong.x
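The fixed-range layout these helpers translate against comes from the Intel SDM: three segments below 1MB with 64K, 16K and 4K granularity, 8 + 16 + 64 = 88 ranges in total. The table and function below are an illustrative sketch, not KVM's actual implementation:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Fixed-MTRR layout per the Intel SDM (sketch, not KVM's table):
 * MTRRfix64K covers 0x00000-0x7ffff in 64K units, MTRRfix16K covers
 * 0x80000-0xbffff in 16K units, MTRRfix4K covers 0xc0000-0xfffff in
 * 4K units; fixed_ranges[] would hold 8 + 16 + 64 = 88 entries.
 */
struct fixed_seg {
	uint64_t start, end;
	unsigned int unit;		/* range granularity in bytes */
	unsigned int first_index;	/* index of first range in segment */
};

static const struct fixed_seg segs[] = {
	{ 0x00000, 0x80000,  0x10000, 0  },	/* 64K units: 8 ranges  */
	{ 0x80000, 0xc0000,  0x4000,  8  },	/* 16K units: 16 ranges */
	{ 0xc0000, 0x100000, 0x1000,  24 },	/* 4K units: 64 ranges  */
};

/* Translate an address to the index of its fixed range, or -1 if the
 * address lies above the fixed-MTRR area. */
static int fixed_mtrr_range_index(uint64_t addr)
{
	for (int s = 0; s < 3; s++)
		if (addr >= segs[s].start && addr < segs[s].end)
			return segs[s].first_index +
			       (int)((addr - segs[s].start) / segs[s].unit);
	return -1;
}
```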
mtrr_for_each_mem_type() is ready now, use it to simplify
kvm_mtrr_get_guest_memory_type()
Signed-off-by: Xiao Guangrong guangrong.x...@linux.intel.com
---
arch/x86/kvm/mtrr.c | 61 +
1 file changed, 15 insertions(+), 46 deletions(-)
diff
union initializer
...
gcc-4.4.4 (at least) has issues when using anonymous unions in
initializers.
Fixes: edc90b7dc4ceef6 ("KVM: MMU: fix SMAP virtualization")
Cc: Xiao Guangrong
Cc: Paolo Bonzini
Signed-off-by: Andrew Morton
Should be found at -mm tree.
On 05/18/2015 09:48 PM, Paolo Bonzini wrote:
@@ -5473,6 +5473,7 @@ void kvm_set_hflags(struct kvm_vcpu *vcpu, unsigned
emul_flags)
}
vcpu->arch.hflags = emul_flags;
+ kvm_mmu_reset_context(vcpu);
reset root table only if SMM flag is changed?
On 05/18/2015 09:48 PM, Paolo Bonzini wrote:
This is always available, and we can use the role to look up the right
memslots array.
How about pass role instead of sp so that it can be used if no sp is
available?
ate->enabled (bit 10 of IA32_MTRR_DEF_TYPE)
3: if MTRR is disabled, UC is applied to all of physical memory rather
than mtrr_state->def_type
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 14 ++
1 file changed, 6 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kv
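Point 3 above condenses to a one-line decision. The sketch below is illustrative, not the patched function; it only captures the SDM rule that the E flag (bit 11 of IA32_MTRR_DEF_TYPE) being clear means UC everywhere, with def_type applying only while MTRRs are enabled:

```c
#include <assert.h>
#include <stdbool.h>

#define MTRR_TYPE_UNCACHABLE 0

/*
 * Condensed form of the fix (illustrative, not the patched function):
 * when the MTRR enable bit (bit 11 of IA32_MTRR_DEF_TYPE, the E flag)
 * is clear, the SDM applies UC to all of physical memory; def_type is
 * only the fallback while MTRRs are enabled and no range matches.
 */
static int default_mem_type(bool mtrrs_enabled, int def_type)
{
	return mtrrs_enabled ? def_type : MTRR_TYPE_UNCACHABLE;
}
```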
It is used to walk all the sptes on the rmap, cleaning up the
code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 53
arch/x86/kvm/mmu_audit.c | 4 +---
2 files changed, 19 insertions(+), 38 deletions(-)
diff --git a/arch/x86/kvm
It is used to clean up the code. Thanks to Paolo Bonzini for the
suggestion
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 24 +---
arch/x86/kvm/mmu.h | 1 +
2 files changed, 10 insertions(+), 15 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index
slot_handle_level and its helper functions are ready now, use them to
clean up the code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 128 +++--
1 file changed, 16 insertions(+), 112 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
cleanups to make the current MMU code cleaner and help
us fix the bug more easily.
Xiao Guangrong (10):
KVM: MMU: fix decoding cache type from MTRR
KVM: MMU: introduce for_each_rmap_spte()
KVM: MMU: introduce PT_MAX_HUGEPAGE_LEVEL
KVM: MMU: introduce for_each_slot_rmap_range
KVM: MMU:
It is used to zap all the rmaps of the specified gfn range and will
be used by the later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 24
arch/x86/kvm/mmu.h | 1 +
2 files changed, 25 insertions(+)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
Split kvm_unmap_rmapp and introduce kvm_zap_rmapp which will be used in the
later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c059822..a990ad9
There are several places that walk all rmaps for the memslot, so
introduce common functions to clean up the code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 69 ++
1 file changed, 69 insertions(+)
diff --git a/arch/x86/kvm/mmu.c b
It is used to abstract the code from kvm_handle_hva_range, and it will be
used by a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 97 +-
1 file changed, 75 insertions(+), 22 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
is set on the last spte, which means we should sync
the last sptes when the MTRR is changed
This patch fixes the issue by dropping all the sptes in the gfn range that
is being updated by the MTRR
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 59
CR0.CD and CR0.NW are not used by the shadow page table, so there is no
need to adjust the mmu when these two bits are changed
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index bbe184f..457b908
On 05/12/2015 04:22 PM, Paolo Bonzini wrote:
On 12/05/2015 04:32, Xiao Guangrong wrote:
+#define for_each_slot_rmap_range(_slot_, _start_level_, _end_level_, \
+ _start_gfn, _end_gfn, _iter_)\
+ for (slot_rmap_walk_init(_iter_, _slot_
On 05/12/2015 04:22 PM, Paolo Bonzini wrote:
On 12/05/2015 04:32, Xiao Guangrong wrote:
+ if (iterator.rmap)
+ flush |= fn(kvm, iterator.rmap);
+
+ if (need_resched() || spin_needbreak(&kvm->mmu_lock)) {
+ if (fl
On 05/12/2015 04:08 PM, Paolo Bonzini wrote:
On 12/05/2015 04:32, Xiao Guangrong wrote:
- while ((sptep = rmap_get_first(*rmapp, &iter))) {
- BUG_ON(!(*sptep & PT_PRESENT_MASK));
+restart:
+ for_each_rmap_spte(rmapp, &iter, sptep) {
rmap_pr
Hi Paolo,
Could you please apply this patch to kvm-unit-tests if it looks good to you?
Thanks!
On 05/07/2015 04:44 PM, Xiao Guangrong wrote:
This test case is used to reproduce the bug that:
KVM may turn a user page into a kernel page when the kernel writes to a readonly
user page with CR0.WP = 1
slot_handle_level and its helper functions are ready now, use them to
clean up the code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 130 +++--
1 file changed, 16 insertions(+), 114 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
is set on the last spte, which means we should sync
the last sptes when the MTRR is changed
This patch fixes the issue by dropping all the sptes in the gfn range that
is being updated by the MTRR
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 59
Split kvm_unmap_rmapp and introduce kvm_zap_rmapp which will be used in the
later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index fae349a..10d5e03
It is used to walk all the sptes on the rmap, cleaning up the
code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 63 +++-
arch/x86/kvm/mmu_audit.c | 4 +--
2 files changed, 26 insertions(+), 41 deletions(-)
diff --git a/arch/x86/kvm
It is used to zap all the rmaps of the specified gfn range and will
be used by the later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 25 +
arch/x86/kvm/mmu.h | 1 +
2 files changed, 26 insertions(+)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
CR0.CD and CR0.NW are not used by the shadow page table, so there is no
need to adjust the mmu when these two bits are changed
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
index a527dd0..a82d26f
It is used to abstract the code from kvm_handle_hva_range, and it will be
used by a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 97 +-
1 file changed, 75 insertions(+), 22 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch
he policy
Also, these are some cleanups to make the current MMU code cleaner and help
us fix the bug more easily.
Xiao Guangrong (9):
KVM: MMU: fix decoding cache type from MTRR
KVM: MMU: introduce for_each_rmap_spte()
KVM: MMU: introduce for_each_slot_rmap_range
KVM: MMU:
CR4.SMAP is updated
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/mmu.c | 16
arch/x86/kvm/mmu.h | 2 --
arch/x86/kvm/x86.c | 8 +++-
4 files changed, 16 insertions(+), 11 deletions(-)
diff --gi
Document this new role field
Signed-off-by: Xiao Guangrong
---
Documentation/virtual/kvm/mmu.txt | 18 ++
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/Documentation/virtual/kvm/mmu.txt
b/Documentation/virtual/kvm/mmu.txt
index 53838d9..c59bd9b 100644
From: Xiao Guangrong
Date: Mon, 11 May 2015 21:09:15 +0800
Subject: [PATCH] KVM: MMU: fix SMAP virtualization
KVM may turn a user page into a kernel page when the kernel writes to a readonly
user page with CR0.WP = 1. This shadow page entry will be reused after
SMAP is enabled, so that the kernel is allowed
On 05/08/2015 12:53 AM, Paolo Bonzini wrote:
On 30/04/2015 12:24, guangrong.x...@linux.intel.com wrote:
+static void vmx_set_msr_mtrr(struct kvm_vcpu *vcpu, u32 msr)
+{
+ struct mtrr_state_type *mtrr_state = &vcpu->arch.mtrr_state;
+ unsigned char mtrr_enabled = mtrr_state->enabled;
+
On 05/07/2015 08:04 PM, Paolo Bonzini wrote:
On 30/04/2015 12:24, guangrong.x...@linux.intel.com wrote:
From: Xiao Guangrong
There are several places that walk all rmaps for the memslot, so
introduce common functions to clean up the code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm
On 04/30/2015 07:36 PM, Paolo Bonzini wrote:
smep_andnot_wp is initialized in kvm_init_shadow_mmu and shadow pages
should not be reused for different values of it. Thus, it has to be
added to the mask in kvm_mmu_pte_write.
Good catch!
Reviewed-by: Xiao Guangrong
On 05/07/2015 05:32 PM, Paolo Bonzini wrote:
On 07/05/2015 10:20, Xiao Guangrong wrote:
Current permission check assumes that RSVD bit in PFEC is always zero,
however, it is not true since MMIO #PF will use it to quickly identify
MMIO access
Fix it by clearing the bit if walking guest page
This test case is used to reproduce the bug that:
KVM may turn a user page into a kernel page when the kernel writes to a readonly
user page with CR0.WP = 1. This shadow page entry will be reused after
SMAP is enabled, so that the kernel is allowed to access this user page
Signed-off-by: Xiao Guangrong
---
x86
On 05/07/2015 04:30 PM, Xiao Guangrong wrote:
From: root
Sorry for the noise... I misconfigured this git repo. Please ignore
this patch, I will repost it.
From: root
This test case is used to reproduce the bug that:
KVM may turn a user page into a kernel page when the kernel writes to a readonly
user page with CR0.WP = 1. This shadow page entry will be reused after
SMAP is enabled, so that the kernel is allowed to access this user page
Signed-off-by: Xiao
The current permission check assumes that the RSVD bit in the PFEC is always
zero; however, this is not true, since an MMIO #PF uses it to quickly identify
MMIO accesses
Fix it by clearing the bit when walking the guest page table is needed
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.h | 2 ++
arch
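The fix reduces to masking one bit before the walk. The sketch below is a condensed illustration, not the patched KVM code; the helper name is an assumption, while the RSVD bit position (bit 3 of the page-fault error code) is architectural:

```c
#include <assert.h>
#include <stdint.h>

/* Bit 3 of the x86 page-fault error code is the RSVD bit. */
#define PFERR_RSVD_MASK	(1u << 3)

/*
 * Condensed form of the fix (illustrative): the RSVD bit of the PFEC
 * is reused to fast-path MMIO faults, so it must be cleared before
 * the error code is fed to the guest page-table walk and permission
 * check, which assume it is zero.
 */
static uint32_t pfec_for_guest_walk(uint32_t pfec)
{
	return pfec & ~PFERR_RSVD_MASK;
}
```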
CR4.SMAP is updated
Signed-off-by: Xiao Guangrong
---
arch/x86/include/asm/kvm_host.h | 1 +
arch/x86/kvm/mmu.c | 7 +--
arch/x86/kvm/mmu.h | 2 --
arch/x86/kvm/x86.c | 8 +++-
4 files changed, 9 insertions(+), 9 deletions(-)
diff --git a/arch/x86/inc
Document this new role field
Signed-off-by: Xiao Guangrong
---
Documentation/virtual/kvm/mmu.txt | 18 ++
1 file changed, 14 insertions(+), 4 deletions(-)
diff --git a/Documentation/virtual/kvm/mmu.txt
b/Documentation/virtual/kvm/mmu.txt
index 53838d9..c59bd9b 100644
on that entry
This patchset fixes these bugs; a test case will be posted soon
Xiao Guangrong (3):
KVM: MMU: fix smap permission check
KVM: MMU: fix SMAP virtualization
KVM: MMU: document smap_andnot_wp
Documentation/virtual/kvm/mmu.txt | 18 ++
arch/x86/include/asm
On 05/07/2015 05:42 AM, David Matlack wrote:
On Thu, Apr 30, 2015 at 3:24 AM, wrote:
From: Xiao Guangrong
There are some bugs in current get_mtrr_type();
1: bit 2 of mtrr_state->enabled is corresponding bit 11 of IA32_MTRR_DEF_TYPE
bit 1, not bit 2. (code is correct though)
Oh
Hi David,
Thanks for your review.
On 05/07/2015 05:36 AM, David Matlack wrote:
+static void vmx_set_msr_mtrr(struct kvm_vcpu *vcpu, u32 msr)
+{
+ struct mtrr_state_type *mtrr_state = &vcpu->arch.mtrr_state;
+ unsigned char mtrr_enabled = mtrr_state->enabled;
+ gfn_t start, end,