On 07/04/2016 04:45 PM, Xiao Guangrong wrote:
On 07/04/2016 04:41 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 04:19:20PM +0800, Xiao Guangrong wrote:
On 07/04/2016 03:53 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 03:37:35PM +0800, Xiao Guangrong wrote:
On 07/04/2016 03:03 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 02:39:22PM +0800, Xiao Guangrong wrote:
On 07/04/2016 04:14 PM, Paolo Bonzini wrote:
On 04/07/2016 09:59, Xiao Guangrong wrote:
But apart from this, it's much more obvious to consider the refcount.
The x86 MMU code doesn't care if the page is reserved or not;
mmu_set_spte does a kvm_release_pfn_clean, hence it makes sense
On 07/04/2016 03:53 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 03:37:35PM +0800, Xiao Guangrong wrote:
On 07/04/2016 03:03 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 02:39:22PM +0800, Xiao Guangrong wrote:
On 06/30/2016 09:01 PM, Paolo Bonzini wrote:
The vGPU folks would like to trap
On 07/04/2016 03:48 PM, Paolo Bonzini wrote:
On 04/07/2016 09:37, Xiao Guangrong wrote:
It actually is a portion of the physical mmio which is set by vfio mmap.
So i do not think we need to care about its refcount, i.e., we can consider it
as reserved_pfn,
Paolo?
nVidia provided me (offlist
On 07/04/2016 03:38 PM, Paolo Bonzini wrote:
On 04/07/2016 08:39, Xiao Guangrong wrote:
Why is the memory mapped by this mmap() not a portion of MMIO from the
underlying physical device? If it is valid system memory, does this
interface really need to be implemented in vfio? (you at least need
On 07/04/2016 03:03 PM, Neo Jia wrote:
On Mon, Jul 04, 2016 at 02:39:22PM +0800, Xiao Guangrong wrote:
On 06/30/2016 09:01 PM, Paolo Bonzini wrote:
The vGPU folks would like to trap the first access to a BAR by setting
vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault
On 06/30/2016 09:01 PM, Paolo Bonzini wrote:
The vGPU folks would like to trap the first access to a BAR by setting
vm_ops on the VMAs produced by mmap-ing a VFIO device. The fault handler
then can use remap_pfn_range to place some non-reserved pages in the VMA.
Why does it require fetching
On 06/29/2016 04:18 PM, Paolo Bonzini wrote:
On 29/06/2016 05:17, Xiao Guangrong wrote:
+++ b/arch/x86/kvm/mmu.c
@@ -2516,13 +2516,17 @@ static int set_spte(struct kvm_vcpu *vcpu, u64 *sptep,
		    gfn_t gfn, kvm_pfn_t pfn, bool speculative,
		    bool can_unsync, bool
On 06/28/2016 12:32 PM, Bandan Das wrote:
To support execute only mappings on behalf of L1
hypervisors, we teach set_spte() to honor L1's valid XWR
bits. This is done only if the host supports EPT execute-only. Reuse
ACC_USER_MASK to signify if the L1 hypervisor has the R bit
set
Signed-off-by: Bandan
On 06/28/2016 12:32 PM, Bandan Das wrote:
In reset_tdp_shadow_zero_bits_mask, we always pass false
when initializing the reserved bits. By initializing with the
correct value of ept exec only, the host can correctly
identify if the guest pte is valid. Note that
kvm_init_shadow_ept_mmu()
On 06/29/2016 04:49 AM, Paolo Bonzini wrote:
On 28/06/2016 22:37, Bandan Das wrote:
Paolo Bonzini writes:
On 28/06/2016 19:33, Bandan Das wrote:
 static int is_shadow_present_pte(u64 pte)
 {
-	return pte & PT_PRESENT_MASK && !is_mmio_spte(pte);
+	return pte &
On 04/06/2016 04:56 PM, Paolo Bonzini wrote:
On 25/03/2016 14:19, Xiao Guangrong wrote:
@@ -193,11 +193,11 @@ static inline u8 permission_fault(struct kvm_vcpu *vcpu, struct kvm_mmu *mmu,
((pte_access & PT_USER_MASK) << (PFERR_RSVD_BIT -
PT_USER_SHIFT));
On 03/30/2016 02:39 PM, Xiao Guangrong wrote:
On 03/30/2016 02:36 PM, Paolo Bonzini wrote:
On 30/03/2016 03:56, Xiao Guangrong wrote:
x86/access.flat is currently using the "other" definition, i.e., PFEC.PK
is only set if W=1 or CR0.WP=0 && PFEC.U=0 or PFEC.W=0. Can yo
On 03/30/2016 02:36 PM, Paolo Bonzini wrote:
On 30/03/2016 03:56, Xiao Guangrong wrote:
x86/access.flat is currently using the "other" definition, i.e., PFEC.PK
is only set if W=1 or CR0.WP=0 && PFEC.U=0 or PFEC.W=0. Can you use it
(with ept=1 of course) to check
On 03/30/2016 04:09 AM, Paolo Bonzini wrote:
On 29/03/2016 19:43, Xiao Guangrong wrote:
Based on the SDM:
PK flag (bit 5).
This flag is 1 if (1) IA32_EFER.LMA = CR4.PKE = 1; (2) the access
causing the page-fault exception was a data access; (3) the linear
address was a user-mode address
On 03/25/2016 10:21 PM, Paolo Bonzini wrote:
On 25/03/2016 14:19, Xiao Guangrong wrote:
 	WARN_ON(pfec & (PFERR_PK_MASK | PFERR_RSVD_MASK));
-	pfec |= PFERR_PRESENT_MASK;
+	errcode = PFERR_PRESENT_MASK;
 	if (unlikely(mmu->pkru_mask)) {
 		u32 pk
On 03/25/2016 09:56 PM, Paolo Bonzini wrote:
On 25/03/2016 14:48, Xiao Guangrong wrote:
This patch and the previous one are basically redoing commit
0a47cd85833e ("KVM: MMU: Fix ubsan warnings", 2016-03-04). While you
find your version easier to understand, I of course find m
On 03/25/2016 09:45 PM, Paolo Bonzini wrote:
On 25/03/2016 14:19, Xiao Guangrong wrote:
Currently only PT64_ROOT_LEVEL - 1 levels are used, one additional entry
in .parent[] is used as a sentinel, the additional entry in .idx[] is
purely wasted
This patch reduces its size and sets
On 03/25/2016 09:35 PM, Paolo Bonzini wrote:
On 25/03/2016 14:19, Xiao Guangrong wrote:
kvm-unit-tests complained that the PFEC is not set properly, e.g.:
test pte.rw pte.d pte.nx pde.p pde.rw pde.pse user fetch: FAIL: error code 15
expected 5
Dump mapping: address: 0x1234
--L4
Currently only PT64_ROOT_LEVEL - 1 levels are used, one additional entry
in .parent[] is used as a sentinel, the additional entry in .idx[] is
purely wasted
This patch reduces its size and sets the sentinel on the upper level of
the place where we start from
Signed-off-by: Xiao Guangrong
This patch simplifies it by saving the sp and its index to kvm_mmu_pages,
then it is much easier to understand the operations on its index
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 40 ++--
1 file changed, 22 insertions(+), 18 deletions(-)
diff --git
to guest is copied from the
PFEC triggered by shadow page table
This patch fixes it and makes the logic of updating errcode cleaner
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.h | 8
arch/x86/kvm/paging_tmpl.h | 2 +-
2 files changed, 5 insertions(+), 5 deletions
-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 28
1 file changed, 12 insertions(+), 16 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c396e8b..4d66a9e 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1906,18 +1906,17 @@ static void
[kvm]
[ 132.179372] [] handle_exception+0x1b2/0x430 [kvm_intel]
[ 132.187072] [] vmx_handle_exit+0x1e1/0xc50 [kvm_intel]
...
Thank you for fixing it, Paolo!
Reviewed-by: Xiao Guangrong <guangrong.x...@linux.intel.com>
On 03/16/2016 03:32 AM, Paolo Bonzini wrote:
On 15/03/2016 19:27, Andy Lutomirski wrote:
On Mon, Mar 14, 2016 at 6:17 AM, Paolo Bonzini wrote:
On 11/03/2016 22:33, David Matlack wrote:
Is this better than just always keeping the host's XCR0 loaded outside
if the KVM interrupts-disabled
On 03/16/2016 03:01 AM, David Matlack wrote:
On Mon, Mar 14, 2016 at 12:46 AM, Xiao Guangrong
wrote:
On 03/12/2016 04:47 AM, David Matlack wrote:
I have not been able to trigger this bug on Linux 4.3, and suspect
it is due to this commit from Linux 4.2:
653f52c kvm,x86: load guest FPU
On 03/12/2016 04:47 AM, David Matlack wrote:
I have not been able to trigger this bug on Linux 4.3, and suspect
it is due to this commit from Linux 4.2:
653f52c kvm,x86: load guest FPU context more eagerly
With this commit, as long as the host is using eagerfpu, the guest's
fpu is always
On 03/11/2016 01:07 AM, Paolo Bonzini wrote:
On 09/03/2016 08:18, Lan Tianyu wrote:
How about the following comments.
Log for kvm_mmu_commit_zap_page()
/*
* We need to make sure everyone sees our modifications to
* the page tables and see changes to vcpu->mode here.
On 03/11/2016 12:04 AM, Paolo Bonzini wrote:
On 10/03/2016 16:45, Xiao Guangrong wrote:
Compared to smp_load_acquire(), smp_mb() adds an ordering between stores
and loads.
Here, the ordering is load-store, hence...
Yes, this is why i put smp_mb() in the code. :)
Here is a table
On 03/10/2016 11:31 PM, Paolo Bonzini wrote:
On 10/03/2016 16:26, Paolo Bonzini wrote:
Compared to smp_load_acquire(), smp_mb() adds an ordering between stores
and loads.
Here, the ordering is load-store, hence...
Yes, this is why i put smp_mb() in the code. :)
Signed-off-by: Xiao Guangrong
Signed-off-by: Marcelo Tosatti
Unfortunately that patch added a bad memory barrier: 1) it lacks a
comment; 2) it lacks obvious pairing; 3) it is an smp_mb() after a read,
so it's not even obvious that this memory barrier has to do with the
immediately precedin
now it only adds the P bit to the input parameter "pfec", but PKU
can change that.
Yep, i got the same idea when i reviewed the pkey patchset. This patch
looks good to me.
Reviewed-by: Xiao Guangrong <guangrong.x...@linux.intel.com>
On 03/08/2016 07:45 PM, Paolo Bonzini wrote:
For the next patch, we will want to filter PFERR_FETCH_MASK away early,
and not pass it to permission_fault if neither NX nor SMEP are enabled.
Prepare for the change.
Why is it needed? It is much easier to drop PFEC.F in
On 03/08/2016 07:44 PM, Paolo Bonzini wrote:
Patch 1 ensures that all aspects of MPX are disabled when eager FPU
is disabled on the host. Patch 2 is just a cleanup.
It looks good to me.
Reviewed-by: Xiao Guangrong
Now, more and more features depend on eager xsave, e.g., fpu, mpx
On 03/10/2016 06:09 PM, Paolo Bonzini wrote:
On 10/03/2016 09:27, Xiao Guangrong wrote:
+	if (!enable_ept) {
+		guest_efer |= EFER_NX;
+		ignore_bits |= EFER_NX;
Update ignore_bits is not necessary i think.
More precisely, ignore_bits is only needed if guest EFER.NX=0
On 03/08/2016 07:44 PM, Paolo Bonzini wrote:
Yes, all of these are needed. :) This is admittedly a bit odd, but
kvm-unit-tests access.flat tests this if you run it with "-cpu host"
and of course ept=0.
KVM handles supervisor writes of a pte.u=0/pte.w=0/CR0.WP=0 page by
setting U=0 and W=1 in
it only looks at the guest's EFER.NX bit. Teach it
that smep_andnot_wp will also use the NX bit of SPTEs.
Cc: sta...@vger.kernel.org
Cc: Xiao Guangrong
As a redhat guy i am so proud. :)
Fixes: c258b62b264fdc469b6d3610a907708068145e3b
Thank you for fixing it, Paolo!
Reviewed-by: Xiao
a
separate patch for easier application to stable kernels.
Cc: sta...@vger.kernel.org
Cc: Xiao Guangrong
Cc: Andy Lutomirski
Fixes: f6577a5fa15d82217ca73c74cd2dcbc0f6c781dd
Signed-off-by: Paolo Bonzini
---
Documentation/virtual/kvm/mmu.txt | 3 ++-
arch/x86/kvm/vmx.c
in an even simpler way.
Nice work!
Reviewed-by: Xiao Guangrong <guangrong.x...@linux.intel.com>
On 03/04/2016 04:04 PM, Paolo Bonzini wrote:
On 04/03/2016 02:35, Lan Tianyu wrote:
The following kvm_flush_remote_tlbs() will call smp_mb() inside and so
remove smp_mb() here.
Signed-off-by: Lan Tianyu
---
arch/x86/kvm/mmu.c | 6 --
1 file changed, 6 deletions(-)
diff --git
_DR_EXITING);
}
static void vmx_set_dr7(struct kvm_vcpu *vcpu, unsigned long val)
Reviewed-by: Xiao Guangrong <guangrong.x...@linux.intel.com>
On 02/26/2016 07:46 PM, Paolo Bonzini wrote:
Commit 172b2386ed16 ("KVM: x86: fix missed hardware breakpoints",
2016-02-10) worked around a case where the debug registers are not loaded
correctly on preemption and on the first entry to KVM_RUN.
However, Xiao Guangrong pointed out tha
On 02/26/2016 07:28 PM, Nadav Amit wrote:
Xiao Guangrong wrote:
On 02/19/2016 06:56 PM, Paolo Bonzini wrote:
Sometimes when setting a breakpoint a process doesn't stop on it.
This is because the debug registers are not loaded correctly on
VCPU load.
diff --git a/arch/x86/kvm/x86.c b/arch
On 02/19/2016 06:56 PM, Paolo Bonzini wrote:
Sometimes when setting a breakpoint a process doesn't stop on it.
This is because the debug registers are not loaded correctly on
VCPU load.
The following simple reproducer from Oleg Nesterov tries using debug
registers in two threads. To see the
On 02/25/2016 04:49 PM, Paolo Bonzini wrote:
On 25/02/2016 08:35, Xiao Guangrong wrote:
This may release the mmu_lock before committing the zapping.
Is it safe? If so, we may want to see the reason in the changelog.
It is unsafe indeed, please do not do it.
Can you explain why
[ 168.792773] [] SyS_ioctl+0x79/0x90
[ 168.792777] [] entry_SYSCALL_64_fastpath+0x23/0xc1
[ 168.792780]
Signed-off-by: Mike Krinkin <krinkin@gmail.com>
Reviewed-by: Xiao Guangrong <guangrong.x...@linux.intel.com>
On 02/24/2016 09:17 PM, Paolo Bonzini wrote:
This series started from looking at mmu_unsync_walk for the ubsan thread.
Patches 1 and 2 are the result of the discussions in that thread.
Patches 3 to 9 do more cleanups in __kvm_sync_page and its callers.
Among other changes, it removes
On 02/25/2016 10:15 AM, Takuya Yoshikawa wrote:
On 2016/02/24 22:17, Paolo Bonzini wrote:
Move the call to kvm_mmu_flush_or_zap outside the loop.
Signed-off-by: Paolo Bonzini
---
arch/x86/kvm/mmu.c | 9 ++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git
On 02/24/2016 09:17 PM, Paolo Bonzini wrote:
kvm_mmu_get_page is the only caller of kvm_sync_page_transient
and kvm_sync_pages. Moving the handling of the invalid_list there
removes the need for the underdocumented kvm_sync_page_transient
function.
Signed-off-by: Paolo Bonzini
---
Split rmap_write_protect() and introduce the function to abstract the write
protection based on the slot
This function will be used in a later patch
Reviewed-by: Paolo Bonzini <pbonz...@redhat.com>
Signed-off-by: Xiao Guangrong <guangrong.x...@linux.intel.com>
---
arch/x86/kv