From: Xiao Guangrong
The current behavior of mmu_spte_update_no_track() does not match
its _no_track() name, as the A/D bits are actually tracked and
returned to the caller.
This patch introduces a real _no_track() function that updates
the spte regardless of the A/D bits and renames the original function
From: Xiao Guangrong
A new flag, KVM_DIRTY_LOG_WITHOUT_WRITE_PROTECT, is introduced, which
indicates that userspace just wants to get a snapshot of the dirty bitmap.
During live migration, after the full snapshot of the dirty bitmap is
fetched from KVM, guest memory can be write protected by calling
KVM_W
From: Xiao Guangrong
The functionality of write protection for all guest memory is ready;
it is time to make it usable by userspace, which is indicated
by KVM_CAP_X86_WRITE_PROTECT_ALL_MEM
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 21 +
include/uapi/
From: Xiao Guangrong
A writable spte cannot be locklessly fixed, so add a WARN_ON()
that triggers if something unexpected happens; this is useful for
tracking whether dirty logging for a writable spte is missed
on the fast path
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 11 +++
From: Xiao Guangrong
It is used to track possible writable sptes on the shadow page:
the bit is set to 1 for sptes that are already writable or can be
locklessly updated to writable on the fast_page_fault path. A
counter for the number of possible writable sptes is also
introduced to
From: Xiao Guangrong
The original idea is from Avi. kvm_mmu_write_protect_all_pages() is
extremely fast at write protecting all the guest memory. Compared with
the ordinary algorithm, which write protects last-level sptes one by
one based on the rmap, it simply updates the generation number to
From: Xiao Guangrong
Changelog in v2:
thanks to Paolo's review, this version disables write-protect-all if
PML is supported
Background
==
The original idea of this patchset is from Avi, who raised it on
the mailing list during my vMMU development some years ago.
This patchset introduces a
From: Xiao Guangrong
mmu_spte_age() is called under the protection of the mmu-lock, so
there is no reason to use mmu_spte_get_lockless()
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 7711953..dc00
From: Xiao Guangrong
mmu_spte_age() is called under the protection of the mmu-lock, so
there is no reason to use mmu_spte_get_lockless()
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f6a74e7..a8b9
From: Xiao Guangrong
The functionality of write protection for all guest memory is ready;
it is time to make it usable by userspace, which is indicated
by KVM_CAP_X86_WRITE_PROTECT_ALL_MEM
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 6 ++
include/uapi/linux/kvm.h | 2
From: Xiao Guangrong
Background
==
The original idea of this patchset is from Avi, who raised it on
the mailing list during my vMMU development some years ago.
This patchset introduces an extremely fast way to write protect
all the guest memory. Compared with the ordinary algorithm, which
w
From: Xiao Guangrong
It is used to zap all the rmaps of the specified gfn range and will
be used by a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 30 ++
arch/x86/kvm/mmu.h | 1 +
2 files changed, 31 insertions(+)
diff --git a/arch/x86/kvm/
From: Xiao Guangrong
Currently, whenever guest MTRR registers are changed, kvm_mmu_reset_context
is called to switch to a new root shadow page table; however, this is useless
since:
1) the cache type is not cached in the shadow page's attributes, so the
original root shadow page will be reused
From: Xiao Guangrong
Several places walk all the rmaps for a memslot, so introduce
common functions to clean up the code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 63 ++
1 file changed, 63 insertions(+)
diff --git
From: Xiao Guangrong
It's used to walk all the sptes on the rmap, which cleans up the
code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 63 +++-
arch/x86/kvm/mmu_audit.c | 4 +--
2 files changed, 26 insertions(+), 41 deletions(-)
diff --
From: Xiao Guangrong
Split kvm_unmap_rmapp and introduce kvm_zap_rmapp, which will be used in a
later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 20
1 file changed, 12 insertions(+), 8 deletions(-)
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
in
From: Xiao Guangrong
It is used to clean up the code shared between kvm_handle_hva_range and
slot_handle_level; it will also be used by a later patch
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 144 -
1 file changed, 99 insertions(+), 45 deletions(-)
From: Xiao Guangrong
There are some MTRR bugs when a legacy IOMMU device is used on Intel CPUs:
- In the current code, whenever guest MTRR registers are changed,
kvm_mmu_reset_context is called to switch to a new root shadow page
table; however, this is useless since:
1) the cache type is not cached
From: Xiao Guangrong
slot_handle_level and its helper functions are ready now; use them to
clean up the code
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/mmu.c | 129 -
1 file changed, 18 insertions(+), 111 deletions(-)
diff --git a/arch/x
From: Xiao Guangrong
CR0.CD and CR0.NW are not used by the shadow page table, so there is
no need to adjust the mmu when these two bits are changed
Signed-off-by: Xiao Guangrong
---
arch/x86/kvm/x86.c | 3 +--
1 file changed, 1 insertion(+), 2 deletions(-)
diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
i
From: Xiao Guangrong
There are some bugs in the current get_mtrr_type():
1: bit 2 of mtrr_state->enabled corresponds to bit 11 of the IA32_MTRR_DEF_TYPE
MSR, which completely controls MTRR enablement; that means all other bits are
ignored if it is cleared
2: the fixed MTRR ranges are controlled by bi