[PATCH V6 1/3] x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available

2019-02-27 Thread lantianyu1986
From: Lan Tianyu 

Hyper-V doesn't provide irq remapping for the IO-APIC. To enable x2apic,
set the x2apic destination mode to physical mode when x2apic is available.
The Hyper-V IOMMU driver makes sure that CPUs assigned IO-APIC irqs have
8-bit APIC IDs.

Reviewed-by: Thomas Gleixner 
Reviewed-by: Michael Kelley 
Signed-off-by: Lan Tianyu 
---
Change since v5:
   - Fix compile error due to x2apic_phys

Change since v2:
   - Fix compile error due to x2apic_phys
   - Fix comment indent
Change since v1:
   - Remove redundant extern for x2apic_phys
---
 arch/x86/kernel/cpu/mshyperv.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index e81a2db..3fa238a 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -328,6 +328,18 @@ static void __init ms_hyperv_init_platform(void)
 # ifdef CONFIG_SMP
smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
 # endif
+
+   /*
+* Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
+* set x2apic destination mode to physical mode when x2apic is available
+* and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs
+* have 8-bit APIC id.
+*/
+# ifdef CONFIG_X86_X2APIC
+   if (x2apic_supported())
+   x2apic_phys = 1;
+# endif
+
 #endif
 }
 
-- 
2.7.4
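
As background for the 8-bit requirement above: in physical destination
mode the IO-APIC redirection table only carries an 8-bit destination
APIC ID, so every CPU that may receive an IO-APIC irq must have an APIC
ID below 256. A minimal sketch of that check; cpu_reachable_by_ioapic()
is a hypothetical helper, not part of the patch:

/* Sketch only: true if this CPU can be targeted by the IO-APIC in
 * physical destination mode.
 */
static bool cpu_reachable_by_ioapic(int cpu)
{
	return cpu_physical_id(cpu) < 256;
}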



[Resend PATCH V5 1/3] x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available

2019-02-26 Thread lantianyu1986
From: Lan Tianyu 

Hyper-V doesn't provide irq remapping for the IO-APIC. To enable x2apic,
set the x2apic destination mode to physical mode when x2apic is available.
The Hyper-V IOMMU driver makes sure that CPUs assigned IO-APIC irqs have
8-bit APIC IDs.

Reviewed-by: Thomas Gleixner 
Reviewed-by: Michael Kelley 
Signed-off-by: Lan Tianyu 
---
Change since v2:
   - Fix compile error due to x2apic_phys
   - Fix comment indent
Change since v1:
   - Remove redundant extern for x2apic_phys

---
 arch/x86/kernel/cpu/mshyperv.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index e81a2db..0c29e4e 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -328,6 +328,16 @@ static void __init ms_hyperv_init_platform(void)
 # ifdef CONFIG_SMP
smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
 # endif
+
+   /*
+* Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
+* set x2apic destination mode to physical mode when x2apic is available
+* and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs
+* have 8-bit APIC id.
+*/
+   if (IS_ENABLED(CONFIG_X86_X2APIC) && x2apic_supported())
+   x2apic_phys = 1;
+
 #endif
 }
 
-- 
2.7.4



[Resend PATCH V5 0/3] x86/Hyper-V/IOMMU: Add Hyper-V IOMMU driver to support x2apic mode

2019-02-26 Thread lantianyu1986
From: Lan Tianyu 

On bare metal, enabling X2APIC mode requires an interrupt remapping
function to deliver irqs to CPUs with 32-bit APIC IDs. Hyper-V doesn't
provide an interrupt remapping function so far, but the Hyper-V MSI
protocol already supports delivering interrupts to CPUs whose virtual
processor index is greater than 255. IO-APIC interrupts still have the
8-bit APIC ID limitation.

This patchset adds a stub Hyper-V IOMMU driver so that X2APIC mode can be
enabled in Hyper-V Linux guests. The driver reports X2APIC interrupt
remapping capability when X2APIC mode is available. PATCH 1 sets the
X2APIC destination mode to physical when X2APIC is available. The Hyper-V
IOMMU driver scans CPUs 0~255 and adds a CPU to the IO-APIC max-affinity
cpumask if its APIC ID fits in 8 bits. The driver creates a Hyper-V irq
domain to limit IO-APIC interrupt affinity and make sure CPUs assigned
IO-APIC interrupts fall within that max-affinity cpumask.

Lan Tianyu (3):
  x86/Hyper-V: Set x2apic destination mode to physical when x2apic is   
 available
  HYPERV/IOMMU: Add Hyper-V stub IOMMU driver
  MAINTAINERS: Add Hyper-V IOMMU driver into Hyper-V CORE AND DRIVERS
scope

 MAINTAINERS|   1 +
 arch/x86/kernel/cpu/mshyperv.c |  10 +++
 drivers/iommu/Kconfig  |   9 ++
 drivers/iommu/Makefile |   1 +
 drivers/iommu/hyperv-iommu.c   | 194 +
 drivers/iommu/irq_remapping.c  |   3 +
 drivers/iommu/irq_remapping.h  |   1 +
 7 files changed, 219 insertions(+)
 create mode 100644 drivers/iommu/hyperv-iommu.c

-- 
2.7.4
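
A rough sketch of the CPU scan described in this cover letter, under the
assumption that the driver records eligible CPUs in a cpumask;
ioapic_max_cpumask and hv_scan_ioapic_cpus() are illustrative names, not
necessarily the driver's own:

#include <linux/cpumask.h>
#include <asm/smp.h>

static cpumask_t ioapic_max_cpumask;

/* Collect every possible CPU whose APIC ID fits in 8 bits, i.e. the
 * CPUs that IO-APIC interrupts may legitimately target.
 */
static void __init hv_scan_ioapic_cpus(void)
{
	int cpu;

	for_each_possible_cpu(cpu) {
		if (cpu_physical_id(cpu) < 256)
			cpumask_set_cpu(cpu, &ioapic_max_cpumask);
	}
}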



[PATCH V2] x86/Hyper-V: Fix definition HV_MAX_FLUSH_REP_COUNT

2019-02-25 Thread lantianyu1986
From: Lan Tianyu 

The max flush rep count of the HvFlushGuestPhysicalAddressList hypercall
is equal to the number of entries of union hv_gpa_page_range that can be
populated into the input parameter page. The original code lacks
parentheses around PAGE_SIZE - 2 * sizeof(u64), so operator precedence
reduces the macro to PAGE_SIZE - 2. This patch fixes it.

Cc: 
Fixes: cc4edae4b924 ("x86/hyper-v: Add HvFlushGuestAddressList hypercall support")
Signed-off-by: Lan Tianyu 
---
Change since v1
- Update change log

 arch/x86/include/asm/hyperv-tlfs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 705dafc2d11a..2bdbbbcfa393 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -841,7 +841,7 @@ union hv_gpa_page_range {
  * count is equal with how many entries of union hv_gpa_page_range can
  * be populated into the input parameter page.
  */
-#define HV_MAX_FLUSH_REP_COUNT (PAGE_SIZE - 2 * sizeof(u64) /  \
+#define HV_MAX_FLUSH_REP_COUNT ((PAGE_SIZE - 2 * sizeof(u64)) /\
sizeof(union hv_gpa_page_range))
 
 struct hv_guest_mapping_flush_list {
-- 
2.14.4
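
To see why the parentheses matter, work the arithmetic assuming the
usual 4 KiB PAGE_SIZE and the 8-byte union hv_gpa_page_range (both
stated here as assumptions, for illustration only):

/* Correct, with parentheses:
 *   (PAGE_SIZE - 2 * sizeof(u64)) / sizeof(union hv_gpa_page_range)
 *   = (4096 - 16) / 8 = 510 entries
 *
 * Broken, without parentheses: * and / bind tighter than -, so
 *   PAGE_SIZE - 2 * sizeof(u64) / sizeof(union hv_gpa_page_range)
 *   = 4096 - (16 / 8) = 4094 entries
 * which lets the fill loop run far past the end of the one-page
 * hypercall input buffer.
 */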



[Update PATCH] x86/Hyper-V: Fix definition HV_MAX_FLUSH_REP_COUNT

2019-02-25 Thread lantianyu1986
From: Lan Tianyu 

The max flush rep count of the HvFlushGuestPhysicalAddressList hypercall
is equal to the number of entries of union hv_gpa_page_range that can be
populated into the input parameter page. The original code lacks
parentheses around PAGE_SIZE - 2 * sizeof(u64), so operator precedence
reduces the macro to PAGE_SIZE - 2. This patch fixes it.

Cc: 
Fixes: cc4edae4b924 ("x86/hyper-v: Add HvFlushGuestAddressList hypercall support")
Signed-off-by: Lan Tianyu 
---
 arch/x86/include/asm/hyperv-tlfs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 705dafc2d11a..2bdbbbcfa393 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -841,7 +841,7 @@ union hv_gpa_page_range {
  * count is equal with how many entries of union hv_gpa_page_range can
  * be populated into the input parameter page.
  */
-#define HV_MAX_FLUSH_REP_COUNT (PAGE_SIZE - 2 * sizeof(u64) /  \
+#define HV_MAX_FLUSH_REP_COUNT ((PAGE_SIZE - 2 * sizeof(u64)) /\
sizeof(union hv_gpa_page_range))
 
 struct hv_guest_mapping_flush_list {
-- 
2.14.4



[PATCH V3 00/10] X86/KVM/Hyper-V: Add HV ept tlb range list flush support in KVM

2019-02-22 Thread lantianyu1986
From: Lan Tianyu 

This patchset introduces hv ept tlb range list flush function support
in the KVM MMU component. EPT tlbs for several address ranges can be
flushed via a single hypercall, and the new list flush function is used
in kvm_mmu_commit_zap_page() and FNAME(sync_page). This patchset also
adds hv ept tlb range flush support in more KVM MMU functions.

This patchset is based on the fix patch "x86/Hyper-V: Fix definition HV_MAX_FLUSH_REP_COUNT"
(https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1939455.html)

Change since v2:
1) Fix calculation of flush pages in the kvm_fill_hv_flush_list_func()
2) Change the logic of setting/clearing last_level flag

Change since v1:
1) Make flush list as a hlist instead of list in order to 
keep struct kvm_mmu_page size.
2) Add last_level flag in the struct kvm_mmu_page instead
of spte pointer
3) Move tlb flush from kvm_mmu_notifier_clear_flush_young() to 
kvm_age_hva()
4) Use range flush in the kvm_vm_ioctl_get/clear_dirty_log()


Lan Tianyu (10):
  X86/Hyper-V: Add parameter offset for
hyperv_fill_flush_guest_mapping_list()
  KVM/VMX: Fill range list in kvm_fill_hv_flush_list_func()
  KVM/MMU: Introduce tlb flush with range list
  KVM/MMU: Use range flush in sync_page()
  KVM/MMU: Flush tlb directly in the kvm_mmu_slot_gfn_write_protect()
  KVM: Add kvm_get_memslot() to get memslot via slot id
  KVM: Use tlb range flush in the kvm_vm_ioctl_get/clear_dirty_log()
  KVM: Add flush parameter for kvm_age_hva()
  KVM/MMU: Use tlb range flush in the kvm_age_hva()
  KVM/MMU: Add last_level flag in the struct mmu_spte_page

 arch/arm/include/asm/kvm_host.h |  3 +-
 arch/arm64/include/asm/kvm_host.h   |  3 +-
 arch/mips/include/asm/kvm_host.h|  3 +-
 arch/mips/kvm/mmu.c | 11 ++--
 arch/powerpc/include/asm/kvm_host.h |  3 +-
 arch/powerpc/kvm/book3s.c   | 10 +--
 arch/powerpc/kvm/e500_mmu_host.c|  3 +-
 arch/x86/hyperv/nested.c|  4 +--
 arch/x86/include/asm/kvm_host.h | 11 +++-
 arch/x86/include/asm/mshyperv.h |  2 +-
 arch/x86/kvm/mmu.c  | 55 ++---
 arch/x86/kvm/mmu.h  |  7 +
 arch/x86/kvm/paging_tmpl.h  |  5 ++--
 arch/x86/kvm/vmx/vmx.c  | 18 ++--
 arch/x86/kvm/x86.c  | 16 ---
 include/linux/kvm_host.h|  1 +
 virt/kvm/arm/mmu.c  | 13 +++--
 virt/kvm/kvm_main.c | 51 ++
 18 files changed, 156 insertions(+), 63 deletions(-)

-- 
2.14.4
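
A condensed sketch of the fallback pattern this series relies on: try
the ranged flush first and degrade to a full flush on any failure, for
example when the ranges cannot fit in one hypercall page. The function
name is illustrative:

static void kvm_flush_range_or_all(struct kvm *kvm,
				   struct kvm_tlb_range *range)
{
	int ret = -EOPNOTSUPP;

	if (range && kvm_x86_ops->tlb_remote_flush_with_range)
		ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);

	/* Any failure degrades to flushing the whole address space. */
	if (ret)
		kvm_flush_remote_tlbs(kvm);
}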



[PATCH V3 1/10] X86/Hyper-V: Add parameter offset for hyperv_fill_flush_guest_mapping_list()

2019-02-22 Thread lantianyu1986
From: Lan Tianyu 

Add an offset parameter to specify the start position at which flush
ranges are added to the guest address list of struct
hv_guest_mapping_flush_list.

Signed-off-by: Lan Tianyu 
---
 arch/x86/hyperv/nested.c| 4 ++--
 arch/x86/include/asm/mshyperv.h | 2 +-
 arch/x86/kvm/vmx/vmx.c  | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index dd0a843f766d..96f8bac7476d 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -58,11 +58,11 @@ EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
 
 int hyperv_fill_flush_guest_mapping_list(
struct hv_guest_mapping_flush_list *flush,
-   u64 start_gfn, u64 pages)
+   int offset, u64 start_gfn, u64 pages)
 {
u64 cur = start_gfn;
u64 additional_pages;
-   int gpa_n = 0;
+   int gpa_n = offset;
 
do {
/*
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index cc60e617931c..d6be685ab6b0 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -357,7 +357,7 @@ int hyperv_flush_guest_mapping_range(u64 as,
hyperv_fill_flush_list_func fill_func, void *data);
 int hyperv_fill_flush_guest_mapping_list(
struct hv_guest_mapping_flush_list *flush,
-   u64 start_gfn, u64 end_gfn);
+   int offset, u64 start_gfn, u64 end_gfn);
 
 #ifdef CONFIG_X86_64
 void hv_apic_init(void);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 4950bb20e06a..77b5379e3655 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -433,7 +433,7 @@ static int kvm_fill_hv_flush_list_func(struct hv_guest_mapping_flush_list *flush
 {
struct kvm_tlb_range *range = data;
 
-   return hyperv_fill_flush_guest_mapping_list(flush, range->start_gfn,
+   return hyperv_fill_flush_guest_mapping_list(flush, 0, range->start_gfn,
range->pages);
 }
 
-- 
2.14.4
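
A hedged sketch of what the new offset enables: a caller can pack
several ranges into a single hypercall input page by feeding the running
entry count back in as the offset. struct flush_range and fill_ranges()
are hypothetical, used only for illustration:

struct flush_range {
	struct list_head list;
	u64 start_gfn;
	u64 pages;
};

static int fill_ranges(struct hv_guest_mapping_flush_list *flush,
		       struct list_head *ranges)
{
	struct flush_range *r;
	int gpa_n = 0;

	list_for_each_entry(r, ranges, list) {
		/* Each call appends after the entries already filled. */
		gpa_n = hyperv_fill_flush_guest_mapping_list(flush, gpa_n,
				r->start_gfn, r->pages);
		if (gpa_n < 0)
			return gpa_n;	/* caller falls back to full flush */
	}

	return gpa_n;
}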



[PATCH] x86/Hyper-V: Fix definition HV_MAX_FLUSH_REP_COUNT

2019-02-22 Thread lantianyu1986
From: Lan Tianyu 

The max flush rep count of the HvFlushGuestPhysicalAddressList hypercall
is equal to the number of entries of union hv_gpa_page_range that can be
populated into the input parameter page. The original code lacks
parentheses around PAGE_SIZE - 2 * sizeof(u64), so operator precedence
reduces the macro to PAGE_SIZE - 2. This patch fixes it.

Cc: 
Fixes: cc4edae4b924 ("x86/hyper-v: Add HvFlushGuestAddressList hypercall support")
Signed-off-by: Lan Tianyu 
---
 arch/x86/include/asm/hyperv-tlfs.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 705dafc2d11a..2bdbbbcfa393 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -841,7 +841,7 @@ union hv_gpa_page_range {
  * count is equal with how many entries of union hv_gpa_page_range can
  * be populated into the input parameter page.
  */
-#define HV_MAX_FLUSH_REP_COUNT (PAGE_SIZE - 2 * sizeof(u64) /  \
+#define HV_MAX_FLUSH_REP_COUNT ((PAGE_SIZE - 2 * sizeof(u64)) /\
sizeof(union hv_gpa_page_range))
 
 struct hv_guest_mapping_flush_list {
-- 
2.14.4



[PATCH V4 1/3] x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available

2019-02-11 Thread lantianyu1986
From: Lan Tianyu 

Hyper-V doesn't provide irq remapping for the IO-APIC. To enable x2apic,
set the x2apic destination mode to physical mode when x2apic is available.
The Hyper-V IOMMU driver makes sure that CPUs assigned IO-APIC irqs have
8-bit APIC IDs.

Reviewed-by: Thomas Gleixner 
Signed-off-by: Lan Tianyu 
---
Change since v2:
   - Fix compile error due to x2apic_phys
   - Fix comment indent
Change since v1:
   - Remove redundant extern for x2apic_phys
---
 arch/x86/kernel/cpu/mshyperv.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index e81a2db..0c29e4e 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -328,6 +328,16 @@ static void __init ms_hyperv_init_platform(void)
 # ifdef CONFIG_SMP
smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
 # endif
+
+   /*
+* Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
+* set x2apic destination mode to physical mode when x2apic is available
+* and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs
+* have 8-bit APIC id.
+*/
+   if (IS_ENABLED(CONFIG_X86_X2APIC) && x2apic_supported())
+   x2apic_phys = 1;
+
 #endif
 }
 
-- 
2.7.4



[PATCH V4 0/3] x86/Hyper-V/IOMMU: Add Hyper-V IOMMU driver to support x2apic mode

2019-02-11 Thread lantianyu1986
From: Lan Tianyu 

On bare metal, enabling X2APIC mode requires an interrupt remapping
function to deliver irqs to CPUs with 32-bit APIC IDs. Hyper-V doesn't
provide an interrupt remapping function so far, but the Hyper-V MSI
protocol already supports delivering interrupts to CPUs whose virtual
processor index is greater than 255. IO-APIC interrupts still have the
8-bit APIC ID limitation.

This patchset adds a stub Hyper-V IOMMU driver so that X2APIC mode can be
enabled in Hyper-V Linux guests. The driver reports X2APIC interrupt
remapping capability when X2APIC mode is available. PATCH 1 sets the
X2APIC destination mode to physical when X2APIC is available. The Hyper-V
IOMMU driver scans CPUs 0~255 and adds a CPU to the IO-APIC max-affinity
cpumask if its APIC ID fits in 8 bits. The driver creates a Hyper-V irq
domain to limit IO-APIC interrupt affinity and make sure CPUs assigned
IO-APIC interrupts fall within that max-affinity cpumask.

Lan Tianyu (3):
  x86/Hyper-V: Set x2apic destination mode to physical when x2apic is   
 available
  HYPERV/IOMMU: Add Hyper-V stub IOMMU driver
  MAINTAINERS: Add Hyper-V IOMMU driver into Hyper-V CORE AND DRIVERS
scope

 MAINTAINERS|   1 +
 arch/x86/kernel/cpu/mshyperv.c |  10 +++
 drivers/iommu/Kconfig  |   9 ++
 drivers/iommu/Makefile |   1 +
 drivers/iommu/hyperv-iommu.c   | 194 +
 drivers/iommu/irq_remapping.c  |   3 +
 drivers/iommu/irq_remapping.h  |   1 +
 7 files changed, 219 insertions(+)
 create mode 100644 drivers/iommu/hyperv-iommu.c

-- 
2.7.4



[PATCH V3 0/3] x86/Hyper-V/IOMMU: Add Hyper-V IOMMU driver to support x2apic mode

2019-02-07 Thread lantianyu1986
From: Lan Tianyu 

On bare metal, enabling X2APIC mode requires an interrupt remapping
function to deliver irqs to CPUs with 32-bit APIC IDs. Hyper-V doesn't
provide an interrupt remapping function so far, but the Hyper-V MSI
protocol already supports delivering interrupts to CPUs whose virtual
processor index is greater than 255. IO-APIC interrupts still have the
8-bit APIC ID limitation.

This patchset adds a stub Hyper-V IOMMU driver so that X2APIC mode can be
enabled in Hyper-V Linux guests. The driver reports X2APIC interrupt
remapping capability when X2APIC mode is available. PATCH 1 sets the
X2APIC destination mode to physical when X2APIC is available. The Hyper-V
IOMMU driver scans CPUs 0~255 and adds a CPU to the IO-APIC max-affinity
cpumask if its APIC ID fits in 8 bits. The driver creates a Hyper-V irq
domain to limit IO-APIC interrupt affinity and make sure CPUs assigned
IO-APIC interrupts fall within that max-affinity cpumask.

Lan Tianyu (3):
  x86/Hyper-V: Set x2apic destination mode to physical when x2apic is   
 available
  HYPERV/IOMMU: Add Hyper-V stub IOMMU driver
  MAINTAINERS: Add Hyper-V IOMMU driver into Hyper-V CORE AND DRIVERS
scope

 MAINTAINERS|   1 +
 arch/x86/kernel/cpu/mshyperv.c |  10 +++
 drivers/iommu/Kconfig  |   8 ++
 drivers/iommu/Makefile |   1 +
 drivers/iommu/hyperv-iommu.c   | 194 +
 drivers/iommu/irq_remapping.c  |   3 +
 drivers/iommu/irq_remapping.h  |   1 +
 7 files changed, 218 insertions(+)
 create mode 100644 drivers/iommu/hyperv-iommu.c

-- 
2.7.4



[PATCH V3 1/3] x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available

2019-02-07 Thread lantianyu1986
From: Lan Tianyu 

Hyper-V doesn't provide irq remapping for the IO-APIC. To enable x2apic,
set the x2apic destination mode to physical mode when x2apic is available.
The Hyper-V IOMMU driver makes sure that CPUs assigned IO-APIC irqs have
8-bit APIC IDs.

Signed-off-by: Lan Tianyu 
---
Change since v2:
   - Fix compile error due to x2apic_phys
   - Fix comment indent
Change since v1:
   - Remove redundant extern for x2apic_phys
---
 arch/x86/kernel/cpu/mshyperv.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index e81a2db..0c29e4e 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -328,6 +328,16 @@ static void __init ms_hyperv_init_platform(void)
 # ifdef CONFIG_SMP
smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
 # endif
+
+   /*
+* Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
+* set x2apic destination mode to physical mode when x2apic is available
+* and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs
+* have 8-bit APIC id.
+*/
+   if (IS_ENABLED(CONFIG_X86_X2APIC) && x2apic_supported())
+   x2apic_phys = 1;
+
 #endif
 }
 
-- 
2.7.4



[PATCH V2 1/3] x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available

2019-02-02 Thread lantianyu1986
From: Lan Tianyu 

Hyper-V doesn't provide irq remapping for the IO-APIC. To enable x2apic,
set the x2apic destination mode to physical mode when x2apic is available.
The Hyper-V IOMMU driver makes sure that CPUs assigned IO-APIC irqs have
8-bit APIC IDs.

Signed-off-by: Lan Tianyu 
---
Change since v1:
   - Remove redundant extern for x2apic_phys
---
 arch/x86/kernel/cpu/mshyperv.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index e81a2db..4bd6d90 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -328,6 +328,16 @@ static void __init ms_hyperv_init_platform(void)
 # ifdef CONFIG_SMP
smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
 # endif
+
+/*
+ * Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
+ * set x2apic destination mode to physical mode when x2apic is available
+ * and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs
+ * have 8-bit APIC id.
+ */
+   if (IS_ENABLED(CONFIG_HYPERV_IOMMU) && x2apic_supported())
+   x2apic_phys = 1;
+
 #endif
 }
 
-- 
2.7.4



[PATCH V2 0/3] x86/Hyper-V/IOMMU: Add Hyper-V IOMMU driver to support x2apic mode

2019-02-02 Thread lantianyu1986
From: Lan Tianyu 

On bare metal, enabling X2APIC mode requires an interrupt remapping
function to deliver irqs to CPUs with 32-bit APIC IDs. Hyper-V doesn't
provide an interrupt remapping function so far, but the Hyper-V MSI
protocol already supports delivering interrupts to CPUs whose virtual
processor index is greater than 255. IO-APIC interrupts still have the
8-bit APIC ID limitation.

This patchset adds a stub Hyper-V IOMMU driver so that X2APIC mode can be
enabled in Hyper-V Linux guests. The driver reports X2APIC interrupt
remapping capability when X2APIC mode is available. PATCH 1 sets the
X2APIC destination mode to physical when X2APIC is available. The Hyper-V
IOMMU driver scans CPUs 0~255 and adds a CPU to the IO-APIC max-affinity
cpumask if its APIC ID fits in 8 bits. The driver creates a Hyper-V irq
domain to limit IO-APIC interrupt affinity and make sure CPUs assigned
IO-APIC interrupts fall within that max-affinity cpumask.

Lan Tianyu (3):
  x86/Hyper-V: Set x2apic destination mode to physical when x2apic is   
 available
  HYPERV/IOMMU: Add Hyper-V stub IOMMU driver
  MAINTAINERS: Add Hyper-V IOMMU driver into Hyper-V CORE AND DRIVERS
scope

 MAINTAINERS|   1 +
 arch/x86/kernel/cpu/mshyperv.c |  10 +++
 drivers/iommu/Kconfig  |   8 ++
 drivers/iommu/Makefile |   1 +
 drivers/iommu/hyperv-iommu.c   | 193 +
 drivers/iommu/irq_remapping.c  |   3 +
 drivers/iommu/irq_remapping.h  |   1 +
 7 files changed, 217 insertions(+)
 create mode 100644 drivers/iommu/hyperv-iommu.c

-- 
2.7.4



[PATCH V2 00/10] X86/KVM/Hyper-V: Add HV ept tlb range list flush support in KVM

2019-02-01 Thread lantianyu1986
From: Lan Tianyu 

This patchset introduces hv ept tlb range list flush function support
in the KVM MMU component. EPT tlbs for several address ranges can be
flushed via a single hypercall, and the new list flush function is used
in kvm_mmu_commit_zap_page() and FNAME(sync_page). This patchset also
adds hv ept tlb range flush support in more KVM MMU functions.

Change since v1:
   1) Make flush list as a hlist instead of list in order to 
   keep struct kvm_mmu_page size.
   2) Add last_level flag in the struct kvm_mmu_page instead
   of spte pointer
   3) Move tlb flush from kvm_mmu_notifier_clear_flush_young() to 
kvm_age_hva()
   4) Use range flush in the kvm_vm_ioctl_get/clear_dirty_log()

Lan Tianyu (10):
  X86/Hyper-V: Add parameter offset for
hyperv_fill_flush_guest_mapping_list()
  KVM/VMX: Fill range list in kvm_fill_hv_flush_list_func()
  KVM/MMU: Add last_level in the struct mmu_spte_page
  KVM/MMU: Introduce tlb flush with range list
  KVM/MMU: Flush tlb with range list in sync_page()
  KVM/MMU: Flush tlb directly in the kvm_mmu_slot_gfn_write_protect()
  KVM: Add kvm_get_memslot() to get memslot via slot id
  KVM: Use tlb range flush in the kvm_vm_ioctl_get/clear_dirty_log()
  KVM: Add flush parameter for kvm_age_hva()
  KVM/MMU: Use tlb range flush  in the kvm_age_hva()

 arch/arm/include/asm/kvm_host.h |  3 ++-
 arch/arm64/include/asm/kvm_host.h   |  3 ++-
 arch/mips/include/asm/kvm_host.h|  3 ++-
 arch/mips/kvm/mmu.c | 11 ++--
 arch/powerpc/include/asm/kvm_host.h |  3 ++-
 arch/powerpc/kvm/book3s.c   | 10 ++--
 arch/powerpc/kvm/e500_mmu_host.c|  3 ++-
 arch/x86/hyperv/nested.c|  4 +--
 arch/x86/include/asm/kvm_host.h | 11 +++-
 arch/x86/include/asm/mshyperv.h |  2 +-
 arch/x86/kvm/mmu.c  | 51 +
 arch/x86/kvm/mmu.h  |  7 +
 arch/x86/kvm/paging_tmpl.h  | 15 ---
 arch/x86/kvm/vmx/vmx.c  | 18 +++--
 arch/x86/kvm/x86.c  | 16 +---
 include/linux/kvm_host.h|  1 +
 virt/kvm/arm/mmu.c  | 13 --
 virt/kvm/kvm_main.c | 51 +++--
 18 files changed, 160 insertions(+), 65 deletions(-)

-- 
2.14.4



[PATCH V2 1/10] X86/Hyper-V: Add parameter offset for hyperv_fill_flush_guest_mapping_list()

2019-02-01 Thread lantianyu1986
From: Lan Tianyu 

Add an offset parameter to specify the start position at which flush
ranges are added to the guest address list of struct
hv_guest_mapping_flush_list.

Signed-off-by: Lan Tianyu 
---
 arch/x86/hyperv/nested.c| 4 ++--
 arch/x86/include/asm/mshyperv.h | 2 +-
 arch/x86/kvm/vmx/vmx.c  | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index dd0a843f766d..96f8bac7476d 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -58,11 +58,11 @@ EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
 
 int hyperv_fill_flush_guest_mapping_list(
struct hv_guest_mapping_flush_list *flush,
-   u64 start_gfn, u64 pages)
+   int offset, u64 start_gfn, u64 pages)
 {
u64 cur = start_gfn;
u64 additional_pages;
-   int gpa_n = 0;
+   int gpa_n = offset;
 
do {
/*
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index cc60e617931c..d6be685ab6b0 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -357,7 +357,7 @@ int hyperv_flush_guest_mapping_range(u64 as,
hyperv_fill_flush_list_func fill_func, void *data);
 int hyperv_fill_flush_guest_mapping_list(
struct hv_guest_mapping_flush_list *flush,
-   u64 start_gfn, u64 end_gfn);
+   int offset, u64 start_gfn, u64 end_gfn);
 
 #ifdef CONFIG_X86_64
 void hv_apic_init(void);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index f6915f10e584..9d954b4adce3 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -428,7 +428,7 @@ int kvm_fill_hv_flush_list_func(struct hv_guest_mapping_flush_list *flush,
 {
struct kvm_tlb_range *range = data;
 
-   return hyperv_fill_flush_guest_mapping_list(flush, range->start_gfn,
+   return hyperv_fill_flush_guest_mapping_list(flush, 0, range->start_gfn,
range->pages);
 }
 
-- 
2.14.4



[PATCH 1/3] x86/Hyper-V: Set x2apic destination mode to physical when x2apic is available

2019-01-31 Thread lantianyu1986
From: Lan Tianyu 

Hyper-V doesn't provide irq remapping for the IO-APIC. To enable x2apic,
set the x2apic destination mode to physical mode when x2apic is available.
The Hyper-V IOMMU driver makes sure that CPUs assigned IO-APIC irqs have
8-bit APIC IDs.

Signed-off-by: Lan Tianyu 
---
 arch/x86/kernel/cpu/mshyperv.c | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index e81a2db..9d62f33 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -36,6 +36,8 @@
 struct ms_hyperv_info ms_hyperv;
 EXPORT_SYMBOL_GPL(ms_hyperv);
 
+extern int x2apic_phys;
+
 #if IS_ENABLED(CONFIG_HYPERV)
 static void (*vmbus_handler)(void);
 static void (*hv_stimer0_handler)(void);
@@ -328,6 +330,18 @@ static void __init ms_hyperv_init_platform(void)
 # ifdef CONFIG_SMP
smp_ops.smp_prepare_boot_cpu = hv_smp_prepare_boot_cpu;
 # endif
+
+/*
+ * Hyper-V doesn't provide irq remapping for IO-APIC. To enable x2apic,
+ * set x2apic destination mode to physical mode when x2apic is available
+ * and Hyper-V IOMMU driver makes sure cpus assigned with IO-APIC irqs
+ * have 8-bit APIC id.
+ */
+# if IS_ENABLED(CONFIG_HYPERV_IOMMU)
+   if (x2apic_supported())
+   x2apic_phys = 1;
+# endif
+
 #endif
 }
 
-- 
2.7.4



[PATCH 0/3] x86/Hyper-V/IOMMU: Add Hyper-V IOMMU driver to support x2apic mode

2019-01-31 Thread lantianyu1986
From: Lan Tianyu 

On bare metal, enabling X2APIC mode requires an interrupt remapping
function to deliver irqs to CPUs with 32-bit APIC IDs. Hyper-V doesn't
provide an interrupt remapping function so far, but the Hyper-V MSI
protocol already supports delivering interrupts to CPUs whose virtual
processor index is greater than 255. IO-APIC interrupts still have the
8-bit APIC ID limitation.

This patchset adds a stub Hyper-V IOMMU driver so that X2APIC mode can be
enabled in Hyper-V Linux guests. The driver reports X2APIC interrupt
remapping capability when X2APIC mode is available. PATCH 1 sets the
X2APIC destination mode to physical when X2APIC is available. The Hyper-V
IOMMU driver scans CPUs 0~255 and adds a CPU to the IO-APIC max-affinity
cpumask if its APIC ID fits in 8 bits. The driver creates a Hyper-V irq
domain to limit IO-APIC interrupt affinity and make sure CPUs assigned
IO-APIC interrupts fall within that max-affinity cpumask.

Lan Tianyu (3):
  x86/Hyper-V: Set x2apic destination mode to physical when x2apic is   
 available
  HYPERV/IOMMU: Add Hyper-V stub IOMMU driver
  MAINTAINERS: Add Hyper-V IOMMU driver into Hyper-V CORE AND DRIVERS
scope

 MAINTAINERS|   1 +
 arch/x86/kernel/cpu/mshyperv.c |  14 +++
 drivers/iommu/Kconfig  |   7 ++
 drivers/iommu/Makefile |   1 +
 drivers/iommu/hyperv-iommu.c   | 189 +
 drivers/iommu/irq_remapping.c  |   3 +
 drivers/iommu/irq_remapping.h  |   1 +
 7 files changed, 216 insertions(+)
 create mode 100644 drivers/iommu/hyperv-iommu.c

-- 
2.7.4



[PATCH 1/11] X86/Hyper-V: Add parameter offset for hyperv_fill_flush_guest_mapping_list()

2019-01-04 Thread lantianyu1986
From: Lan Tianyu 

Add an offset parameter to specify the start position at which flush
ranges are added to the guest address list of struct
hv_guest_mapping_flush_list.

Signed-off-by: Lan Tianyu 
---
 arch/x86/hyperv/nested.c| 4 ++--
 arch/x86/include/asm/mshyperv.h | 2 +-
 arch/x86/kvm/vmx/vmx.c  | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index dd0a843f766d..96f8bac7476d 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -58,11 +58,11 @@ EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
 
 int hyperv_fill_flush_guest_mapping_list(
struct hv_guest_mapping_flush_list *flush,
-   u64 start_gfn, u64 pages)
+   int offset, u64 start_gfn, u64 pages)
 {
u64 cur = start_gfn;
u64 additional_pages;
-   int gpa_n = 0;
+   int gpa_n = offset;
 
do {
/*
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index cc60e617931c..d6be685ab6b0 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -357,7 +357,7 @@ int hyperv_flush_guest_mapping_range(u64 as,
hyperv_fill_flush_list_func fill_func, void *data);
 int hyperv_fill_flush_guest_mapping_list(
struct hv_guest_mapping_flush_list *flush,
-   u64 start_gfn, u64 end_gfn);
+   int offset, u64 start_gfn, u64 end_gfn);
 
 #ifdef CONFIG_X86_64
 void hv_apic_init(void);
diff --git a/arch/x86/kvm/vmx/vmx.c b/arch/x86/kvm/vmx/vmx.c
index 87224e4c2fd9..2c159efedc40 100644
--- a/arch/x86/kvm/vmx/vmx.c
+++ b/arch/x86/kvm/vmx/vmx.c
@@ -428,7 +428,7 @@ int kvm_fill_hv_flush_list_func(struct hv_guest_mapping_flush_list *flush,
 {
struct kvm_tlb_range *range = data;
 
-   return hyperv_fill_flush_guest_mapping_list(flush, range->start_gfn,
+   return hyperv_fill_flush_guest_mapping_list(flush, 0, range->start_gfn,
range->pages);
 }
 
-- 
2.14.4



[PATCH 00/11] X86/KVM/Hyper-V: Add HV ept tlb range list flush support in KVM

2019-01-04 Thread lantianyu1986
From: Lan Tianyu 

This patchset introduces hv ept tlb range list flush function support
in the KVM MMU component. EPT tlbs for several address ranges can be
flushed via a single hypercall, and the new list flush function is used
in kvm_mmu_commit_zap_page() and FNAME(sync_page). This patchset also
adds hv ept tlb range flush support in more KVM MMU functions.

Lan Tianyu (11):
  X86/Hyper-V: Add parameter offset for
hyperv_fill_flush_guest_mapping_list()
  KVM/VMX: Fill range list in kvm_fill_hv_flush_list_func()
  KVM: Add spte's point in the struct kvm_mmu_page
  KVM/MMU: Introduce tlb flush with range list
  KVM/MMU: Flush tlb directly in the kvm_mmu_slot_gfn_write_protect()
  KVM/MMU: Flush tlb with range list in sync_page()
  KVM: Remove redundant check in the kvm_get_dirty_log_protect()
  KVM: Make kvm_arch_mmu_enable_log_dirty_pt_masked() return value
  KVM/MMU: Flush tlb in the kvm_mmu_write_protect_pt_masked()
  KVM: Add flush parameter for kvm_age_hva()
  KVM/MMU: Flush tlb in the kvm_age_rmapp()

 arch/arm/include/asm/kvm_host.h |  3 +-
 arch/arm64/include/asm/kvm_host.h   |  3 +-
 arch/mips/include/asm/kvm_host.h|  3 +-
 arch/mips/kvm/mmu.c |  8 +++-
 arch/powerpc/include/asm/kvm_host.h |  3 +-
 arch/powerpc/kvm/book3s.c   |  3 +-
 arch/powerpc/kvm/e500_mmu_host.c|  3 +-
 arch/x86/hyperv/nested.c|  4 +-
 arch/x86/include/asm/kvm_host.h | 11 +-
 arch/x86/include/asm/mshyperv.h |  2 +-
 arch/x86/kvm/mmu.c  | 73 -
 arch/x86/kvm/paging_tmpl.h  | 18 -
 arch/x86/kvm/vmx/vmx.c  | 18 -
 include/linux/kvm_host.h|  2 +-
 virt/kvm/arm/mmu.c  |  8 +++-
 virt/kvm/kvm_main.c | 18 -
 16 files changed, 141 insertions(+), 39 deletions(-)

-- 
2.14.4



[Resend PATCH V5 0/10] x86/KVM/Hyper-v: Add HV ept tlb range flush hypercall support in KVM

2018-12-06 Thread lantianyu1986
From: Lan Tianyu 

For nested memory virtualization, Hyper-V doesn't write-protect the L1
hypervisor's EPT page directory and page table nodes to track changes;
instead it relies on the guest to report changes via the
HvFlushGuestAddressList hypercall. The HvFlushGuestAddressList hypercall
provides a way to flush the EPT page table for ranges specified by the
L1 hypervisor.

If the L1 hypervisor uses INVEPT or the HvFlushGuestAddressSpace
hypercall to flush the EPT tlb, Hyper-V invalidates the associated EPT
shadow page table and syncs the L1 EPT table when the next EPT page
fault is triggered. The HvFlushGuestAddressList hypercall helps to avoid
such redundant EPT page faults and shadow page table synchronization.

This patchset is based on the patch "KVM/VMX: Check ept_pointer before
flushing ept tlb" (https://marc.info/?l=kvm&m=154408169705686&w=2).

Change since v4:
   1) Split flush address and flush list patches. This patchset only
   contains flush address patches. Will post flush list patches later.
   2) Expose function hyperv_fill_flush_guest_mapping_list()
   out of hyperv file
   3) Adjust parameter of hyperv_flush_guest_mapping_range()
   4) Reorder patchset and move Hyper-V and VMX changes ahead.

Change since v3:
1) Remove code of updating "tlbs_dirty" in 
kvm_flush_remote_tlbs_with_range()
2) Remove directly tlb flush in the kvm_handle_hva_range()
3) Move tlb flush in kvm_set_pte_rmapp() to 
kvm_mmu_notifier_change_pte()
4) Combine Vitaly's "don't pass EPT configuration info to
vmx_hv_remote_flush_tlb()" fix

Change since v2:
   1) Fix comment in the kvm_flush_remote_tlbs_with_range()
   2) Move HV_MAX_FLUSH_PAGES and HV_MAX_FLUSH_REP_COUNT to
hyperv-tlfs.h.
   3) Calculate HV_MAX_FLUSH_REP_COUNT in the macro definition
   4) Use HV_MAX_FLUSH_REP_COUNT to define length of gpa_list in
struct hv_guest_mapping_flush_list.

Change since v1:
   1) Convert "end_gfn" of struct kvm_tlb_range to "pages" in order
   to avoid confusion as to whether "end_gfn" is inclusive or exclusive.
   2) Add hyperv tlb range struct and replace kvm tlb range struct
  with new struct in order to avoid using kvm struct in the hyperv
  code directly.



Lan Tianyu (10):
  KVM: Add tlb_remote_flush_with_range callback in kvm_x86_ops
  x86/hyper-v: Add HvFlushGuestAddressList hypercall support
  x86/Hyper-v: Add trace in the
hyperv_nested_flush_guest_mapping_range()
  KVM/VMX: Add hv tlb range flush support
  KVM/MMU: Add tlb flush with range helper function
  KVM: Replace old tlb flush function with new one to flush a specified
range.
  KVM: Make kvm_set_spte_hva() return int
  KVM/MMU: Move tlb flush in kvm_set_pte_rmapp() to
kvm_mmu_notifier_change_pte()
  KVM/MMU: Flush tlb directly in the kvm_set_pte_rmapp()
  KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()

 arch/arm/include/asm/kvm_host.h |  2 +-
 arch/arm64/include/asm/kvm_host.h   |  2 +-
 arch/mips/include/asm/kvm_host.h|  2 +-
 arch/mips/kvm/mmu.c |  3 +-
 arch/powerpc/include/asm/kvm_host.h |  2 +-
 arch/powerpc/kvm/book3s.c   |  3 +-
 arch/powerpc/kvm/e500_mmu_host.c|  3 +-
 arch/x86/hyperv/nested.c| 80 +++
 arch/x86/include/asm/hyperv-tlfs.h  | 32 +
 arch/x86/include/asm/kvm_host.h |  9 +++-
 arch/x86/include/asm/mshyperv.h | 15 ++
 arch/x86/include/asm/trace/hyperv.h | 14 ++
 arch/x86/kvm/mmu.c  | 96 +
 arch/x86/kvm/paging_tmpl.h  |  3 +-
 arch/x86/kvm/vmx.c  | 63 +---
 virt/kvm/arm/mmu.c  |  6 ++-
 virt/kvm/kvm_main.c |  5 +-
 17 files changed, 292 insertions(+), 48 deletions(-)

-- 
2.14.4



[Resend PATCH V5 1/10] KVM: Add tlb_remote_flush_with_range callback in kvm_x86_ops

2018-12-06 Thread lantianyu1986
From: Lan Tianyu 

Add a flush range callback in kvm_x86_ops that platforms can use to
register their associated function. The parameter "kvm_tlb_range"
accepts a single range; a flush list contains a list of ranges.

Signed-off-by: Lan Tianyu 
---
Change since v1:
   Change "end_gfn" to "pages" to aviod confusion as to whether
"end_gfn" is inclusive or exlusive.
---
 arch/x86/include/asm/kvm_host.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index fbda5a917c5b..fc7513ecfc13 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -439,6 +439,11 @@ struct kvm_mmu {
u64 pdptrs[4]; /* pae */
 };
 
+struct kvm_tlb_range {
+   u64 start_gfn;
+   u64 pages;
+};
+
 enum pmc_type {
KVM_PMC_GP = 0,
KVM_PMC_FIXED,
@@ -1042,6 +1047,8 @@ struct kvm_x86_ops {
 
void (*tlb_flush)(struct kvm_vcpu *vcpu, bool invalidate_gpa);
int  (*tlb_remote_flush)(struct kvm *kvm);
+   int  (*tlb_remote_flush_with_range)(struct kvm *kvm,
+   struct kvm_tlb_range *range);
 
/*
 * Flush any TLB entries associated with the given GVA.
-- 
2.14.4
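
For illustration, a hypothetical call site for the new callback; gfn and
nr_pages stand in for whatever range the caller wants flushed:

	struct kvm_tlb_range range = {
		.start_gfn = gfn,	/* first guest frame to flush */
		.pages = nr_pages,	/* a page count, not an end gfn */
	};

	if (kvm_x86_ops->tlb_remote_flush_with_range)
		kvm_x86_ops->tlb_remote_flush_with_range(kvm, &range);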



[Resend PATCH V5 3/10] x86/Hyper-v: Add trace in the hyperv_nested_flush_guest_mapping_range()

2018-12-06 Thread lantianyu1986
From: Lan Tianyu 

This patch adds trace logging in hyperv_flush_guest_mapping_range() via
the new hyperv_nested_flush_guest_mapping_range trace event.

Signed-off-by: Lan Tianyu 
---
 arch/x86/hyperv/nested.c|  1 +
 arch/x86/include/asm/trace/hyperv.h | 14 ++
 2 files changed, 15 insertions(+)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index 3d0f31e46954..dd0a843f766d 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -130,6 +130,7 @@ int hyperv_flush_guest_mapping_range(u64 as,
else
ret = status;
 fault:
+   trace_hyperv_nested_flush_guest_mapping_range(as, ret);
return ret;
 }
 EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping_range);
diff --git a/arch/x86/include/asm/trace/hyperv.h b/arch/x86/include/asm/trace/hyperv.h
index 2e6245a023ef..ace464f09681 100644
--- a/arch/x86/include/asm/trace/hyperv.h
+++ b/arch/x86/include/asm/trace/hyperv.h
@@ -42,6 +42,20 @@ TRACE_EVENT(hyperv_nested_flush_guest_mapping,
TP_printk("address space %llx ret %d", __entry->as, __entry->ret)
);
 
+TRACE_EVENT(hyperv_nested_flush_guest_mapping_range,
+   TP_PROTO(u64 as, int ret),
+   TP_ARGS(as, ret),
+
+   TP_STRUCT__entry(
+   __field(u64, as)
+   __field(int, ret)
+   ),
+   TP_fast_assign(__entry->as = as;
+  __entry->ret = ret;
+   ),
+   TP_printk("address space %llx ret %d", __entry->as, __entry->ret)
+   );
+
 TRACE_EVENT(hyperv_send_ipi_mask,
TP_PROTO(const struct cpumask *cpus,
 int vector),
-- 
2.14.4
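
Once the event is enabled through the kernel's tracing interface, each
flush produces a line per the TP_printk() format above; an illustrative
sample with made-up values:

  hyperv_nested_flush_guest_mapping_range: address space 10e55d000 ret 0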



[Resend PATCH V5 2/10] x86/hyper-v: Add HvFlushGuestAddressList hypercall support

2018-12-06 Thread lantianyu1986
From: Lan Tianyu 

Hyper-V provides the HvFlushGuestAddressList() hypercall to flush the EPT
tlb for specified ranges. This patch adds support for this hypercall.

Reviewed-by:  Michael Kelley 
Signed-off-by: Lan Tianyu 
---
Change since v4:
   - Expose function hyperv_fill_flush_guest_mapping_list()
   out of hyperv file
   - Adjust parameter of hyperv_flush_guest_mapping_range()

Change since v2:
  Fix some coding style issues
- Move HV_MAX_FLUSH_PAGES and HV_MAX_FLUSH_REP_COUNT to
hyperv-tlfs.h.
- Calculate HV_MAX_FLUSH_REP_COUNT in the macro definition
- Use HV_MAX_FLUSH_REP_COUNT to define length of gpa_list in
struct hv_guest_mapping_flush_list.

Change since v1:
   Add hyperv tlb flush struct to avoid use kvm tlb flush struct
in the hyperv file.
---
 arch/x86/hyperv/nested.c   | 79 ++
 arch/x86/include/asm/hyperv-tlfs.h | 32 +++
 arch/x86/include/asm/mshyperv.h| 15 
 3 files changed, 126 insertions(+)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index b8e60cc50461..3d0f31e46954 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -7,6 +7,7 @@
  *
  * Author : Lan Tianyu 
  */
+#define pr_fmt(fmt)  "Hyper-V: " fmt
 
 
 #include 
@@ -54,3 +55,81 @@ int hyperv_flush_guest_mapping(u64 as)
return ret;
 }
 EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
+
+int hyperv_fill_flush_guest_mapping_list(
+   struct hv_guest_mapping_flush_list *flush,
+   u64 start_gfn, u64 pages)
+{
+   u64 cur = start_gfn;
+   u64 additional_pages;
+   int gpa_n = 0;
+
+   do {
+   /*
+* If flush requests exceed max flush count, go back to
+* flush tlbs without range.
+*/
+   if (gpa_n >= HV_MAX_FLUSH_REP_COUNT)
+   return -ENOSPC;
+
+   additional_pages = min_t(u64, pages, HV_MAX_FLUSH_PAGES) - 1;
+
+   flush->gpa_list[gpa_n].page.additional_pages = additional_pages;
+   flush->gpa_list[gpa_n].page.largepage = false;
+   flush->gpa_list[gpa_n].page.basepfn = cur;
+
+   pages -= additional_pages + 1;
+   cur += additional_pages + 1;
+   gpa_n++;
+   } while (pages > 0);
+
+   return gpa_n;
+}
+EXPORT_SYMBOL_GPL(hyperv_fill_flush_guest_mapping_list);
+
+int hyperv_flush_guest_mapping_range(u64 as,
+   hyperv_fill_flush_list_func fill_flush_list_func, void *data)
+{
+   struct hv_guest_mapping_flush_list **flush_pcpu;
+   struct hv_guest_mapping_flush_list *flush;
+   u64 status = 0;
+   unsigned long flags;
+   int ret = -ENOTSUPP;
+   int gpa_n = 0;
+
+   if (!hv_hypercall_pg || !fill_flush_list_func)
+   goto fault;
+
+   local_irq_save(flags);
+
+   flush_pcpu = (struct hv_guest_mapping_flush_list **)
+   this_cpu_ptr(hyperv_pcpu_input_arg);
+
+   flush = *flush_pcpu;
+   if (unlikely(!flush)) {
+   local_irq_restore(flags);
+   goto fault;
+   }
+
+   flush->address_space = as;
+   flush->flags = 0;
+
+   gpa_n = fill_flush_list_func(flush, data);
+   if (gpa_n < 0) {
+   local_irq_restore(flags);
+   goto fault;
+   }
+
+   status = hv_do_rep_hypercall(HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST,
+gpa_n, 0, flush, NULL);
+
+   local_irq_restore(flags);
+
+   if (!(status & HV_HYPERCALL_RESULT_MASK))
+   ret = 0;
+   else
+   ret = status;
+fault:
+   return ret;
+}
+EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping_range);
diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 4139f7650fe5..405a378e1c62 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -10,6 +10,7 @@
 #define _ASM_X86_HYPERV_TLFS_H
 
 #include 
+#include 
 
 /*
  * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
@@ -358,6 +359,7 @@ struct hv_tsc_emulation_status {
 #define HVCALL_POST_MESSAGE0x005c
 #define HVCALL_SIGNAL_EVENT0x005d
 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
+#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
 
 #define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE   0x0001
 #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT12
@@ -757,6 +759,36 @@ struct hv_guest_mapping_flush {
u64 flags;
 };
 
+/*
+ *  HV_MAX_FLUSH_PAGES = "additional_pages" + 1. It's limited
+ *  by the bitwidth of "additional_pages" in union hv_gpa_page_range.
+ */
+#define HV_MAX_FLUSH_PAGES (2048)
+
+/* HvFlushGuestPhysicalAddressList hypercall */
+union hv_gpa_page_range {
+   u64 address_space;
+   struct {
+   u64 additional_pages:11;
+   u64 largepage:1;
+  
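
A worked example of the gpa_list encoding filled in by
hyperv_fill_flush_guest_mapping_list() above: each entry covers basepfn
plus additional_pages further pages, so one entry describes at most
HV_MAX_FLUSH_PAGES (2048) pages. The GFN values below are made up:

	/* 3000 pages starting at GFN 0x10000 need two entries. */
	flush->gpa_list[0].page.basepfn = 0x10000;
	flush->gpa_list[0].page.additional_pages = 2047;	/* 2048 pages */
	flush->gpa_list[0].page.largepage = false;

	flush->gpa_list[1].page.basepfn = 0x10000 + 2048;
	flush->gpa_list[1].page.additional_pages = 951;		/* 952 pages */
	flush->gpa_list[1].page.largepage = false;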

[Resend PATCH V5 2/10] x86/hyper-v: Add HvFlushGuestAddressList hypercall support

2018-11-08 Thread lantianyu1986
From: Lan Tianyu 

Hyper-V provides the HvFlushGuestAddressList() hypercall to flush the EPT
tlb for specified ranges. This patch adds support for this hypercall.

Reviewed-by:  Michael Kelley 
Signed-off-by: Lan Tianyu 
---
Change since v4:
   - Expose function hyperv_fill_flush_guest_mapping_list()
   out of hyperv file
   - Adjust parameter of hyperv_flush_guest_mapping_range()

Change since v2:
  Fix some coding style issues
- Move HV_MAX_FLUSH_PAGES and HV_MAX_FLUSH_REP_COUNT to
hyperv-tlfs.h.
- Calculate HV_MAX_FLUSH_REP_COUNT in the macro definition
- Use HV_MAX_FLUSH_REP_COUNT to define length of gpa_list in
struct hv_guest_mapping_flush_list.

Change since v1:
   Add hyperv tlb flush struct to avoid use kvm tlb flush struct
in the hyperv file.
---
 arch/x86/hyperv/nested.c   | 79 ++
 arch/x86/include/asm/hyperv-tlfs.h | 32 +++
 arch/x86/include/asm/mshyperv.h| 15 
 3 files changed, 126 insertions(+)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index b8e60cc50461..3d0f31e46954 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -7,6 +7,7 @@
  *
  * Author : Lan Tianyu 
  */
+#define pr_fmt(fmt)  "Hyper-V: " fmt
 
 
 #include 
@@ -54,3 +55,81 @@ int hyperv_flush_guest_mapping(u64 as)
return ret;
 }
 EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
+
+int hyperv_fill_flush_guest_mapping_list(
+   struct hv_guest_mapping_flush_list *flush,
+   u64 start_gfn, u64 pages)
+{
+   u64 cur = start_gfn;
+   u64 additional_pages;
+   int gpa_n = 0;
+
+   do {
+   /*
+* If flush requests exceed max flush count, go back to
+* flush tlbs without range.
+*/
+   if (gpa_n >= HV_MAX_FLUSH_REP_COUNT)
+   return -ENOSPC;
+
+   additional_pages = min_t(u64, pages, HV_MAX_FLUSH_PAGES) - 1;
+
+   flush->gpa_list[gpa_n].page.additional_pages = additional_pages;
+   flush->gpa_list[gpa_n].page.largepage = false;
+   flush->gpa_list[gpa_n].page.basepfn = cur;
+
+   pages -= additional_pages + 1;
+   cur += additional_pages + 1;
+   gpa_n++;
+   } while (pages > 0);
+
+   return gpa_n;
+}
+EXPORT_SYMBOL_GPL(hyperv_fill_flush_guest_mapping_list);
+
+int hyperv_flush_guest_mapping_range(u64 as,
+   hyperv_fill_flush_list_func fill_flush_list_func, void *data)
+{
+   struct hv_guest_mapping_flush_list **flush_pcpu;
+   struct hv_guest_mapping_flush_list *flush;
+   u64 status = 0;
+   unsigned long flags;
+   int ret = -ENOTSUPP;
+   int gpa_n = 0;
+
+   if (!hv_hypercall_pg || !fill_flush_list_func)
+   goto fault;
+
+   local_irq_save(flags);
+
+   flush_pcpu = (struct hv_guest_mapping_flush_list **)
+   this_cpu_ptr(hyperv_pcpu_input_arg);
+
+   flush = *flush_pcpu;
+   if (unlikely(!flush)) {
+   local_irq_restore(flags);
+   goto fault;
+   }
+
+   flush->address_space = as;
+   flush->flags = 0;
+
+   gpa_n = fill_flush_list_func(flush, data);
+   if (gpa_n < 0) {
+   local_irq_restore(flags);
+   goto fault;
+   }
+
+   status = hv_do_rep_hypercall(HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST,
+gpa_n, 0, flush, NULL);
+
+   local_irq_restore(flags);
+
+   if (!(status & HV_HYPERCALL_RESULT_MASK))
+   ret = 0;
+   else
+   ret = status;
+fault:
+   return ret;
+}
+EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping_range);
diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 4139f7650fe5..405a378e1c62 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -10,6 +10,7 @@
 #define _ASM_X86_HYPERV_TLFS_H
 
 #include 
+#include 
 
 /*
  * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
@@ -358,6 +359,7 @@ struct hv_tsc_emulation_status {
 #define HVCALL_POST_MESSAGE0x005c
 #define HVCALL_SIGNAL_EVENT0x005d
 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
+#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
 
 #define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE   0x0001
 #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT12
@@ -757,6 +759,36 @@ struct hv_guest_mapping_flush {
u64 flags;
 };
 
+/*
+ *  HV_MAX_FLUSH_PAGES = "additional_pages" + 1. It's limited
+ *  by the bitwidth of "additional_pages" in union hv_gpa_page_range.
+ */
+#define HV_MAX_FLUSH_PAGES (2048)
+
+/* HvFlushGuestPhysicalAddressList hypercall */
+union hv_gpa_page_range {
+   u64 address_space;
+   struct {
+   u64 additional_pages:11;
+   u64 largepage:1;
+  

[Resend PATCH V5 3/10] x86/Hyper-v: Add trace in the hyperv_nested_flush_guest_mapping_range()

2018-11-08 Thread lantianyu1986
From: Lan Tianyu 

This patch adds trace logging in hyperv_flush_guest_mapping_range() via
the new hyperv_nested_flush_guest_mapping_range trace event.

Signed-off-by: Lan Tianyu 
---
 arch/x86/hyperv/nested.c|  1 +
 arch/x86/include/asm/trace/hyperv.h | 14 ++
 2 files changed, 15 insertions(+)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index 3d0f31e46954..dd0a843f766d 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -130,6 +130,7 @@ int hyperv_flush_guest_mapping_range(u64 as,
else
ret = status;
 fault:
+   trace_hyperv_nested_flush_guest_mapping_range(as, ret);
return ret;
 }
 EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping_range);
diff --git a/arch/x86/include/asm/trace/hyperv.h b/arch/x86/include/asm/trace/hyperv.h
index 2e6245a023ef..ace464f09681 100644
--- a/arch/x86/include/asm/trace/hyperv.h
+++ b/arch/x86/include/asm/trace/hyperv.h
@@ -42,6 +42,20 @@ TRACE_EVENT(hyperv_nested_flush_guest_mapping,
TP_printk("address space %llx ret %d", __entry->as, __entry->ret)
);
 
+TRACE_EVENT(hyperv_nested_flush_guest_mapping_range,
+   TP_PROTO(u64 as, int ret),
+   TP_ARGS(as, ret),
+
+   TP_STRUCT__entry(
+   __field(u64, as)
+   __field(int, ret)
+   ),
+   TP_fast_assign(__entry->as = as;
+  __entry->ret = ret;
+   ),
+   TP_printk("address space %llx ret %d", __entry->as, __entry->ret)
+   );
+
 TRACE_EVENT(hyperv_send_ipi_mask,
TP_PROTO(const struct cpumask *cpus,
 int vector),
-- 
2.14.4



[Resend PATCH V5 1/10] KVM: Add tlb_remote_flush_with_range callback in kvm_x86_ops

2018-11-08 Thread lantianyu1986
From: Lan Tianyu 

Add a flush range callback in kvm_x86_ops that platforms can use to
register their associated function. The parameter "kvm_tlb_range"
accepts a single range; a flush list contains a list of ranges.

Signed-off-by: Lan Tianyu 
---
Change since v1:
   Change "end_gfn" to "pages" to aviod confusion as to whether
"end_gfn" is inclusive or exlusive.
---
 arch/x86/include/asm/kvm_host.h | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 55e51ff7e421..c8a65f0a7107 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -439,6 +439,11 @@ struct kvm_mmu {
u64 pdptrs[4]; /* pae */
 };
 
+struct kvm_tlb_range {
+   u64 start_gfn;
+   u64 pages;
+};
+
 enum pmc_type {
KVM_PMC_GP = 0,
KVM_PMC_FIXED,
@@ -1042,6 +1047,8 @@ struct kvm_x86_ops {
 
void (*tlb_flush)(struct kvm_vcpu *vcpu, bool invalidate_gpa);
int  (*tlb_remote_flush)(struct kvm *kvm);
+   int  (*tlb_remote_flush_with_range)(struct kvm *kvm,
+   struct kvm_tlb_range *range);
 
/*
 * Flush any TLB entries associated with the given GVA.
-- 
2.14.4



[PATCH V5 00/10] x86/KVM/Hyper-v: Add HV ept tlb range flush hypercall support in KVM

2018-11-08 Thread lantianyu1986
From: Lan Tianyu 

Sorry, some patches were blocked and I am resending via another account.

For nested memory virtualization, Hyper-V doesn't write-protect the L1
hypervisor's EPT page directory and page table nodes to track changes;
instead it relies on the guest to report changes via the
HvFlushGuestAddressList hypercall. The HvFlushGuestAddressList hypercall
provides a way to flush the EPT page table for ranges specified by the
L1 hypervisor.

If the L1 hypervisor uses INVEPT or the HvFlushGuestAddressSpace
hypercall to flush the EPT tlb, Hyper-V invalidates the associated EPT
shadow page table and syncs the L1 EPT table when the next EPT page
fault is triggered. The HvFlushGuestAddressList hypercall helps to avoid
such redundant EPT page faults and shadow page table synchronization.

This patchset is rebased on Linux 4.20-rc1 and the patch "KVM/VMX: Check
ept_pointer before flushing ept tlb"
(https://www.mail-archive.com/linux-kernel@vger.kernel.org/msg1798827.html).

Change since v4:
   1) Split flush address and flush list patches. This patchset only
   contains flush address patches. Will post flush list patches later.
   2) Expose function hyperv_fill_flush_guest_mapping_list()
   out of hyperv file
   3) Adjust parameter of hyperv_flush_guest_mapping_range()
   4) Reorder patchset and move Hyper-V and VMX changes ahead.

Change since v3:
1) Remove code of updating "tlbs_dirty" in 
kvm_flush_remote_tlbs_with_range()
2) Remove directly tlb flush in the kvm_handle_hva_range()
3) Move tlb flush in kvm_set_pte_rmapp() to 
kvm_mmu_notifier_change_pte()
4) Combine Vitaly's "don't pass EPT configuration info to
vmx_hv_remote_flush_tlb()" fix

Change since v2:
   1) Fix comment in the kvm_flush_remote_tlbs_with_range()
   2) Move HV_MAX_FLUSH_PAGES and HV_MAX_FLUSH_REP_COUNT to
hyperv-tlfs.h.
   3) Calculate HV_MAX_FLUSH_REP_COUNT in the macro definition
   4) Use HV_MAX_FLUSH_REP_COUNT to define length of gpa_list in
struct hv_guest_mapping_flush_list.

Change since v1:
   1) Convert "end_gfn" of struct kvm_tlb_range to "pages" in order
  to avoid confusion as to whether "end_gfn" is inclusive or exlusive.
   2) Add hyperv tlb range struct and replace kvm tlb range struct
  with new struct in order to avoid using kvm struct in the hyperv
  code directly.


Lan Tianyu (10):
  KVM: Add tlb_remote_flush_with_range callback in kvm_x86_ops
  x86/hyper-v: Add HvFlushGuestAddressList hypercall support
  x86/Hyper-v: Add trace in the
hyperv_nested_flush_guest_mapping_range()
  KVM/VMX: Add hv tlb range flush support
  KVM/MMU: Add tlb flush with range helper function
  KVM: Replace old tlb flush function with new one to flush a specified
range.
  KVM: Make kvm_set_spte_hva() return int
  KVM/MMU: Move tlb flush in kvm_set_pte_rmapp() to
kvm_mmu_notifier_change_pte()
  KVM/MMU: Flush tlb directly in the kvm_set_pte_rmapp()
  KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()

 arch/arm/include/asm/kvm_host.h |  2 +-
 arch/arm64/include/asm/kvm_host.h   |  2 +-
 arch/mips/include/asm/kvm_host.h|  2 +-
 arch/mips/kvm/mmu.c |  3 +-
 arch/powerpc/include/asm/kvm_host.h |  2 +-
 arch/powerpc/kvm/book3s.c   |  3 +-
 arch/powerpc/kvm/e500_mmu_host.c|  3 +-
 arch/x86/hyperv/nested.c| 80 +++
 arch/x86/include/asm/hyperv-tlfs.h  | 32 +
 arch/x86/include/asm/kvm_host.h |  9 +++-
 arch/x86/include/asm/mshyperv.h | 15 ++
 arch/x86/include/asm/trace/hyperv.h | 14 ++
 arch/x86/kvm/mmu.c  | 96 +
 arch/x86/kvm/paging_tmpl.h  |  3 +-
 arch/x86/kvm/vmx.c  | 69 ++
 virt/kvm/arm/mmu.c  |  6 ++-
 virt/kvm/kvm_main.c |  5 +-
 17 files changed, 295 insertions(+), 51 deletions(-)

-- 
2.14.4



[PATCH V4 12/15] x86/hyper-v: Add HvFlushGuestAddressList hypercall support

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

Hyper-V provides the HvFlushGuestAddressList() hypercall to flush the
EPT tlb for specified ranges. This patch adds support for that
hypercall.
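
For a plain range flush the caller fills a struct hyperv_tlb_range and
leaves the flush list empty, roughly like this (an illustrative
sketch, not part of this patch):

        struct hyperv_tlb_range range = {
                .start_gfn   = start_gfn,
                .pages       = pages,
                .flush_list  = NULL,
        };

        ret = hyperv_flush_guest_mapping_range(as, &range);

Each entry of the hypercall's gpa_list covers at most
HV_MAX_FLUSH_PAGES (2048) pages, so a 5000-page request, for example,
is encoded by fill_flush_list() below as three entries of 2048, 2048
and 904 pages.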

Reviewed-by:  Michael Kelley 
Signed-off-by: Lan Tianyu 
---
Change since v2:
   Fix some coding style issues:
   - Move HV_MAX_FLUSH_PAGES and HV_MAX_FLUSH_REP_COUNT to
     hyperv-tlfs.h.
   - Calculate HV_MAX_FLUSH_REP_COUNT in the macro definition.
   - Use HV_MAX_FLUSH_REP_COUNT to define the length of gpa_list in
     struct hv_guest_mapping_flush_list.

Change since v1:
   Add a hyperv tlb flush struct to avoid using the kvm tlb flush
   struct in the hyperv file.
---
 arch/x86/hyperv/nested.c   | 84 ++
 arch/x86/include/asm/hyperv-tlfs.h | 32 +++
 arch/x86/include/asm/mshyperv.h| 16 
 3 files changed, 132 insertions(+)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index b8e60cc50461..a6fdfec63c7d 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -7,6 +7,7 @@
  *
  * Author : Lan Tianyu 
  */
+#define pr_fmt(fmt)  "Hyper-V: " fmt
 
 
 #include 
@@ -54,3 +55,86 @@ int hyperv_flush_guest_mapping(u64 as)
return ret;
 }
 EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping);
+
+static int fill_flush_list(union hv_gpa_page_range gpa_list[],
+   int offset, u64 start_gfn, u64 pages)
+{
+   int gpa_n = offset;
+   u64 cur = start_gfn;
+   u64 additional_pages;
+
+   do {
+   /*
+* If flush requests exceed max flush count, go back to
+* flush tlbs without range.
+*/
+   if (gpa_n >= HV_MAX_FLUSH_REP_COUNT)
+   return -ENOSPC;
+
+   additional_pages = min_t(u64, pages, HV_MAX_FLUSH_PAGES) - 1;
+
+   gpa_list[gpa_n].page.additional_pages = additional_pages;
+   gpa_list[gpa_n].page.largepage = false;
+   gpa_list[gpa_n].page.basepfn = cur;
+
+   pages -= additional_pages + 1;
+   cur += additional_pages + 1;
+   gpa_n++;
+   } while (pages > 0);
+
+   return gpa_n;
+}
+
+int hyperv_flush_guest_mapping_range(u64 as, struct hyperv_tlb_range *range)
+{
+   struct hv_guest_mapping_flush_list **flush_pcpu;
+   struct hv_guest_mapping_flush_list *flush;
+   u64 status = 0;
+   unsigned long flags;
+   int ret = -ENOTSUPP;
+   int gpa_n = 0;
+
+   if (!hv_hypercall_pg)
+   goto fault;
+
+   local_irq_save(flags);
+
+   flush_pcpu = (struct hv_guest_mapping_flush_list **)
+   this_cpu_ptr(hyperv_pcpu_input_arg);
+
+   flush = *flush_pcpu;
+   if (unlikely(!flush)) {
+   local_irq_restore(flags);
+   goto fault;
+   }
+
+   flush->address_space = as;
+   flush->flags = 0;
+
+   if (!range->flush_list)
+   gpa_n = fill_flush_list(flush->gpa_list, gpa_n,
+   range->start_gfn, range->pages);
+   else if (range->parse_flush_list_func)
+   gpa_n = range->parse_flush_list_func(flush->gpa_list, gpa_n,
+   range->flush_list, fill_flush_list);
+   else
+   gpa_n = -1;
+
+   if (gpa_n < 0) {
+   local_irq_restore(flags);
+   goto fault;
+   }
+
+   status = hv_do_rep_hypercall(HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST,
+gpa_n, 0, flush, NULL);
+
+   local_irq_restore(flags);
+
+   if (!(status & HV_HYPERCALL_RESULT_MASK))
+   ret = 0;
+   else
+   ret = status;
+fault:
+   return ret;
+}
+EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping_range);
diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 00e01d215f74..cf59250c284a 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -10,6 +10,7 @@
 #define _ASM_X86_HYPERV_TLFS_H
 
 #include 
+#include 
 
 /*
  * The below CPUID leaves are present if VersionAndFeatures.HypervisorPresent
@@ -353,6 +354,7 @@ struct hv_tsc_emulation_status {
 #define HVCALL_POST_MESSAGE0x005c
 #define HVCALL_SIGNAL_EVENT0x005d
 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
+#define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
 
 #define HV_X64_MSR_VP_ASSIST_PAGE_ENABLE   0x0001
 #define HV_X64_MSR_VP_ASSIST_PAGE_ADDRESS_SHIFT12
@@ -752,6 +754,36 @@ struct hv_guest_mapping_flush {
u64 flags;
 };
 
+/*
+ *  HV_MAX_FLUSH_PAGES = "additional_pages" + 1. It's limited
+ *  by the bitwidth of "additional_pages" in union hv_gpa_page_range.
+ */
+#define HV_MAX_FLUSH_PAGES (2048)
+
+/* HvFlushGuestPhysicalAddressList hypercall */
+union hv_gpa_page_range {
+   u64 address_space;
+   struct {
+   u64 additional_pages:11;
+   u64 largepage:1;

[PATCH V4 13/15] x86/Hyper-v: Add trace in the hyperv_nested_flush_guest_mapping_range()

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

This patch adds a trace log in hyperv_nested_flush_guest_mapping_range().
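
Once applied, the new event can be enabled at runtime through tracefs;
assuming the usual layout for the hyperv trace system, this is done by
writing 1 to
/sys/kernel/debug/tracing/events/hyperv/hyperv_nested_flush_guest_mapping_range/enable
and reading the output from trace_pipe.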

Signed-off-by: Lan Tianyu 
---
 arch/x86/hyperv/nested.c|  1 +
 arch/x86/include/asm/trace/hyperv.h | 14 ++
 2 files changed, 15 insertions(+)

diff --git a/arch/x86/hyperv/nested.c b/arch/x86/hyperv/nested.c
index a6fdfec63c7d..4850c74508f3 100644
--- a/arch/x86/hyperv/nested.c
+++ b/arch/x86/hyperv/nested.c
@@ -135,6 +135,7 @@ int hyperv_flush_guest_mapping_range(u64 as, struct hyperv_tlb_range *range)
else
ret = status;
 fault:
+   trace_hyperv_nested_flush_guest_mapping_range(as, ret);
return ret;
 }
 EXPORT_SYMBOL_GPL(hyperv_flush_guest_mapping_range);
diff --git a/arch/x86/include/asm/trace/hyperv.h b/arch/x86/include/asm/trace/hyperv.h
index 2e6245a023ef..ace464f09681 100644
--- a/arch/x86/include/asm/trace/hyperv.h
+++ b/arch/x86/include/asm/trace/hyperv.h
@@ -42,6 +42,20 @@ TRACE_EVENT(hyperv_nested_flush_guest_mapping,
TP_printk("address space %llx ret %d", __entry->as, __entry->ret)
);
 
+TRACE_EVENT(hyperv_nested_flush_guest_mapping_range,
+   TP_PROTO(u64 as, int ret),
+   TP_ARGS(as, ret),
+
+   TP_STRUCT__entry(
+   __field(u64, as)
+   __field(int, ret)
+   ),
+   TP_fast_assign(__entry->as = as;
+  __entry->ret = ret;
+   ),
+   TP_printk("address space %llx ret %d", __entry->as, __entry->ret)
+   );
+
 TRACE_EVENT(hyperv_send_ipi_mask,
TP_PROTO(const struct cpumask *cpus,
 int vector),
-- 
2.14.4



[PATCH V4 10/15] KVM: Add sptep pointer in the struct kvm_mmu_page

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

It's necessary to check whether an mmu page maps a last-level or large
page when adding it to the flush list. The page's "sptep" is needed
for that check, so add an sptep pointer to struct kvm_mmu_page.
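
A later patch in this series ("KVM/MMU: Replace tlb flush function
with range list flush function") uses the pointer for exactly this
check:

        if (sp->sptep && is_last_spte(*sp->sptep, sp->role.level))
                list_add(&sp->flush_link, flush_list);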

Signed-off-by: Lan Tianyu 
---
 arch/x86/include/asm/kvm_host.h | 1 +
 arch/x86/kvm/mmu.c  | 5 +
 arch/x86/kvm/paging_tmpl.h  | 2 ++
 3 files changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 8279235285f8..c986ebefc9ac 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -333,6 +333,7 @@ struct kvm_mmu_page {
int root_count;  /* Currently serving as active root */
unsigned int unsync_children;
struct kvm_rmap_head parent_ptes; /* rmap pointers to parent sptes */
+   u64 *sptep;
 
/* The page is obsolete if mmu_valid_gen != kvm->arch.mmu_valid_gen.  */
unsigned long mmu_valid_gen;
diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index e984a0067a43..393f4048dd7a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -3165,6 +3165,7 @@ static int __direct_map(struct kvm_vcpu *vcpu, int write, int map_writable,
pseudo_gfn = base_addr >> PAGE_SHIFT;
sp = kvm_mmu_get_page(vcpu, pseudo_gfn, iterator.addr,
  iterator.level - 1, 1, ACC_ALL);
+   sp->sptep = iterator.sptep;
 
link_shadow_page(vcpu, iterator.sptep, sp);
}
@@ -3602,6 +3603,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
sp = kvm_mmu_get_page(vcpu, 0, 0,
vcpu->arch.mmu->shadow_root_level, 1, ACC_ALL);
++sp->root_count;
+   sp->sptep = NULL;
spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu->root_hpa = __pa(sp->spt);
} else if (vcpu->arch.mmu->shadow_root_level == PT32E_ROOT_LEVEL) {
@@ -3618,6 +3620,7 @@ static int mmu_alloc_direct_roots(struct kvm_vcpu *vcpu)
i << 30, PT32_ROOT_LEVEL, 1, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
+   sp->sptep = NULL;
spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu->pae_root[i] = root | PT_PRESENT_MASK;
}
@@ -3658,6 +3661,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
vcpu->arch.mmu->shadow_root_level, 0, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
+   sp->sptep = NULL;
spin_unlock(&vcpu->kvm->mmu_lock);
vcpu->arch.mmu->root_hpa = root;
return 0;
@@ -3695,6 +3699,7 @@ static int mmu_alloc_shadow_roots(struct kvm_vcpu *vcpu)
  0, ACC_ALL);
root = __pa(sp->spt);
++sp->root_count;
+   sp->sptep = NULL;
spin_unlock(&vcpu->kvm->mmu_lock);
 
vcpu->arch.mmu->pae_root[i] = root | pm_mask;
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 6bdca39829bc..833e8855bbc9 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -633,6 +633,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
table_gfn = gw->table_gfn[it.level - 2];
sp = kvm_mmu_get_page(vcpu, table_gfn, addr, it.level-1,
  false, access);
+   sp->sptep = it.sptep;
}
 
/*
@@ -663,6 +664,7 @@ static int FNAME(fetch)(struct kvm_vcpu *vcpu, gva_t addr,
 
sp = kvm_mmu_get_page(vcpu, direct_gfn, addr, it.level-1,
  true, direct_access);
+   sp->sptep = it.sptep;
link_shadow_page(vcpu, it.sptep, sp);
}
 
-- 
2.14.4



[PATCH V4 7/15] KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

Originally, the tlb flush is done by slot_handle_level_range(). This
patch flushes the tlb directly in kvm_zap_gfn_range() when range
flush is available.

Signed-off-by: Lan Tianyu 
---
 arch/x86/kvm/mmu.c | 16 +---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index f3742ff4ec18..c4f7679f12c3 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5647,6 +5647,7 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
 {
struct kvm_memslots *slots;
struct kvm_memory_slot *memslot;
+   bool flush = false;
int i;
 
spin_lock(&kvm->mmu_lock);
@@ -5654,18 +5655,27 @@ void kvm_zap_gfn_range(struct kvm *kvm, gfn_t gfn_start, gfn_t gfn_end)
slots = __kvm_memslots(kvm, i);
kvm_for_each_memslot(memslot, slots) {
gfn_t start, end;
+   bool flush_tlb = true;
 
start = max(gfn_start, memslot->base_gfn);
end = min(gfn_end, memslot->base_gfn + memslot->npages);
if (start >= end)
continue;
 
-   slot_handle_level_range(kvm, memslot, kvm_zap_rmapp,
-   PT_PAGE_TABLE_LEVEL, PT_MAX_HUGEPAGE_LEVEL,
-   start, end - 1, true);
+   if (kvm_available_flush_tlb_with_range())
+   flush_tlb = false;
+
+   flush = slot_handle_level_range(kvm, memslot,
+   kvm_zap_rmapp, PT_PAGE_TABLE_LEVEL,
+   PT_MAX_HUGEPAGE_LEVEL, start,
+   end - 1, flush_tlb);
}
}
 
+   if (flush && kvm_available_flush_tlb_with_range())
+   kvm_flush_remote_tlbs_with_address(kvm, gfn_start,
+   gfn_end - gfn_start + 1);
+
spin_unlock(&kvm->mmu_lock);
 }
 
-- 
2.14.4



[PATCH V4 5/15] KVM/MMU: Move tlb flush in kvm_set_pte_rmapp() to kvm_mmu_notifier_change_pte()

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

This patch moves the tlb flush from kvm_set_pte_rmapp() to
kvm_mmu_notifier_change_pte() in order to avoid redundant tlb flushes.

Signed-off-by: Lan Tianyu 
---
 arch/x86/kvm/mmu.c  | 8 ++--
 virt/kvm/kvm_main.c | 5 -
 2 files changed, 6 insertions(+), 7 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index fd24a4dc45e9..5d3a180c57e2 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1781,10 +1781,7 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
}
}
 
-   if (need_flush)
-   kvm_flush_remote_tlbs(kvm);
-
-   return 0;
+   return need_flush;
 }
 
 struct slot_rmap_walk_iterator {
@@ -1920,8 +1917,7 @@ int kvm_unmap_hva_range(struct kvm *kvm, unsigned long start, unsigned long end)
 
 int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 {
-   kvm_handle_hva(kvm, hva, (unsigned long)&pte, kvm_set_pte_rmapp);
-   return 0;
+   return kvm_handle_hva(kvm, hva, (unsigned long)&pte, kvm_set_pte_rmapp);
 }
 
 static int kvm_age_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
index acc951cc2663..bd026d74541e 100644
--- a/virt/kvm/kvm_main.c
+++ b/virt/kvm/kvm_main.c
@@ -354,7 +354,10 @@ static void kvm_mmu_notifier_change_pte(struct mmu_notifier *mn,
idx = srcu_read_lock(&kvm->srcu);
spin_lock(&kvm->mmu_lock);
kvm->mmu_notifier_seq++;
-   kvm_set_spte_hva(kvm, address, pte);
+
+   if (kvm_set_spte_hva(kvm, address, pte))
+   kvm_flush_remote_tlbs(kvm);
+
spin_unlock(&kvm->mmu_lock);
srcu_read_unlock(&kvm->srcu, idx);
 }
-- 
2.14.4



[PATCH V4 9/15] KVM: Add flush_link and parent_pte in the struct kvm_mmu_page

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

The PV EPT tlb flush function will accept a list of flush ranges and
use struct kvm_mmu_page as the list entry.
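
The intended usage, taken from the range list flush patch later in
this series, looks roughly like:

        LIST_HEAD(flush_list);

        /* queue pages that need flushing ... */
        list_add(&sp->flush_link, &flush_list);

        /* ... then flush them all with a single ranged request */
        kvm_flush_remote_tlbs_with_list(kvm, &flush_list);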

Signed-off-by: Lan Tianyu 
---
 arch/x86/include/asm/kvm_host.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 19985c602ed6..8279235285f8 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -316,6 +316,7 @@ struct kvm_rmap_head {
 
 struct kvm_mmu_page {
struct list_head link;
+   struct list_head flush_link;
struct hlist_node hash_link;
bool unsync;
 
-- 
2.14.4



[PATCH V4 8/15] KVM/MMU: Flush tlb directly in kvm_mmu_zap_collapsible_spte()

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

kvm_mmu_zap_collapsible_spte() returns a flush request to
slot_handle_leaf(), and the latter flushes on demand. When range flush
is available, make kvm_mmu_zap_collapsible_spte() flush the tlb with
the range directly to avoid passing the flush request back to
slot_handle_leaf().

Signed-off-by: Lan Tianyu 
---
 arch/x86/kvm/mmu.c | 8 +++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c4f7679f12c3..e984a0067a43 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -5743,7 +5743,13 @@ static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
!kvm_is_reserved_pfn(pfn) &&
PageTransCompoundMap(pfn_to_page(pfn))) {
drop_spte(kvm, sptep);
-   need_tlb_flush = 1;
+
+   if (kvm_available_flush_tlb_with_range())
+   kvm_flush_remote_tlbs_with_address(kvm, sp->gfn,
+   KVM_PAGES_PER_HPAGE(sp->role.level));
+   else
+   need_tlb_flush = 1;
+
goto restart;
}
}
-- 
2.14.4



[PATCH V4 6/15] KVM/MMU: Flush tlb directly in the kvm_set_pte_rmapp()

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

This patch flushes the tlb directly in kvm_set_pte_rmapp() and returns
0 when range flush is available.

Signed-off-by: Lan Tianyu 
---
 arch/x86/kvm/mmu.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 5d3a180c57e2..f3742ff4ec18 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1781,6 +1781,11 @@ static int kvm_set_pte_rmapp(struct kvm *kvm, struct kvm_rmap_head *rmap_head,
}
}
 
+   if (need_flush && kvm_available_flush_tlb_with_range()) {
+   kvm_flush_remote_tlbs_with_address(kvm, gfn, 1);
+   return 0;
+   }
+
return need_flush;
 }
 
-- 
2.14.4



[PATCH V4 11/15] KVM/MMU: Replace tlb flush function with range list flush function

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

This patch uses the range list flush function in mmu_sync_children(),
kvm_mmu_commit_zap_page() and FNAME(sync_page)().

Signed-off-by: Lan Tianyu 
---
 arch/x86/kvm/mmu.c | 26 +++---
 arch/x86/kvm/paging_tmpl.h |  5 -
 2 files changed, 27 insertions(+), 4 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index 393f4048dd7a..69e4cff1115d 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1100,6 +1100,13 @@ static void update_gfn_disallow_lpage_count(struct kvm_memory_slot *slot,
}
 }
 
+static void kvm_mmu_queue_flush_request(struct kvm_mmu_page *sp,
+   struct list_head *flush_list)
+{
+   if (sp->sptep && is_last_spte(*sp->sptep, sp->role.level))
+   list_add(>flush_link, flush_list);
+}
+
 void kvm_mmu_gfn_disallow_lpage(struct kvm_memory_slot *slot, gfn_t gfn)
 {
update_gfn_disallow_lpage_count(slot, gfn, 1);
@@ -2372,12 +2379,16 @@ static void mmu_sync_children(struct kvm_vcpu *vcpu,
 
while (mmu_unsync_walk(parent, &pages)) {
bool protected = false;
+   LIST_HEAD(flush_list);
 
-   for_each_sp(pages, sp, parents, i)
+   for_each_sp(pages, sp, parents, i) {
protected |= rmap_write_protect(vcpu, sp->gfn);
+   kvm_mmu_queue_flush_request(sp, &flush_list);
+   }
 
if (protected) {
-   kvm_flush_remote_tlbs(vcpu->kvm);
+   kvm_flush_remote_tlbs_with_list(vcpu->kvm,
+   &flush_list);
flush = false;
}
 
@@ -2713,6 +2724,7 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
struct list_head *invalid_list)
 {
struct kvm_mmu_page *sp, *nsp;
+   LIST_HEAD(flush_list);
 
if (list_empty(invalid_list))
return;
@@ -2726,7 +2738,15 @@ static void kvm_mmu_commit_zap_page(struct kvm *kvm,
 * In addition, kvm_flush_remote_tlbs waits for all vcpus to exit
 * guest mode and/or lockless shadow page table walks.
 */
-   kvm_flush_remote_tlbs(kvm);
+   if (kvm_available_flush_tlb_with_range()) {
+   list_for_each_entry(sp, invalid_list, link)
+   kvm_mmu_queue_flush_request(sp, &flush_list);
+
+   if (!list_empty(&flush_list))
+   kvm_flush_remote_tlbs_with_list(kvm, &flush_list);
+   } else {
+   kvm_flush_remote_tlbs(kvm);
+   }
 
list_for_each_entry_safe(sp, nsp, invalid_list, link) {
WARN_ON(!sp->role.invalid || sp->root_count);
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 833e8855bbc9..e44737ce6bad 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -973,6 +973,7 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
bool host_writable;
gpa_t first_pte_gpa;
int set_spte_ret = 0;
+   LIST_HEAD(flush_list);
 
/* direct kvm_mmu_page can not be unsync. */
BUG_ON(sp->role.direct);
@@ -1033,10 +1034,12 @@ static int FNAME(sync_page)(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
 pte_access, PT_PAGE_TABLE_LEVEL,
 gfn, spte_to_pfn(sp->spt[i]),
 true, false, host_writable);
+   if (set_spte_ret && kvm_available_flush_tlb_with_range())
+   kvm_mmu_queue_flush_request(sp, &flush_list);
}
 
if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH)
-   kvm_flush_remote_tlbs(vcpu->kvm);
+   kvm_flush_remote_tlbs_with_list(vcpu->kvm, &flush_list);
 
return nr_present;
 }
-- 
2.14.4



[PATCH V4 4/15] KVM: Make kvm_set_spte_hva() return int

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

This patch makes kvm_set_spte_hva() return int so that the caller can
check the return value to determine whether to flush the tlb.
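
A non-zero return value tells the caller that a tlb flush is needed;
the change_pte notifier later in this series consumes it as:

        if (kvm_set_spte_hva(kvm, address, pte))
                kvm_flush_remote_tlbs(kvm);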

Signed-off-by: Lan Tianyu 
---
 arch/arm/include/asm/kvm_host.h | 2 +-
 arch/arm64/include/asm/kvm_host.h   | 2 +-
 arch/mips/include/asm/kvm_host.h| 2 +-
 arch/mips/kvm/mmu.c | 3 ++-
 arch/powerpc/include/asm/kvm_host.h | 2 +-
 arch/powerpc/kvm/book3s.c   | 3 ++-
 arch/powerpc/kvm/e500_mmu_host.c| 3 ++-
 arch/x86/include/asm/kvm_host.h | 2 +-
 arch/x86/kvm/mmu.c  | 3 ++-
 virt/kvm/arm/mmu.c  | 6 --
 10 files changed, 17 insertions(+), 11 deletions(-)

diff --git a/arch/arm/include/asm/kvm_host.h b/arch/arm/include/asm/kvm_host.h
index 3ad482d2f1eb..efb820bdad2c 100644
--- a/arch/arm/include/asm/kvm_host.h
+++ b/arch/arm/include/asm/kvm_host.h
@@ -225,7 +225,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm,
unsigned long start, unsigned long end);
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
 unsigned long kvm_arm_num_regs(struct kvm_vcpu *vcpu);
 int kvm_arm_copy_reg_indices(struct kvm_vcpu *vcpu, u64 __user *indices);
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 3d6d7336f871..2e506c0b3eb7 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -358,7 +358,7 @@ int __kvm_arm_vcpu_set_events(struct kvm_vcpu *vcpu,
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm,
unsigned long start, unsigned long end);
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
diff --git a/arch/mips/include/asm/kvm_host.h b/arch/mips/include/asm/kvm_host.h
index 2c1c53d12179..71c3f21d80d5 100644
--- a/arch/mips/include/asm/kvm_host.h
+++ b/arch/mips/include/asm/kvm_host.h
@@ -933,7 +933,7 @@ enum kvm_mips_fault_result kvm_trap_emul_gva_fault(struct kvm_vcpu *vcpu,
 #define KVM_ARCH_WANT_MMU_NOTIFIER
 int kvm_unmap_hva_range(struct kvm *kvm,
unsigned long start, unsigned long end);
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
 int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
 
diff --git a/arch/mips/kvm/mmu.c b/arch/mips/kvm/mmu.c
index d8dcdb350405..97e538a8c1be 100644
--- a/arch/mips/kvm/mmu.c
+++ b/arch/mips/kvm/mmu.c
@@ -551,7 +551,7 @@ static int kvm_set_spte_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
   (pte_dirty(old_pte) && !pte_dirty(hva_pte));
 }
 
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 {
unsigned long end = hva + PAGE_SIZE;
int ret;
@@ -559,6 +559,7 @@ void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
ret = handle_hva_to_gpa(kvm, hva, end, &kvm_set_spte_handler, &pte);
if (ret)
kvm_mips_callbacks->flush_shadow_all(kvm);
+   return 0;
 }
 
 static int kvm_age_hva_handler(struct kvm *kvm, gfn_t gfn, gfn_t gfn_end,
diff --git a/arch/powerpc/include/asm/kvm_host.h b/arch/powerpc/include/asm/kvm_host.h
index fac6f631ed29..ab23379c53a9 100644
--- a/arch/powerpc/include/asm/kvm_host.h
+++ b/arch/powerpc/include/asm/kvm_host.h
@@ -72,7 +72,7 @@ extern int kvm_unmap_hva_range(struct kvm *kvm,
   unsigned long start, unsigned long end);
extern int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end);
 extern int kvm_test_age_hva(struct kvm *kvm, unsigned long hva);
-extern void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
+extern int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte);
 
 #define HPTEG_CACHE_NUM(1 << 15)
 #define HPTEG_HASH_BITS_PTE13
diff --git a/arch/powerpc/kvm/book3s.c b/arch/powerpc/kvm/book3s.c
index fd9893bc7aa1..437613bb609a 100644
--- a/arch/powerpc/kvm/book3s.c
+++ b/arch/powerpc/kvm/book3s.c
@@ -850,9 +850,10 @@ int kvm_test_age_hva(struct kvm *kvm, unsigned long hva)
return kvm->arch.kvm_ops->test_age_hva(kvm, hva);
 }
 
-void kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
+int kvm_set_spte_hva(struct kvm *kvm, unsigned long hva, pte_t pte)
 {
kvm->arch.kvm_ops->set_spte_hva(kvm, hva, pte);
+   return 0;
 }
 
 void kvmppc_mmu_destroy(struct kvm_vcpu *vcpu)
diff --git a/arch/powerpc/kvm/e500_mmu_host.c 

[PATCH V4 1/15] KVM: Add tlb_remote_flush_with_range callback in kvm_x86_ops

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

Add a flush range callback in kvm_x86_ops so that each platform can
register its associated function. The parameter "kvm_tlb_range"
accepts a single range and a flush list which contains a list of
ranges.

Signed-off-by: Lan Tianyu 
---
Change since v1:
   Change "end_gfn" to "pages" to avoid confusion as to whether
   "end_gfn" is inclusive or exclusive.
---
 arch/x86/include/asm/kvm_host.h | 8 
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
index 4b09d4aa9bf4..fea95aa77319 100644
--- a/arch/x86/include/asm/kvm_host.h
+++ b/arch/x86/include/asm/kvm_host.h
@@ -439,6 +439,12 @@ struct kvm_mmu {
u64 pdptrs[4]; /* pae */
 };
 
+struct kvm_tlb_range {
+   u64 start_gfn;
+   u64 pages;
+   struct list_head *flush_list;
+};
+
 enum pmc_type {
KVM_PMC_GP = 0,
KVM_PMC_FIXED,
@@ -1039,6 +1045,8 @@ struct kvm_x86_ops {
 
void (*tlb_flush)(struct kvm_vcpu *vcpu, bool invalidate_gpa);
int  (*tlb_remote_flush)(struct kvm *kvm);
+   int  (*tlb_remote_flush_with_range)(struct kvm *kvm,
+   struct kvm_tlb_range *range);
 
/*
 * Flush any TLB entries associated with the given GVA.
-- 
2.14.4



[PATCH V4 2/15] KVM/MMU: Add tlb flush with range helper function

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

This patch adds wrapper functions for the tlb_remote_flush_with_range
callback.
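
Callers can then flush a given number of pages starting at a gfn, as
the next patch does for whole memslots:

        kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
                                           memslot->npages);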

Signed-off-by: Lan Tianyu 
---
Change since V3:
   Remove code of updating "tlbs_dirty"
Change since V2:
   Fix comment in the kvm_flush_remote_tlbs_with_range()
---
 arch/x86/kvm/mmu.c | 40 
 1 file changed, 40 insertions(+)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index c73d9f650de7..ff656d85903a 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -264,6 +264,46 @@ static void mmu_spte_set(u64 *sptep, u64 spte);
 static union kvm_mmu_page_role
 kvm_mmu_calc_root_page_role(struct kvm_vcpu *vcpu);
 
+
+static inline bool kvm_available_flush_tlb_with_range(void)
+{
+   return kvm_x86_ops->tlb_remote_flush_with_range;
+}
+
+static void kvm_flush_remote_tlbs_with_range(struct kvm *kvm,
+   struct kvm_tlb_range *range)
+{
+   int ret = -ENOTSUPP;
+
+   if (range && kvm_x86_ops->tlb_remote_flush_with_range)
+   ret = kvm_x86_ops->tlb_remote_flush_with_range(kvm, range);
+
+   if (ret)
+   kvm_flush_remote_tlbs(kvm);
+}
+
+static void kvm_flush_remote_tlbs_with_list(struct kvm *kvm,
+   struct list_head *flush_list)
+{
+   struct kvm_tlb_range range;
+
+   range.flush_list = flush_list;
+
kvm_flush_remote_tlbs_with_range(kvm, &range);
+}
+
+static void kvm_flush_remote_tlbs_with_address(struct kvm *kvm,
+   u64 start_gfn, u64 pages)
+{
+   struct kvm_tlb_range range;
+
+   range.start_gfn = start_gfn;
+   range.pages = pages;
+   range.flush_list = NULL;
+
+   kvm_flush_remote_tlbs_with_range(kvm, &range);
+}
+
 void kvm_mmu_set_mmio_spte_mask(u64 mmio_mask, u64 mmio_value)
 {
BUG_ON((mmio_mask & mmio_value) != mmio_value);
-- 
2.14.4



[PATCH V4 3/15] KVM: Replace old tlb flush function with new one to flush a specified range.

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

This patch replaces kvm_flush_remote_tlbs() with
kvm_flush_remote_tlbs_with_address() in some functions without
changing the logic.

Signed-off-by: Lan Tianyu 
---
 arch/x86/kvm/mmu.c | 31 +--
 arch/x86/kvm/paging_tmpl.h |  3 ++-
 2 files changed, 23 insertions(+), 11 deletions(-)

diff --git a/arch/x86/kvm/mmu.c b/arch/x86/kvm/mmu.c
index ff656d85903a..9b9db36df103 100644
--- a/arch/x86/kvm/mmu.c
+++ b/arch/x86/kvm/mmu.c
@@ -1490,8 +1490,12 @@ static bool __drop_large_spte(struct kvm *kvm, u64 *sptep)
 
 static void drop_large_spte(struct kvm_vcpu *vcpu, u64 *sptep)
 {
-   if (__drop_large_spte(vcpu->kvm, sptep))
-   kvm_flush_remote_tlbs(vcpu->kvm);
+   if (__drop_large_spte(vcpu->kvm, sptep)) {
+   struct kvm_mmu_page *sp = page_header(__pa(sptep));
+
+   kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
+   KVM_PAGES_PER_HPAGE(sp->role.level));
+   }
 }
 
 /*
@@ -1959,7 +1963,8 @@ static void rmap_recycle(struct kvm_vcpu *vcpu, u64 *spte, gfn_t gfn)
rmap_head = gfn_to_rmap(vcpu->kvm, gfn, sp);
 
kvm_unmap_rmapp(vcpu->kvm, rmap_head, NULL, gfn, sp->role.level, 0);
-   kvm_flush_remote_tlbs(vcpu->kvm);
+   kvm_flush_remote_tlbs_with_address(vcpu->kvm, sp->gfn,
+   KVM_PAGES_PER_HPAGE(sp->role.level));
 }
 
 int kvm_age_hva(struct kvm *kvm, unsigned long start, unsigned long end)
@@ -2475,7 +2480,7 @@ static struct kvm_mmu_page *kvm_mmu_get_page(struct kvm_vcpu *vcpu,
account_shadowed(vcpu->kvm, sp);
if (level == PT_PAGE_TABLE_LEVEL &&
  rmap_write_protect(vcpu, gfn))
-   kvm_flush_remote_tlbs(vcpu->kvm);
+   kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn, 1);
 
if (level > PT_PAGE_TABLE_LEVEL && need_sync)
flush |= kvm_sync_pages(vcpu, gfn, _list);
@@ -2595,7 +2600,7 @@ static void validate_direct_spte(struct kvm_vcpu *vcpu, u64 *sptep,
return;
 
drop_parent_pte(child, sptep);
-   kvm_flush_remote_tlbs(vcpu->kvm);
+   kvm_flush_remote_tlbs_with_address(vcpu->kvm, child->gfn, 1);
}
 }
 
@@ -3019,8 +3024,10 @@ static int mmu_set_spte(struct kvm_vcpu *vcpu, u64 *sptep, unsigned pte_access,
ret = RET_PF_EMULATE;
kvm_make_request(KVM_REQ_TLB_FLUSH, vcpu);
}
+
if (set_spte_ret & SET_SPTE_NEED_REMOTE_TLB_FLUSH || flush)
-   kvm_flush_remote_tlbs(vcpu->kvm);
+   kvm_flush_remote_tlbs_with_address(vcpu->kvm, gfn,
+   KVM_PAGES_PER_HPAGE(level));
 
if (unlikely(is_mmio_spte(*sptep)))
ret = RET_PF_EMULATE;
@@ -5695,7 +5702,8 @@ void kvm_mmu_slot_remove_write_access(struct kvm *kvm,
 * on PT_WRITABLE_MASK anymore.
 */
if (flush)
-   kvm_flush_remote_tlbs(kvm);
+   kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+   memslot->npages);
 }
 
 static bool kvm_mmu_zap_collapsible_spte(struct kvm *kvm,
@@ -5759,7 +5767,8 @@ void kvm_mmu_slot_leaf_clear_dirty(struct kvm *kvm,
 * dirty_bitmap.
 */
if (flush)
-   kvm_flush_remote_tlbs(kvm);
+   kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+   memslot->npages);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_leaf_clear_dirty);
 
@@ -5777,7 +5786,8 @@ void kvm_mmu_slot_largepage_remove_write_access(struct kvm *kvm,
lockdep_assert_held(&kvm->slots_lock);
 
if (flush)
-   kvm_flush_remote_tlbs(kvm);
+   kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+   memslot->npages);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_largepage_remove_write_access);
 
@@ -5794,7 +5804,8 @@ void kvm_mmu_slot_set_dirty(struct kvm *kvm,
 
/* see kvm_mmu_slot_leaf_clear_dirty */
if (flush)
-   kvm_flush_remote_tlbs(kvm);
+   kvm_flush_remote_tlbs_with_address(kvm, memslot->base_gfn,
+   memslot->npages);
 }
 EXPORT_SYMBOL_GPL(kvm_mmu_slot_set_dirty);
 
diff --git a/arch/x86/kvm/paging_tmpl.h b/arch/x86/kvm/paging_tmpl.h
index 7cf2185b7eb5..6bdca39829bc 100644
--- a/arch/x86/kvm/paging_tmpl.h
+++ b/arch/x86/kvm/paging_tmpl.h
@@ -894,7 +894,8 @@ static void FNAME(invlpg)(struct kvm_vcpu *vcpu, gva_t gva, hpa_t root_hpa)
pte_gpa += (sptep - sp->spt) * sizeof(pt_element_t);
 
if (mmu_page_zap_pte(vcpu->kvm, sp, sptep))
-   kvm_flush_remote_tlbs(vcpu->kvm);
+   kvm_flush_remote_tlbs_with_address(vcpu->kvm,
+   sp->gfn, KVM_PAGES_PER_HPAGE(sp->role.level));
 

[PATCH V4 00/15] x86/KVM/Hyper-v: Add HV ept tlb range flush hypercall support in KVM

2018-10-13 Thread lantianyu1986
From: Lan Tianyu 

For nested memory virtualization, Hyper-V doesn't write-protect the
L1 hypervisor's EPT page directory and page table nodes to track
changes; instead, it relies on the guest to report changes via the
HvFlushGuestAddressList hypercall. The HvFlushGuestAddressList
hypercall provides a way to flush the EPT page table for ranges
specified by the L1 hypervisor.

If the L1 hypervisor uses INVEPT or the HvFlushGuestAddressSpace
hypercall to flush the EPT tlb, Hyper-V invalidates the associated EPT
shadow page table and syncs L1's EPT table when the next EPT page
fault is triggered. The HvFlushGuestAddressList hypercall helps to
avoid such redundant EPT page faults and shadow page table
synchronization.


Change since v3:
   1) Remove the code updating "tlbs_dirty" in
      kvm_flush_remote_tlbs_with_range().
   2) Remove the direct tlb flush in kvm_handle_hva_range().
   3) Move the tlb flush in kvm_set_pte_rmapp() to
      kvm_mmu_notifier_change_pte().
   4) Combine Vitaly's "don't pass EPT configuration info to
      vmx_hv_remote_flush_tlb()" fix.

Change since v2:
   1) Fix the comment in kvm_flush_remote_tlbs_with_range().
   2) Move HV_MAX_FLUSH_PAGES and HV_MAX_FLUSH_REP_COUNT to
      hyperv-tlfs.h.
   3) Calculate HV_MAX_FLUSH_REP_COUNT in the macro definition.
   4) Use HV_MAX_FLUSH_REP_COUNT to define the length of gpa_list in
      struct hv_guest_mapping_flush_list.

Change since v1:
   1) Convert "end_gfn" of struct kvm_tlb_range to "pages" in order
      to avoid confusion as to whether "end_gfn" is inclusive or
      exclusive.
   2) Add a hyperv tlb range struct and replace the kvm tlb range
      struct with it in order to avoid using a kvm struct in the
      hyperv code directly.



Lan Tianyu (15):
  KVM: Add tlb_remote_flush_with_range callback in kvm_x86_ops
  KVM/MMU: Add tlb flush with range helper function
  KVM: Replace old tlb flush function with new one to flush a specified
range.
  KVM: Make kvm_set_spte_hva() return int
  KVM/MMU: Move tlb flush in kvm_set_pte_rmapp() to
kvm_mmu_notifier_change_pte()
  KVM/MMU: Flush tlb directly in the kvm_set_pte_rmapp()
  KVM/MMU: Flush tlb directly in the kvm_zap_gfn_range()
  KVM/MMU: Flush tlb directly in kvm_mmu_zap_collapsible_spte()
  KVM: Add flush_link and parent_pte in the struct kvm_mmu_page
  KVM: Add sptep pointer in the struct kvm_mmu_page
  KVM/MMU: Replace tlb flush function with range list flush function
  x86/hyper-v: Add HvFlushGuestAddressList hypercall support
  x86/Hyper-v: Add trace in the
hyperv_nested_flush_guest_mapping_range()
  KVM/VMX: Change hv flush logic when ept tables are mismatched.
  KVM/VMX: Add hv tlb range flush support

 arch/arm/include/asm/kvm_host.h |   2 +-
 arch/arm64/include/asm/kvm_host.h   |   2 +-
 arch/mips/include/asm/kvm_host.h|   2 +-
 arch/mips/kvm/mmu.c |   3 +-
 arch/powerpc/include/asm/kvm_host.h |   2 +-
 arch/powerpc/kvm/book3s.c   |   3 +-
 arch/powerpc/kvm/e500_mmu_host.c|   3 +-
 arch/x86/hyperv/nested.c|  85 ++
 arch/x86/include/asm/hyperv-tlfs.h  |  32 +
 arch/x86/include/asm/kvm_host.h |  12 +++-
 arch/x86/include/asm/mshyperv.h |  16 +
 arch/x86/include/asm/trace/hyperv.h |  14 
 arch/x86/kvm/mmu.c  | 138 ++--
 arch/x86/kvm/paging_tmpl.h  |  10 ++-
 arch/x86/kvm/vmx.c  |  70 +++---
 virt/kvm/arm/mmu.c  |   6 +-
 virt/kvm/kvm_main.c |   5 +-
 17 files changed, 360 insertions(+), 45 deletions(-)

-- 
2.14.4



[PATCH v2] vmbus/ring_buffer: remove some redundant helper function.

2018-01-25 Thread lantianyu1986
From: Tianyu Lan 

Some hv_get/set* helper functions in the ring_buffer code are only
called once or not used at all. This patch cleans them up.

Signed-off-by: Tianyu Lan 
---
Change since v1:
Clean up more hv_get/set* functions.
---
 drivers/hv/ring_buffer.c | 49 
 1 file changed, 4 insertions(+), 45 deletions(-)

diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
index 12eb8ca..af3001f 100644
--- a/drivers/hv/ring_buffer.c
+++ b/drivers/hv/ring_buffer.c
@@ -78,46 +78,6 @@ static void hv_signal_on_write(u32 old_write, struct vmbus_channel *channel)
vmbus_setevent(channel);
 }
 
-/* Get the next write location for the specified ring buffer. */
-static inline u32
-hv_get_next_write_location(struct hv_ring_buffer_info *ring_info)
-{
-   u32 next = ring_info->ring_buffer->write_index;
-
-   return next;
-}
-
-/* Set the next write location for the specified ring buffer. */
-static inline void
-hv_set_next_write_location(struct hv_ring_buffer_info *ring_info,
-u32 next_write_location)
-{
-   ring_info->ring_buffer->write_index = next_write_location;
-}
-
-/* Set the next read location for the specified ring buffer. */
-static inline void
-hv_set_next_read_location(struct hv_ring_buffer_info *ring_info,
-   u32 next_read_location)
-{
-   ring_info->ring_buffer->read_index = next_read_location;
-   ring_info->priv_read_index = next_read_location;
-}
-
-/* Get the size of the ring buffer. */
-static inline u32
-hv_get_ring_buffersize(const struct hv_ring_buffer_info *ring_info)
-{
-   return ring_info->ring_datasize;
-}
-
-/* Get the read and write indices as u64 of the specified ring buffer. */
-static inline u64
-hv_get_ring_bufferindices(struct hv_ring_buffer_info *ring_info)
-{
-   return (u64)ring_info->ring_buffer->write_index << 32;
-}
-
 /*
  * Helper routine to copy from source to ring buffer.
  * Assume there is enough room. Handles wrap-around in dest case only!!
@@ -129,7 +89,7 @@ static u32 hv_copyto_ringbuffer(
u32 srclen)
 {
void *ring_buffer = hv_get_ring_buffer(ring_info);
-   u32 ring_buffer_size = hv_get_ring_buffersize(ring_info);
+   u32 ring_buffer_size = ring_info->ring_datasize;
 
memcpy(ring_buffer + start_write_offset, src, srclen);
 
@@ -252,8 +212,7 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
}
 
/* Write to the ring buffer */
-   next_write_location = hv_get_next_write_location(outring_info);
-
+   next_write_location = outring_info->ring_buffer->write_index;
old_write = next_write_location;
 
for (i = 0; i < kv_count; i++) {
@@ -264,7 +223,7 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
}
 
/* Set previous packet start */
-   prev_indices = hv_get_ring_bufferindices(outring_info);
+   prev_indices = (u64)outring_info->ring_buffer->write_index << 32;
 
next_write_location = hv_copyto_ringbuffer(outring_info,
 next_write_location,
@@ -275,7 +234,7 @@ int hv_ringbuffer_write(struct vmbus_channel *channel,
virt_mb();
 
/* Now, update the write location */
-   hv_set_next_write_location(outring_info, next_write_location);
+   outring_info->ring_buffer->write_index = next_write_location;
 
 
spin_unlock_irqrestore(&outring_info->ring_lock, flags);
-- 
2.7.4



[Patch] vmbus: Simplify hv_get_next_write_location() function

2018-01-23 Thread lantianyu1986
From: Tianyu Lan 

The "next" variable is redundant in hv_get_next_write_location().
This patch is to remove it and return write_index directly.

Signed-off-by: Tianyu Lan 
---
 drivers/hv/ring_buffer.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
index 12eb8ca..71558e7 100644
--- a/drivers/hv/ring_buffer.c
+++ b/drivers/hv/ring_buffer.c
@@ -82,9 +82,7 @@ static void hv_signal_on_write(u32 old_write, struct vmbus_channel *channel)
 static inline u32
 hv_get_next_write_location(struct hv_ring_buffer_info *ring_info)
 {
-   u32 next = ring_info->ring_buffer->write_index;
-
-   return next;
+   return ring_info->ring_buffer->write_index;
 }
 
 /* Set the next write location for the specified ring buffer. */
-- 
2.7.4
