Re: [PATCH 1/2] kvm: arm: Clean up the checking for huge mapping

2019-04-11 Thread Suzuki K Poulose

On 04/11/2019 02:48 AM, Zenghui Yu wrote:


On 2019/4/10 23:23, Suzuki K Poulose wrote:

If we are checking whether stage2 can map PAGE_SIZE,
we don't have to do the boundary checks as both the host
VMA and the guest memslots are page aligned. Bail out
early in that case.

Cc: Christoffer Dall 
Cc: Marc Zyngier 
Signed-off-by: Suzuki K Poulose 
---
  virt/kvm/arm/mmu.c | 4 ++++
  1 file changed, 4 insertions(+)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index a39dcfd..6d73322 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	hva_t uaddr_start, uaddr_end;
 	size_t size;
 
+	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
+	if (map_size == PAGE_SIZE)
+		return true;
+
 	size = memslot->npages * PAGE_SIZE;
 
 	gpa_start = memslot->base_gfn << PAGE_SHIFT;

We can do a comment clean up as well in this patch.

s/<< PAGE_SIZE/<< PAGE_SHIFT/


Sure, I missed that. Will fix it in the next version.

Cheers
Suzuki
___
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm


Re: [PATCH 1/2] kvm: arm: Clean up the checking for huge mapping

2019-04-10 Thread Zenghui Yu



On 2019/4/10 23:23, Suzuki K Poulose wrote:

If we are checking whether stage2 can map PAGE_SIZE,
we don't have to do the boundary checks as both the host
VMA and the guest memslots are page aligned. Bail out
early in that case.

Cc: Christoffer Dall 
Cc: Marc Zyngier 
Signed-off-by: Suzuki K Poulose 
---
  virt/kvm/arm/mmu.c | 4 ++++
  1 file changed, 4 insertions(+)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index a39dcfd..6d73322 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	hva_t uaddr_start, uaddr_end;
 	size_t size;
 
+	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
+	if (map_size == PAGE_SIZE)
+		return true;
+
 	size = memslot->npages * PAGE_SIZE;
 
 	gpa_start = memslot->base_gfn << PAGE_SHIFT;



We can do a comment clean up as well in this patch.

s/<< PAGE_SIZE/<< PAGE_SHIFT/


thanks,
zenghui



[PATCH 1/2] kvm: arm: Clean up the checking for huge mapping

2019-04-10 Thread Suzuki K Poulose
If we are checking whether stage2 can map PAGE_SIZE,
we don't have to do the boundary checks as both the host
VMA and the guest memslots are page aligned. Bail out
early in that case.

Cc: Christoffer Dall 
Cc: Marc Zyngier 
Signed-off-by: Suzuki K Poulose 
---
 virt/kvm/arm/mmu.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index a39dcfd..6d73322 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
 	hva_t uaddr_start, uaddr_end;
 	size_t size;
 
+	/* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
+	if (map_size == PAGE_SIZE)
+		return true;
+
 	size = memslot->npages * PAGE_SIZE;
 
 	gpa_start = memslot->base_gfn << PAGE_SHIFT;
-- 
2.7.4
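[Editor's note] The effect of the early return can be sketched with a small userspace model of the alignment check. This is a simplified illustration only, not the kernel function: `supports_block_mapping`, its constants, and the addresses in the comments are made up for the example, and the block-fit test is a rough stand-in for the real stage2 boundary checks.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/*
 * Simplified model of the stage2 huge-mapping check (not the kernel
 * code): a block mapping of map_size is only usable when the guest PA
 * and the host VA share the same offset within a map_size-sized block,
 * and the range contains at least one whole block.
 */
bool supports_block_mapping(uint64_t gpa_start, uint64_t uaddr_start,
			    uint64_t size, uint64_t map_size)
{
	/*
	 * The early return added by the patch: memslots and VMAs are
	 * PAGE_SIZE aligned by construction, so a page-sized mapping
	 * needs no boundary checks at all.
	 */
	if (map_size == PAGE_SIZE)
		return true;

	uint64_t gpa_end = gpa_start + size;

	/* Guest PA and host VA must sit at the same offset in a block. */
	if ((gpa_start & (map_size - 1)) != (uaddr_start & (map_size - 1)))
		return false;

	/* The range must contain at least one fully aligned block. */
	uint64_t first_block = (gpa_start + map_size - 1) & ~(map_size - 1);
	return first_block + map_size <= gpa_end;
}
```

With map_size larger than PAGE_SIZE (e.g. a 2MB block), mismatched offsets within the block rule out a block mapping; with map_size == PAGE_SIZE the function returns true before any of those checks run, which is exactly the shortcut the patch takes.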
