On 04/11/2019 02:48 AM, Zenghui Yu wrote:

On 2019/4/10 23:23, Suzuki K Poulose wrote:
If we are checking whether stage2 can map PAGE_SIZE,
we don't need the boundary checks, as both the host
VMA and the guest memslot are page aligned. Bail out
early in that case.

Cc: Christoffer Dall <christoffer.d...@arm.com>
Cc: Marc Zyngier <marc.zyng...@arm.com>
Signed-off-by: Suzuki K Poulose <suzuki.poul...@arm.com>
---
  virt/kvm/arm/mmu.c | 4 ++++
  1 file changed, 4 insertions(+)

diff --git a/virt/kvm/arm/mmu.c b/virt/kvm/arm/mmu.c
index a39dcfd..6d73322 100644
--- a/virt/kvm/arm/mmu.c
+++ b/virt/kvm/arm/mmu.c
@@ -1624,6 +1624,10 @@ static bool fault_supports_stage2_huge_mapping(struct kvm_memory_slot *memslot,
      hva_t uaddr_start, uaddr_end;
      size_t size;
+    /* The memslot and the VMA are guaranteed to be aligned to PAGE_SIZE */
+    if (map_size == PAGE_SIZE)
+        return true;
+
      size = memslot->npages * PAGE_SIZE;
      gpa_start = memslot->base_gfn << PAGE_SHIFT;

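To illustrate why the early return is safe, here is a simplified, self-contained sketch of the alignment logic that fault_supports_stage2_huge_mapping() performs (the real kernel function is more involved; the constants and helper name below are illustrative only). A mapping of `map_size` is only usable if the guest (gpa) and userspace (hva) start addresses share the same offset within a `map_size` block and the slot boundaries do not cut through such a block. Since memslots and VMAs are always PAGE_SIZE aligned, a PAGE_SIZE mapping trivially satisfies these checks, so the patch returns true up front:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative values; the real ones come from the kernel headers. */
#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/*
 * Simplified sketch (not the full kernel logic) of the stage2
 * huge-mapping boundary check.
 */
static bool supports_huge_mapping(uint64_t gpa_start, uint64_t uaddr_start,
				  uint64_t size, uint64_t map_size)
{
	/* The patch's early exit: memslots and VMAs are PAGE_SIZE
	 * aligned, so a PAGE_SIZE mapping always fits. */
	if (map_size == PAGE_SIZE)
		return true;

	/* gpa and hva must share the same offset within a map_size block. */
	if ((gpa_start ^ uaddr_start) & (map_size - 1))
		return false;

	/* Both ends of the slot must fall on map_size boundaries. */
	return !(uaddr_start & (map_size - 1)) &&
	       !((uaddr_start + size) & (map_size - 1));
}
```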
We can do a comment cleanup as well in this patch:

s/<< PAGE_SIZE/<< PAGE_SHIFT/

Sure, I missed that. Will fix it in the next version.

Cheers
Suzuki
_______________________________________________
kvmarm mailing list
kvmarm@lists.cs.columbia.edu
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm
