Hi Marc,

On 7/26/21 4:35 PM, Marc Zyngier wrote:
> When mapping a THP, we are guaranteed that the page isn't reserved,
> and we can safely avoid the kvm_is_reserved_pfn() call.
>
> Replace kvm_get_pfn() with get_page(pfn_to_page()).
>
> Signed-off-by: Marc Zyngier <[email protected]>
> ---
>  arch/arm64/kvm/mmu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index ebb28dd4f2c9..b303aa143592 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -840,7 +840,7 @@ transparent_hugepage_adjust(struct kvm *kvm, struct kvm_memory_slot *memslot,
>               *ipap &= PMD_MASK;
>               kvm_release_pfn_clean(pfn);
>               pfn &= ~(PTRS_PER_PMD - 1);
> -             kvm_get_pfn(pfn);
> +             get_page(pfn_to_page(pfn));
>               *pfnp = pfn;
>  
>               return PMD_SIZE;

I am not very familiar with the mm subsystem, but I did my best to review this 
change.

kvm_get_pfn() calls get_page(pfn_to_page(pfn)) only if !PageReserved(pfn_to_page(pfn)). I looked at the documentation for the PG_reserved page flag, and for normal memory, the most probable situation where it could be set for a transparent hugepage seemed to be the zero page. Looking at mm/huge_memory.c, huge_zero_pfn is allocated via alloc_pages(__GFP_ZERO) (plus other flags), which doesn't call SetPageReserved().

I looked at how a huge page can be mapped from handle_mm_fault() and from khugepaged, and it also looks to me like both use alloc_pages() to allocate a new hugepage.

I also did a grep for SetPageReserved(): there are very few places where it is called, and none looked like they have anything to do with hugepages.

As far as I can tell, this change is correct, but I think someone who is familiar with mm would be better suited for reviewing this patch.

_______________________________________________
kvmarm mailing list
[email protected]
https://lists.cs.columbia.edu/mailman/listinfo/kvmarm