Christophe Leroy <christophe.le...@csgroup.eu> writes:

> Nicholas Piggin <npig...@gmail.com> wrote:
>
>> real_vmalloc_addr() does not currently work for huge vmalloc, which is
>> what the reverse map can be allocated with for radix host, hash guest.
>>
>> Add huge page awareness to the function.
>>
>> Fixes: 8abddd968a30 ("powerpc/64s/radix: Enable huge vmalloc mappings")
>> Signed-off-by: Nicholas Piggin <npig...@gmail.com>
>> ---
>>  arch/powerpc/kvm/book3s_hv_rm_mmu.c | 17 ++++++++++++-----
>>  1 file changed, 12 insertions(+), 5 deletions(-)
>>
>> diff --git a/arch/powerpc/kvm/book3s_hv_rm_mmu.c b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
>> index 7af7c70f1468..5f68cb5cc009 100644
>> --- a/arch/powerpc/kvm/book3s_hv_rm_mmu.c
>> +++ b/arch/powerpc/kvm/book3s_hv_rm_mmu.c
>> @@ -26,16 +26,23 @@
>>  static void *real_vmalloc_addr(void *x)
>>  {
>>      unsigned long addr = (unsigned long) x;
>> +    unsigned long mask;
>> +    int shift;
>>      pte_t *p;
>> +
>>      /*
>> -     * assume we don't have huge pages in vmalloc space...
>> -     * So don't worry about THP collapse/split. Called
>> -     * Only in realmode with MSR_EE = 0, hence won't need irq_save/restore.
>> +     * This is called only in realmode with MSR_EE = 0, hence won't need
>> +     * irq_save/restore around find_init_mm_pte.
>>       */
>> -    p = find_init_mm_pte(addr, NULL);
>> +    p = find_init_mm_pte(addr, &shift);
>>      if (!p || !pte_present(*p))
>>              return NULL;
>> -    addr = (pte_pfn(*p) << PAGE_SHIFT) | (addr & ~PAGE_MASK);
>> +    if (!shift)
>> +            shift = PAGE_SHIFT;
>> +
>> +    mask = (1UL << shift) - 1;
>> +    addr = (pte_pfn(*p) << PAGE_SHIFT) | (addr & mask);
>
> Looks strange: before we had ~PAGE_MASK, now we have a mask without the ~.

#define PAGE_MASK      (~((1 << PAGE_SHIFT) - 1))

i.e. ~PAGE_MASK is (1UL << PAGE_SHIFT) - 1, so for shift == PAGE_SHIFT the
new `addr & mask' selects exactly the same offset bits as the old
`addr & ~PAGE_MASK'; the patch just generalises that to larger shifts.
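
A minimal standalone sketch of the equivalence, if it helps (PAGE_SHIFT
hardcoded to 12 and the address made up purely for illustration, not taken
from the patch):

#include <assert.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_MASK  (~((1UL << PAGE_SHIFT) - 1))

int main(void)
{
	unsigned long addr = 0xc000000012345678UL;

	/* Base-page case: find_init_mm_pte() returned shift == 0,
	 * which the patch rounds up to PAGE_SHIFT. */
	int shift = PAGE_SHIFT;
	unsigned long mask = (1UL << shift) - 1;

	/* Old and new offset computations agree for base pages. */
	assert((addr & mask) == (addr & ~PAGE_MASK));

	/* For a 2MB huge page (shift == 21) the wider mask keeps the
	 * extra offset bits within the huge page, which ~PAGE_MASK
	 * would have thrown away. */
	shift = 21;
	mask = (1UL << shift) - 1;
	printf("offset within 2MB page: 0x%lx\n", addr & mask);

	return 0;
}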

-aneesh
