On 24 Apr 2026, at 8:47, David Hildenbrand (Arm) wrote:

> On 4/24/26 04:49, Zi Yan wrote:
>> Remove READ_ONLY_THP_FOR_FS; khugepaged for file-backed pmd-sized
>> hugepages is now enabled by the global transparent hugepage control.
>> khugepaged can still be enabled by the per-size controls for anon and
>> shmem when the global control is off.
>>
>> Add a shmem_hpage_pmd_enabled() stub for !CONFIG_SHMEM to remove the
>> IS_ENABLED(CONFIG_SHMEM) check in hugepage_enabled().
>>
>> Clean up hugepage_enabled() by moving anon code to anon_hpage_enabled().
>>
>> Signed-off-by: Zi Yan <[email protected]>
>> Reviewed-by: Baolin Wang <[email protected]>
>> ---
>>  include/linux/shmem_fs.h |  2 +-
>>  mm/khugepaged.c          | 26 ++++++++++++++++----------
>>  2 files changed, 17 insertions(+), 11 deletions(-)
>>
>> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
>> index 93a0ba872ebe..acb8dd961b45 100644
>> --- a/include/linux/shmem_fs.h
>> +++ b/include/linux/shmem_fs.h
>> @@ -127,7 +127,7 @@ int shmem_writeout(struct folio *folio, struct swap_iocb **plug,
>>  void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
>>  int shmem_unuse(unsigned int type);
>>
>> -#ifdef CONFIG_TRANSPARENT_HUGEPAGE
>> +#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SHMEM)
>>  unsigned long shmem_allowable_huge_orders(struct inode *inode,
>>                              struct vm_area_struct *vma, pgoff_t index,
>>                              loff_t write_end, bool shmem_huge_force);
>> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
>> index 726f8ace01af..cdd4b37e4a68 100644
>> --- a/mm/khugepaged.c
>> +++ b/mm/khugepaged.c
>> @@ -524,26 +524,32 @@ static inline int collapse_test_exit_or_disable(struct mm_struct *mm)
>>              mm_flags_test(MMF_DISABLE_THP_COMPLETELY, mm);
>>  }
>>
>> +static inline bool anon_hpage_enabled(void)
>> +{
>> +    if (READ_ONCE(huge_anon_orders_always))
>> +            return true;
>> +    if (READ_ONCE(huge_anon_orders_madvise))
>> +            return true;
>> +    if (READ_ONCE(huge_anon_orders_inherit) &&
>> +        hugepage_global_enabled())
>> +            return true;
>> +    return false;
>> +}
>
> Ah, that is based on Nicos work, right?

Yes, since mm-new has Nico’s patchset now.

>
>> +
>>  static bool hugepage_enabled(void)
>>  {
>>      /*
>>       * We cover the anon, shmem and the file-backed case here; file-backed
>> -     * hugepages, when configured in, are determined by the global control.
>> +     * hugepages are determined by the global control.
>>       * Anon hugepages are determined by its per-size mTHP control.
>>       * Shmem pmd-sized hugepages are also determined by its pmd-size control,
>>       * except when the global shmem_huge is set to SHMEM_HUGE_DENY.
>>       */
>> -    if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>> -        hugepage_global_enabled())
>> -            return true;
>> -    if (READ_ONCE(huge_anon_orders_always))
>> +    if (hugepage_global_enabled())
>>              return true;
>> -    if (READ_ONCE(huge_anon_orders_madvise))
>> -            return true;
>> -    if (READ_ONCE(huge_anon_orders_inherit) &&
>> -        hugepage_global_enabled())
>> +    if (anon_hpage_enabled())
>>              return true;
>> -    if (IS_ENABLED(CONFIG_SHMEM) && shmem_hpage_pmd_enabled())
>> +    if (shmem_hpage_pmd_enabled())
>>              return true;
>>      return false;
>>  }
>
> Acked-by: David Hildenbrand (Arm) <[email protected]>
>
> -- 
> Cheers,
>
> David


Best Regards,
Yan, Zi
