On 2025/5/15 11:22, Nico Pache wrote:
The following series provides khugepaged and madvise collapse with the capability to collapse anonymous memory regions to mTHPs.

To achieve this we generalize the khugepaged functions to no longer depend on PMD_ORDER. Then, during the PMD scan, we keep track of chunks of pages (defined by KHUGEPAGED_MTHP_MIN_ORDER) that are utilized. This information is tracked using a bitmap. After the PMD scan is done, we do binary recursion on the bitmap to find the optimal mTHP sizes for the PMD range. The restriction on max_ptes_none is removed during the scan to make sure we account for the whole PMD range. When no mTHP size is enabled, the legacy behavior of khugepaged is maintained. max_ptes_none is scaled by the attempted collapse order to determine how full a THP must be to be eligible. If an mTHP collapse is attempted but the range contains swapped-out or shared pages, we don't perform the collapse.

With the default max_ptes_none=511, the code should keep most of its original behavior. To exercise mTHP collapse we need to set max_ptes_none<=255. With max_ptes_none > HPAGE_PMD_NR/2 you will experience collapse "creep" and constantly promote mTHPs to the next available size. This is due to the fact that a collapse will introduce at least 2x the number of populated pages, so a future scan will satisfy that condition once again.

Patch 1: Refactor/rename hpage_collapse
Patch 2: Some refactoring to combine madvise_collapse and khugepaged
Patch 3-5: Generalize khugepaged functions for arbitrary orders
Patch 6-9: The mTHP patches
Patch 10-11: Tracing/stats
Patch 12: Documentation
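As a rough illustration of that scaling and of the "creep" condition, here is a minimal userspace sketch; the shift-based scaled_max_ptes_none() helper below is an assumption for illustration, not code taken from the series.

/*
 * Sketch only: how max_ptes_none might be scaled with the attempted
 * collapse order.  The shift-based helper is illustrative and is not
 * taken verbatim from the series.
 */
#include <stdio.h>

#define PMD_ORDER	9			/* 2M PMD on 4K base pages */
#define HPAGE_PMD_NR	(1u << PMD_ORDER)	/* 512 PTEs per PMD */

/* Empty (none) PTEs tolerated for a collapse of the given order. */
static unsigned int scaled_max_ptes_none(unsigned int max_ptes_none,
					 unsigned int order)
{
	return max_ptes_none >> (PMD_ORDER - order);
}

int main(void)
{
	unsigned int max_ptes_none = 511;	/* the default */
	unsigned int order;

	/* order 4 == 64K mTHP on 4K base pages, order 9 == PMD-sized THP */
	for (order = 4; order <= PMD_ORDER; order++)
		printf("order %u: allow up to %u none PTEs out of %u\n",
		       order, scaled_max_ptes_none(max_ptes_none, order),
		       1u << order);

	/*
	 * With max_ptes_none > HPAGE_PMD_NR / 2, a collapse at order N can
	 * more than double the number of populated PTEs, so a later scan
	 * sees the order N+1 threshold satisfied as well -- the "creep"
	 * described above.
	 */
	return 0;
}

For example, under this assumed shift-based scaling with the default max_ptes_none=511, an order-4 (64K) collapse is allowed with only 1 of 16 PTEs populated; once collapsed, the fully populated 64K range (16 of 32 PTEs) already satisfies the order-5 threshold of 31 none PTEs on the next scan.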
When I tested 64K mTHP collapse with PMD-sized THP disabled, I found that khugepaged could not scan and collapse 64K mTHP. I sent out two fix patches [1], and with these patches applied, 64K mTHP collapse works well. I hope my two patches can be folded into your next version of the series if you think there are no issues. Thanks.
[1] https://lore.kernel.org/all/ac9ed6d71b439611f9c94b3506a8ce975d4636e9.1748435162.git.baolin.w...@linux.alibaba.com/