On Fri, Nov 21, 2025 at 12:42:32AM +1100, Balbir Singh wrote:
>Code refactoring of __folio_split() via the helper
>__folio_freeze_and_split_unmapped() caused a regression with clang-20
>and CONFIG_SHMEM=n: the compiler was no longer able to optimize away
>the call to shmem_uncharge() due to the changes around nr_shmem_dropped.
>Fix this by adding a stub function for shmem_uncharge() when
>CONFIG_SHMEM is not defined.
>
>smatch also complained about the parameter end being used without
>initialization. This is a false positive, but keep the tool happy by
>passing in initialized parameters: end is now initialized to 0.
>smatch still complains that mapping may be NULL while nr_shmem_dropped
>is non-zero, but that is not the case either before or after these
>changes.
>
>Also add a detailed kernel-doc comment for folio_split_unmapped().
>
>Cc: Andrew Morton <[email protected]>
>Cc: David Hildenbrand <[email protected]>
>Cc: Zi Yan <[email protected]>
>Cc: Joshua Hahn <[email protected]>
>Cc: Rakie Kim <[email protected]>
>Cc: Byungchul Park <[email protected]>
>Cc: Gregory Price <[email protected]>
>Cc: Ying Huang <[email protected]>
>Cc: Alistair Popple <[email protected]>
>Cc: Oscar Salvador <[email protected]>
>Cc: Lorenzo Stoakes <[email protected]>
>Cc: Baolin Wang <[email protected]>
>Cc: "Liam R. Howlett" <[email protected]>
>Cc: Nico Pache <[email protected]>
>Cc: Ryan Roberts <[email protected]>
>Cc: Dev Jain <[email protected]>
>Cc: Barry Song <[email protected]>
>Cc: Lyude Paul <[email protected]>
>Cc: Danilo Krummrich <[email protected]>
>Cc: David Airlie <[email protected]>
>Cc: Simona Vetter <[email protected]>
>Cc: Ralph Campbell <[email protected]>
>Cc: Mika Penttilä <[email protected]>
>Cc: Matthew Brost <[email protected]>
>Cc: Francois Dugast <[email protected]>
>
>Suggested-by: David Hildenbrand <[email protected]>
>Signed-off-by: Balbir Singh <[email protected]>
>---
>This fixup should be squashed into the patch "mm/huge_memory.c:
>introduce folio_split_unmapped" in mm/mm-unstable
>
> include/linux/shmem_fs.h |  6 +++++-
> mm/huge_memory.c         | 30 +++++++++++++++++++++---------
> 2 files changed, 26 insertions(+), 10 deletions(-)
>
>diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
>index 5b368f9549d6..7a412dd6eb4f 100644
>--- a/include/linux/shmem_fs.h
>+++ b/include/linux/shmem_fs.h
>@@ -136,11 +136,16 @@ static inline bool shmem_hpage_pmd_enabled(void)
> 
> #ifdef CONFIG_SHMEM
> extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
>+extern void shmem_uncharge(struct inode *inode, long pages);
> #else
> static inline unsigned long shmem_swap_usage(struct vm_area_struct *vma)
> {
>       return 0;
> }
>+
>+static inline void shmem_uncharge(struct inode *inode, long pages)
>+{
>+}
> #endif
> extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
>                                               pgoff_t start, pgoff_t end);
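
A side note on the pattern for anyone following along: an empty static
inline stub in the header lets the compiler fold the call site away
entirely when CONFIG_SHMEM=n, even though nr_shmem_dropped stays live in
the caller. A simplified, standalone sketch of the idea (not the kernel
code; it just reuses the names from the patch):

	#include <stdio.h>

	struct inode;

	#ifdef CONFIG_SHMEM
	extern void shmem_uncharge(struct inode *inode, long pages);
	#else
	/* Empty inline stub: the optimizer can drop both the call and
	 * the surrounding "if (nr_shmem_dropped)" branch. */
	static inline void shmem_uncharge(struct inode *inode, long pages)
	{
	}
	#endif

	int main(void)
	{
		long nr_shmem_dropped = 0;

		/* ... split work would bump nr_shmem_dropped here ... */
		if (nr_shmem_dropped)
			shmem_uncharge(NULL, nr_shmem_dropped);

		printf("dropped %ld pages\n", nr_shmem_dropped);
		return 0;
	}
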
>@@ -194,7 +199,6 @@ static inline pgoff_t shmem_fallocend(struct inode *inode, pgoff_t eof)
> }
> 
> extern bool shmem_charge(struct inode *inode, long pages);
>-extern void shmem_uncharge(struct inode *inode, long pages);
> 
> #ifdef CONFIG_USERFAULTFD
> #ifdef CONFIG_SHMEM
>diff --git a/mm/huge_memory.c b/mm/huge_memory.c
>index 78a31a476ad3..18c12876f5e8 100644
>--- a/mm/huge_memory.c
>+++ b/mm/huge_memory.c
>@@ -3751,6 +3751,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
>       int ret = 0;
>       struct deferred_split *ds_queue;
> 
>+      VM_WARN_ON_ONCE(!mapping && end);
>       /* Prevent deferred_split_scan() touching ->_refcount */
>       ds_queue = folio_split_queue_lock(folio);
>       if (folio_ref_freeze(folio, 1 + extra_pins)) {
>@@ -3919,7 +3920,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>       int nr_shmem_dropped = 0;
>       int remap_flags = 0;
>       int extra_pins, ret;
>-      pgoff_t end;
>+      pgoff_t end = 0;
>       bool is_hzp;
> 
>       VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
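
For context on the end = 0 change: the shape smatch trips over is that
end is only assigned on the file-backed path but is passed down
unconditionally afterwards. A standalone sketch of just that shape
(compute_file_end() and split_helper() are invented stand-ins, not the
real kernel functions):

	#include <stdio.h>

	typedef unsigned long my_pgoff_t;

	/* Stand-in for the real end-offset computation, which only
	 * makes sense for file-backed folios. */
	static my_pgoff_t compute_file_end(void)
	{
		return 42;
	}

	static int split_helper(void *mapping, my_pgoff_t end)
	{
		/* The helper only consumes end when mapping != NULL. */
		if (mapping)
			printf("file-backed split, end=%lu\n", end);
		return 0;
	}

	int main(void)
	{
		void *mapping = NULL;	/* anon folio: no mapping */
		my_pgoff_t end = 0;	/* previously uninitialized */

		if (mapping)
			end = compute_file_end();

		/* end is passed unconditionally, which is why smatch
		 * reported a (false-positive) uninitialized use. */
		return split_helper(mapping, end);
	}
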
>@@ -4092,16 +4093,27 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
>       return ret;
> }
> 
>-/*
>- * This function is a helper for splitting folios that have already been unmapped.
>- * The use case is that the device or the CPU can refuse to migrate THP pages in
>- * the middle of migration, due to allocation issues on either side
>+/**
>+ * folio_split_unmapped() - split a large anon folio that is already unmapped
>+ * @folio: folio to split
>+ * @new_order: the order of folios after split
>+ *
>+ * This function is a helper for splitting folios that have already been
>+ * unmapped. The use case is that the device or the CPU can refuse to migrate
>+ * THP pages in the middle of migration, due to allocation issues on either
>+ * side.
>+ *
>+ * The anon_vma lock is not required to be held, but mmap_read_lock() or
>+ * mmap_write_lock() should be held. @folio is expected to be locked by the

I took a look at its caller:

  __migrate_device_pages()
    migrate_vma_split_unmapped_folio()
      folio_split_unmapped()

I don't see where we take the folio lock.

Would you mind giving me a hint about where we take the lock? It seems I missed it.

>+ * caller. Device-private and non-device-private folios are supported, along
>+ * with folios that are in the swapcache. @folio should also be unmapped and
>+ * isolated from the LRU (if applicable).
>  *
>- * The high level code is copied from __folio_split, since the pages are anonymous
>- * and are already isolated from the LRU, the code has been simplified to not
>- * burden __folio_split with unmapped sprinkled into the code.
>+ * Upon return, the folio is not remapped, split folios are not added to LRU,
>+ * free_folio_and_swap_cache() is not called, and new folios remain locked.
>  *
>- * None of the split folios are unlocked
>+ * Return: 0 on success, -EAGAIN if the folio cannot be split (e.g., due to
>+ *         insufficient reference count or extra pins).
>  */
> int folio_split_unmapped(struct folio *folio, unsigned int new_order)
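
For what it's worth, here is the calling convention as I read the new
comment (illustrative only; my_split_for_migration() is a made-up
caller, not code from the patch):

	/* Caller has already locked @folio, unmapped it and isolated it
	 * from the LRU; mmap_read_lock() or mmap_write_lock() is held. */
	static int my_split_for_migration(struct folio *folio)
	{
		int ret;

		VM_WARN_ON_ONCE(!folio_test_locked(folio));

		ret = folio_split_unmapped(folio, 0);	/* split to order-0 */
		if (ret)
			return ret;	/* -EAGAIN: extra pins, caller must retry or bail */

		/*
		 * On success the resulting folios are still locked, still
		 * unmapped and off the LRU; remapping, unlocking and LRU
		 * handling remain the caller's responsibility.
		 */
		return 0;
	}
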
> {
>-- 
>2.51.1
>

-- 
Wei Yang
Help you, Help me
