On 11/20/25 04:07, Balbir Singh wrote:
Refactoring __folio_split() via the helper
__folio_freeze_and_split_unmapped() caused a regression with clang-20
when CONFIG_SHMEM=n: due to the changes around nr_shmem_dropped, the
compiler could no longer optimize away the call to shmem_uncharge().
Fix this by checking shmem_mapping() prior to calling shmem_uncharge();
shmem_mapping() returns false when CONFIG_SHMEM=n.

smatch also complained about the parameter end being used without
initialization. This is a false positive, but keep the tool happy by
passing in initialized parameters: end is initialized to 0.

Add detailed documentation comments for folio_split_unmapped().

Cc: Andrew Morton <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Joshua Hahn <[email protected]>
Cc: Rakie Kim <[email protected]>
Cc: Byungchul Park <[email protected]>
Cc: Gregory Price <[email protected]>
Cc: Ying Huang <[email protected]>
Cc: Alistair Popple <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Lorenzo Stoakes <[email protected]>
Cc: Baolin Wang <[email protected]>
Cc: "Liam R. Howlett" <[email protected]>
Cc: Nico Pache <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: Dev Jain <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Lyude Paul <[email protected]>
Cc: Danilo Krummrich <[email protected]>
Cc: David Airlie <[email protected]>
Cc: Simona Vetter <[email protected]>
Cc: Ralph Campbell <[email protected]>
Cc: Mika Penttilä <[email protected]>
Cc: Matthew Brost <[email protected]>
Cc: Francois Dugast <[email protected]>

Signed-off-by: Balbir Singh <[email protected]>
---
  mm/huge_memory.c | 32 ++++++++++++++++++++++----------
  1 file changed, 22 insertions(+), 10 deletions(-)

diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 78a31a476ad3..c4267a0f74df 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3751,6 +3751,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
 	int ret = 0;
 	struct deferred_split *ds_queue;
 
+	VM_WARN_ON_ONCE(!mapping && end != 0);

You could drop the "!= 0"

        /* Prevent deferred_split_scan() touching ->_refcount */
        ds_queue = folio_split_queue_lock(folio);
        if (folio_ref_freeze(folio, 1 + extra_pins)) {
@@ -3919,7 +3920,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	int nr_shmem_dropped = 0;
 	int remap_flags = 0;
 	int extra_pins, ret;
-	pgoff_t end;
+	pgoff_t end = 0;
 	bool is_hzp;
 
 	VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
@@ -4049,7 +4050,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 
 	local_irq_enable();
 
-	if (nr_shmem_dropped)
+	if (mapping && shmem_mapping(mapping) && nr_shmem_dropped)
 		shmem_uncharge(mapping->host, nr_shmem_dropped);

That looks questionable. We shouldn't add a runtime check to handle a buildtime thing.

Likely what you want is instead

if (IS_ENABLED(CONFIG_SHMEM) && nr_shmem_dropped)
	shmem_uncharge(mapping->host, nr_shmem_dropped);

 	if (!ret && is_anon && !folio_is_device_private(folio))
@@ -4092,16 +4093,27 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
 	return ret;
 }
-/*
- * This function is a helper for splitting folios that have already been unmapped.
- * The use case is that the device or the CPU can refuse to migrate THP pages in
- * the middle of migration, due to allocation issues on either side
+/**
+ * folio_split_unmapped() - split a large anon folio that is already unmapped
+ * @folio: folio to split
+ * @new_order: the order of folios after split
+ *
+ * This function is a helper for splitting folios that have already been
+ * unmapped. The use case is that the device or the CPU can refuse to migrate
+ * THP pages in the middle of migration, due to allocation issues on either
+ * side.
+ *
+ * anon_vma_lock is not required to be held, mmap_read_lock() or
+ * mmap_write_lock() should be held. @folio is expected to be locked by the
+ * caller. device-private and non device-private folios are supported along
+ * with folios that are in the swapcache. @folio should also be unmapped and
+ * isolated from LRU (if applicable)
  *
- * The high level code is copied from __folio_split, since the pages are anonymous
- * and are already isolated from the LRU, the code has been simplified to not
- * burden __folio_split with unmapped sprinkled into the code.
+ * Upon return, the folio is not remapped, split folios are not added to LRU,
+ * free_folio_and_swap_cache() is not called, and new folios remain locked.
  *
- * None of the split folios are unlocked
+ * Return: 0 on success, -EAGAIN if the folio cannot be split (e.g., due to
+ *         insufficient reference count or extra pins).

Sounds much better to me, thanks.

--
Cheers

David
