On 11/20/25 10:25, Balbir Singh wrote:
On 11/20/25 20:09, David Hildenbrand (Red Hat) wrote:
On 11/20/25 04:07, Balbir Singh wrote:
Refactoring __folio_split() via the helper
__folio_freeze_and_split_unmapped() caused a regression with clang-20
and CONFIG_SHMEM=n: because of the changes around nr_shmem_dropped, the
compiler was no longer able to optimize away the call to
shmem_uncharge(). Fix this by checking shmem_mapping() prior to calling
shmem_uncharge(); shmem_mapping() returns false when CONFIG_SHMEM=n.

smatch also complained about the parameter end being used without
initialization. This is a false positive, but keep the tool happy by
passing in initialized values; end is initialized to 0.

Add detailed documentation comments for folio_split_unmapped().

Cc: Andrew Morton <[email protected]>
Cc: David Hildenbrand <[email protected]>
Cc: Zi Yan <[email protected]>
Cc: Joshua Hahn <[email protected]>
Cc: Rakie Kim <[email protected]>
Cc: Byungchul Park <[email protected]>
Cc: Gregory Price <[email protected]>
Cc: Ying Huang <[email protected]>
Cc: Alistair Popple <[email protected]>
Cc: Oscar Salvador <[email protected]>
Cc: Lorenzo Stoakes <[email protected]>
Cc: Baolin Wang <[email protected]>
Cc: "Liam R. Howlett" <[email protected]>
Cc: Nico Pache <[email protected]>
Cc: Ryan Roberts <[email protected]>
Cc: Dev Jain <[email protected]>
Cc: Barry Song <[email protected]>
Cc: Lyude Paul <[email protected]>
Cc: Danilo Krummrich <[email protected]>
Cc: David Airlie <[email protected]>
Cc: Simona Vetter <[email protected]>
Cc: Ralph Campbell <[email protected]>
Cc: Mika Penttilä <[email protected]>
Cc: Matthew Brost <[email protected]>
Cc: Francois Dugast <[email protected]>
Signed-off-by: Balbir Singh <[email protected]>
---
mm/huge_memory.c | 32 ++++++++++++++++++++++----------
1 file changed, 22 insertions(+), 10 deletions(-)
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 78a31a476ad3..c4267a0f74df 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -3751,6 +3751,7 @@ static int __folio_freeze_and_split_unmapped(struct folio *folio, unsigned int n
int ret = 0;
struct deferred_split *ds_queue;
+ VM_WARN_ON_ONCE(!mapping && end != 0);
You could drop the "!= 0"
Ack
VM_WARN_ON_ONCE(!mapping && end);
/* Prevent deferred_split_scan() touching ->_refcount */
ds_queue = folio_split_queue_lock(folio);
if (folio_ref_freeze(folio, 1 + extra_pins)) {
@@ -3919,7 +3920,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
int nr_shmem_dropped = 0;
int remap_flags = 0;
int extra_pins, ret;
- pgoff_t end;
+ pgoff_t end = 0;
bool is_hzp;
VM_WARN_ON_ONCE_FOLIO(!folio_test_locked(folio), folio);
@@ -4049,7 +4050,7 @@ static int __folio_split(struct folio *folio, unsigned int new_order,
local_irq_enable();
- if (nr_shmem_dropped)
+ if (mapping && shmem_mapping(mapping) && nr_shmem_dropped)
shmem_uncharge(mapping->host, nr_shmem_dropped);
That looks questionable. We shouldn't add a runtime check to handle
build-time things.

Likely what you want instead is

	if (IS_ENABLED(CONFIG_SHMEM) && nr_shmem_dropped)
		shmem_uncharge()
shmem_mapping() returns false for CONFIG_SHMEM=n, and it checks that the
mapping is indeed for shmem ops before uncharging. Happy to change it if
you like; your version is more readable.
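For comparison, a minimal sketch of the IS_ENABLED() variant at the call
site, filled in with the arguments from the patch above (untested; it
assumes the existing mapping and nr_shmem_dropped locals in __folio_split()):

	/*
	 * IS_ENABLED(CONFIG_SHMEM) is a compile-time constant, so with
	 * CONFIG_SHMEM=n the whole branch can be folded away and no
	 * reference to shmem_uncharge() needs to be emitted.
	 */
	if (IS_ENABLED(CONFIG_SHMEM) && nr_shmem_dropped)
		shmem_uncharge(mapping->host, nr_shmem_dropped);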
Good point, but the questionable thing is that it makes it look like
nr_shmem_dropped could be set for non-shmem mappings, when it's really just
a compiler thing. What about handling it through a proper stub, so we can
keep this calling code simple?
diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index 5b368f9549d67..e38cb01031200 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -136,11 +136,15 @@ static inline bool shmem_hpage_pmd_enabled(void)
#ifdef CONFIG_SHMEM
extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
+extern void shmem_uncharge(struct inode *inode, long pages);
#else
static inline unsigned long shmem_swap_usage(struct vm_area_struct *vma)
{
return 0;
}
+static inline void shmem_uncharge(struct inode *inode, long pages)
+{
+}
#endif
extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
pgoff_t start, pgoff_t end);
@@ -194,7 +198,6 @@ static inline pgoff_t shmem_fallocend(struct inode *inode, pgoff_t eof)
}
extern bool shmem_charge(struct inode *inode, long pages);
-extern void shmem_uncharge(struct inode *inode, long pages);
#ifdef CONFIG_USERFAULTFD
#ifdef CONFIG_SHMEM
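With such a stub in place, the call site in __folio_split() could stay in its
original simple form, along these lines (sketch only, assuming the stub above
is applied):

	/*
	 * With CONFIG_SHMEM=n, shmem_uncharge() is an empty static inline,
	 * so the call compiles away and no out-of-line symbol is referenced;
	 * no shmem_mapping() or IS_ENABLED() check is needed here.
	 */
	if (nr_shmem_dropped)
		shmem_uncharge(mapping->host, nr_shmem_dropped);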
--
Cheers
David