When MADV_COLLAPSE is called on file-backed mappings (e.g., executable
text sections), the pages may still be dirty from recent writes and
cause collapse to fail with -EINVAL. This is particularly problematic
for freshly copied executables on filesystems, where page cache folios
remain dirty until background writeback completes.

The current code in collapse_file() triggers async writeback via
filemap_flush() and expects khugepaged to revisit the page later.
However, MADV_COLLAPSE is a synchronous operation where userspace
expects immediate results.

Perform synchronous writeback in madvise_collapse() before attempting
collapse, so the operation no longer fails on the first attempt.

Reported-by: Branden Moore <[email protected]>
Closes: https://lore.kernel.org/all/[email protected]
Fixes: 34488399fa08 ("mm/madvise: add file and shmem support to MADV_COLLAPSE")
Suggested-by: David Hildenbrand <[email protected]>
Signed-off-by: Shivank Garg <[email protected]>
---
 mm/khugepaged.c | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)

diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index 97d1b2824386..066a332c76ad 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -22,6 +22,7 @@
 #include <linux/dax.h>
 #include <linux/ksm.h>
 #include <linux/pgalloc.h>
+#include <linux/backing-dev.h>
 
 #include <asm/tlb.h>
 #include "internal.h"
@@ -2784,6 +2785,31 @@ int madvise_collapse(struct vm_area_struct *vma, unsigned long start,
        hstart = (start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
        hend = end & HPAGE_PMD_MASK;
 
+       /*
+        * For file-backed VMAs, perform synchronous writeback to ensure
+        * dirty folios are flushed before attempting collapse. This avoids
+        * failing on the first attempt when freshly-written executable text
+        * is still dirty in the page cache.
+        */
+       if (!vma_is_anonymous(vma) && vma->vm_file) {
+               struct address_space *mapping = vma->vm_file->f_mapping;
+
+               if (mapping_can_writeback(mapping)) {
+                       pgoff_t pgoff_start = linear_page_index(vma, hstart);
+                       pgoff_t pgoff_end = linear_page_index(vma, hend);
+                       loff_t lstart = (loff_t)pgoff_start << PAGE_SHIFT;
+                       loff_t lend = ((loff_t)pgoff_end << PAGE_SHIFT) - 1;
+
+                       mmap_read_unlock(mm);
+                       mmap_locked = false;
+
+                       if (filemap_write_and_wait_range(mapping, lstart, lend)) {
+                               last_fail = SCAN_FAIL;
+                               goto out_maybelock;
+                       }
+               }
+       }
+
        for (addr = hstart; addr < hend; addr += HPAGE_PMD_SIZE) {
                int result = SCAN_FAIL;
 
-- 
2.43.0

