walk_pmd_range() splits a huge PMD when a page table walker with a
pte_entry or install_pte callback needs PTE-level granularity. If the
split fails due to a memory allocation failure in pte_alloc_one(),
walk_pte_range() would encounter a huge PMD instead of a PTE page
table.

Break out of the loop on split failure and return -ENOMEM to the
walker's caller. Callers that can reach this path (those with pte_entry
or install_pte set), such as mincore, hmm_range_fault and
queue_pages_range, already handle negative return values from
walk_page_range(). A similar approach is taken when __pte_alloc()
fails in walk_pmd_range().

Signed-off-by: Usama Arif <[email protected]>
---
 mm/pagewalk.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index a94c401ab2cfe..1ee9df7a4461d 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -147,9 +147,11 @@ static int walk_pmd_range(pud_t *pud, unsigned long addr, unsigned long end,
                                continue;
                }
 
-               if (walk->vma)
-                       split_huge_pmd(walk->vma, pmd, addr);
-               else if (pmd_leaf(*pmd) || !pmd_present(*pmd))
+               if (walk->vma) {
+                       err = split_huge_pmd(walk->vma, pmd, addr);
+                       if (err)
+                               break;
+               } else if (pmd_leaf(*pmd) || !pmd_present(*pmd))
                        continue; /* Nothing to do. */
 
                err = walk_pte_range(pmd, addr, next, walk);
-- 
2.47.3

