From: Yang Shi <[email protected]>
commit a7f40cfe3b7ada57af9b62fd28430eeb4a7cfcb7 upstream.
When MPOL_MF_STRICT was specified and an existing page was already on a
node that does not follow the policy, mbind() should return -EIO. But
commit 6f4576e3687b ("mempolicy: apply page table walker on
queue_pages_range()") broke the rule.
And commit c8633798497c ("mm: mempolicy: mbind and migrate_pages support
thp migration") didn't return the correct value for THP mbind() either.
If MPOL_MF_STRICT is set, ignore vma_migratable() to make sure the walk
reaches queue_pages_pte_range() or queue_pages_pmd(), which check whether
an existing page is already on a node that does not follow the policy.
And since a non-migratable vma may be encountered this way, return -EIO
as well if MPOL_MF_MOVE or MPOL_MF_MOVE_ALL was specified.
Tested with
https://github.com/metan-ucw/ltp/blob/master/testcases/kernel/syscalls/mbind/mbind02.c
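For illustration only (this is not the LTP case above): a minimal
userspace sketch of the expected -EIO behaviour, assuming a two-node
system, that the process runs on node 0, and libnuma's <numaif.h>
(build with -lnuma):

#include <errno.h>
#include <numaif.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	long page = sysconf(_SC_PAGESIZE);
	unsigned long nodemask = 1UL << 1;	/* node 1 only (assumption) */
	char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED)
		return 1;
	memset(p, 0, page);	/* fault the page in on the local node */

	/*
	 * MPOL_MF_STRICT without MPOL_MF_MOVE*: a page that already sits
	 * on a node outside the nodemask must make mbind() fail with EIO.
	 */
	if (mbind(p, page, MPOL_BIND, &nodemask, 8 * sizeof(nodemask),
		  MPOL_MF_STRICT) == -1 && errno == EIO)
		printf("got expected EIO\n");
	else
		printf("unexpected result: %s\n", strerror(errno));
	return 0;
}

Before this fix the mbind() call above could incorrectly succeed; with
it, the misplaced existing page is reported via -EIO as documented.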
[[email protected]: tweak code comment]
Link: http://lkml.kernel.org/r/[email protected]
Fixes: 6f4576e3687b ("mempolicy: apply page table walker on queue_pages_range()")
Signed-off-by: Yang Shi <[email protected]>
Signed-off-by: Oscar Salvador <[email protected]>
Reported-by: Cyril Hrubis <[email protected]>
Suggested-by: Kirill A. Shutemov <[email protected]>
Acked-by: Rafael Aquini <[email protected]>
Reviewed-by: Oscar Salvador <[email protected]>
Acked-by: David Rientjes <[email protected]>
Cc: Vlastimil Babka <[email protected]>
Cc: <[email protected]>
Signed-off-by: Andrew Morton <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
mm/mempolicy.c | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
--- a/mm/mempolicy.c
+++ b/mm/mempolicy.c
@@ -514,12 +514,16 @@ static int queue_pages_pte_range(pmd_t *
 		if (node_isset(nid, *qp->nmask) == !!(flags & MPOL_MF_INVERT))
 			continue;
-		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL))
+		if (flags & (MPOL_MF_MOVE | MPOL_MF_MOVE_ALL)) {
+			if (!vma_migratable(vma))
+				break;
 			migrate_page_add(page, qp->pagelist, flags);
+		} else
+			break;
 	}
 	pte_unmap_unlock(pte - 1, ptl);
 	cond_resched();
-	return 0;
+	return addr != end ? -EIO : 0;
 }
 
 static int queue_pages_hugetlb(pte_t *pte, unsigned long hmask,