[PATCH AUTOSEL for 4.9 146/190] mm: hwpoison: call shake_page() after try_to_unmap() for mlocked page

2018-03-07 Thread Sasha Levin
From: Naoya Horiguchi 

[ Upstream commit 286c469a988fbaf68e3a97ddf1e6c245c1446968 ]

Memory error handler calls try_to_unmap() for error pages in various
states.  If the error page is an mlocked page, error handling could fail
with a "still referenced by 1 users" message.  This is because the page is
put back into the lru cache and stays there after the following call chain.

  try_to_unmap_one
page_remove_rmap
  clear_page_mlock
putback_lru_page
  lru_cache_add

memory_failure() calls shake_page() to handle a similar issue, but the
current code doesn't cover this case because shake_page() is called only
before try_to_unmap().  So this patch adds a shake_page() call after
try_to_unmap() for mlocked pages.
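
In code terms, the fix amounts to recording the mlocked state before the
unmap and flushing the lru cache afterwards. A minimal sketch of the
resulting flow in hwpoison_user_mappings() (simplified from the diff below,
not a drop-in replacement; it assumes the 4.9-era SWAP_SUCCESS return
convention of try_to_unmap() and the shake_page(page, access) signature):

	bool mlocked = PageMlocked(hpage);	/* record before try_to_unmap() clears it */

	ret = try_to_unmap(hpage, ttu);		/* clear_page_mlock() may put the page back in the lru cache */
	if (ret != SWAP_SUCCESS)
		pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n",
		       pfn, page_mapcount(hpage));

	if (mlocked)
		shake_page(hpage, 0);		/* drain per-cpu lru caches so the page is no longer held there */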

Fixes: 23a003bfd23ea9ea0b7756b920e51f64b284b468 ("mm/madvise: pass return code of memory_failure() to userspace")
Link: http://lkml.kernel.org/r/20170417055948.GM31394@yexl-desktop
Link: http://lkml.kernel.org/r/1493197841-23986-3-git-send-email-n-horigu...@ah.jp.nec.com
Signed-off-by: Naoya Horiguchi 
Reported-by: kernel test robot 
Cc: Xiaolong Ye 
Cc: Chen Gong 
Signed-off-by: Andrew Morton 
Signed-off-by: Linus Torvalds 
Signed-off-by: Sasha Levin 
---
 mm/memory-failure.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 5aa71a82ca73..851efb004857 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -921,6 +921,7 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
 	int ret;
 	int kill = 1, forcekill;
 	struct page *hpage = *hpagep;
+	bool mlocked = PageMlocked(hpage);
 
 	/*
 	 * Here we are interested only in user-mapped pages, so skip any
@@ -984,6 +985,13 @@ static int hwpoison_user_mappings(struct page *p, unsigned long pfn,
 		pr_err("Memory failure: %#lx: failed to unmap page (mapcount=%d)\n",
 		       pfn, page_mapcount(hpage));
 
+	/*
+	 * try_to_unmap() might put mlocked page in lru cache, so call
+	 * shake_page() again to ensure that it's flushed.
+	 */
+	if (mlocked)
+		shake_page(hpage, 0);
+
 	/*
 	 * Now that the dirty bit has been propagated to the
 	 * struct page and all unmaps done we can decide if
-- 
2.14.1

