Re: [PATCH v2] mm,hwpoison: return -EBUSY when page already poisoned

2021-03-12 Thread Jue Wang
I believe the mutex-type patch has its own value in protecting 
memory_failure() from other inherent races, e.g., races around 
split_huge_page() where concurrent MCEs happen on different 4k pages 
under the same THP.[1] This can realistically happen given the physical
locality-clustering effect of memory errors.

Thanks,
-Jue

[1] The split fails due to a page reference taken by another concurrent 
call into memory_failure() on the same THP.
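
To make that concrete, a minimal sketch of the mutex-style serialization
being discussed (the lock name mf_mutex and the split into a hypothetical
unlocked __memory_failure() are assumptions for illustration, not the
posted patch):

/*
 * Sketch only: serialize all of memory_failure() behind one mutex so
 * that concurrent MCEs on 4k pages of the same THP see the winner's
 * split_huge_page() result instead of racing against its page refs.
 */
static DEFINE_MUTEX(mf_mutex);

int memory_failure(unsigned long pfn, int flags)
{
	int res;

	mutex_lock(&mf_mutex);
	res = __memory_failure(pfn, flags);	/* hypothetical unlocked body */
	mutex_unlock(&mf_mutex);

	return res;
}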



Re: [PATCH v2] mm,hwpoison: return -EBUSY when page already poisoned

2021-03-10 Thread 堀口 直也
On Tue, Mar 09, 2021 at 12:01:40PM -0800, Luck, Tony wrote:
> On Tue, Mar 09, 2021 at 08:28:24AM +, HORIGUCHI NAOYA(堀口 直也) wrote:
> > On Tue, Mar 09, 2021 at 02:35:34PM +0800, Aili Yao wrote:
> > > When the page is already poisoned, another memory_failure() call on
> > > the same page now returns 0, meaning OK. For nested MCE handling,
> > > this behavior may lead to an MCE loop. Example:
> > > 
> > > 1. When LMCE is enabled, two processes A and B run on different
> > > cores X and Y and access the same page. The page becomes corrupted
> > > when process A accesses it, an MCE is raised to core X, and error
> > > processing gets underway.
> > > 
> > > 2. Then B accesses the page and triggers another MCE on core Y. It
> > > also does error processing, sees TestSetPageHWPoison() return true,
> > > and 0 is returned.
> > > 
> > > 3. kill_me_maybe() checks the return value:
> > > 
> > > static void kill_me_maybe(struct callback_head *cb)
> > > {
> > > 	...
> > > 	if (!memory_failure(p->mce_addr >> PAGE_SHIFT, flags) &&
> > > 	    !(p->mce_kflags & MCE_IN_KERNEL_COPYIN)) {
> > > 		set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
> > > 		sync_core();
> > > 		return;
> > > 	}
> > > 	...
> > > }
> > > 
> > > 4. Error processing for B ends, and possibly nothing happens if
> > > kill-early is not set. Process B re-executes the instruction, takes
> > > the MCE again, and the loop happens. The set_mce_nospec() here is
> > > also not proper; refer to commit fd0e786d9d09 ("x86/mm, mm/hwpoison:
> > > Don't unconditionally unmap kernel 1:1 pages").
> > > 
> > > Other callers that care about the return value of memory_failure()
> > > should check why they want to process a memory error that has
> > > already been processed. This behavior seems reasonable.
> > 
> > Other reviewers shared ideas about the return value, but actually
> > I'm not sure which one is best (EBUSY is not that bad).
> > What we need to fix the reported issue is to return a non-zero value
> > for the "already poisoned" case (the value itself is not so important).
> > 
> > Other callers of memory_failure() (mostly test programs) could see
> > the change of return value, but they can already see EBUSY now, and
> > anyway they should check dmesg for more detail about why it failed,
> > so the impact of the change is not so big.
> > 
> > > 
> > > Signed-off-by: Aili Yao 
> > 
> > Reviewed-by: Naoya Horiguchi 
> 
> I think that both this and my "add a mutex" patch are
> too simplistic for this complex problem :-(
> 
> When multiple CPUs race to call memory_failure() for the same
> page, we need the following results:
> 
> 1) The poisoned page should be marked not-present in all tasks.
>   I think the mutex patch achieves this as long as
>   memory_failure() doesn't hit an error[1].

My assumption is that reserved kernel pages are not supposed to be mapped
into any process, so once memory_failure() judges a page as such, we never
convert any page table entry to a hwpoison entry. Is that correct?  So my
question is why some user-mapped page was judged to be a "reserved kernel
page". Does futex allow such a situation?

I personally tried some test cases crossing futex and hwpoison, but I
couldn't reproduce the "reserved kernel page" case.  If possible, could you
provide me with a little more detail about your test case?

> 
> 2) All tasks that were executing an instruction that was accessing
>    the poison location should see a SIGBUS with the virtual address and
>    a BUS_MCEERR_AR signature in siginfo.
>   Neither solution achieves this. The -EBUSY return ensures
>   that there is a SIGBUS for the tasks that get the -EBUSY
>   return, but no siginfo details.

Yes, that's not yet perfect, but avoiding the MCE loop is progress.

>   Just the mutex patch *might* deliver a BUS_MCEERR_AO signature
>   to the race-losing tasks, but only if they have PF_MCE_EARLY
>   set (so says the comment in kill_proc() ... but I don't
>   see the code checking for that bit).

Commit 30c9cf49270 might explain this: task_early_kill() gets to call
find_early_kill_thread() (which checks PF_MCE_EARLY) in this case.

> 
> #2 seems hard to achieve ... there are inherent races that mean the
> AO SIGBUS could have been queued to the task before it even hits
> the poison.

So I feel that we might want some change to memory_failure() to send
SIGBUS(BUS_MCEERR_AR) to the "race-losing tasks" within the new mutex.
I agree that how we find the error address is also a problem.
For now, I still have no better idea than a page table walk.
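
For illustration, a rough sketch of that page-table-walk direction (the
helper names and overall shape are guesses at one possible approach, not
code from any posted patch; THP and hugetlb mappings are ignored for
brevity):

struct hwpoison_walk {
	unsigned long pfn;	/* poisoned pfn we are looking for */
	unsigned long addr;	/* user virtual address, once found */
};

static int hwpoison_pte_entry(pte_t *pte, unsigned long addr,
			      unsigned long next, struct mm_walk *walk)
{
	struct hwpoison_walk *hw = walk->private;

	if (pte_present(*pte) && pte_pfn(*pte) == hw->pfn) {
		hw->addr = addr;
		return 1;		/* non-zero return stops the walk */
	}
	return 0;
}

static const struct mm_walk_ops hwpoison_walk_ops = {
	.pte_entry = hwpoison_pte_entry,
};

/* Called on the race-losing CPU, i.e. "current" is the task that took
 * the MCE; force_sig_mceerr() also targets current. */
static int kill_accessing_process(unsigned long pfn)
{
	struct hwpoison_walk hw = { .pfn = pfn };

	mmap_read_lock(current->mm);
	walk_page_range(current->mm, 0, TASK_SIZE, &hwpoison_walk_ops, &hw);
	mmap_read_unlock(current->mm);

	if (!hw.addr)
		return -EFAULT;

	return force_sig_mceerr(BUS_MCEERR_AR, (void __user *)hw.addr,
				PAGE_SHIFT);
}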

> 
> Maybe we should include a non-action:
> 
> 3) A task should only see one SIGBUS per poison?
>   Not sure if this is achievable either ... what if the task
>   has the same page mapped multiple times?

My thought is that hwpoison-aware applications could have a dedicated thread
for SIGBUS handling, 
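
A userspace sketch of that dedicated-thread pattern (an assumption about
how such an application could be structured, not code from this thread;
PR_MCE_KILL_EARLY is the per-thread flag that find_early_kill_thread()
keys off):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>

static void sigbus_handler(int sig, siginfo_t *si, void *ctx)
{
	/* fprintf() is not async-signal-safe; kept only for brevity. */
	if (si->si_code == BUS_MCEERR_AO)
		fprintf(stderr, "early AO poison at %p, start recovery\n",
			si->si_addr);
	else if (si->si_code == BUS_MCEERR_AR)
		fprintf(stderr, "AR poison at %p, access consumed it\n",
			si->si_addr);
}

static void *sigbus_thread(void *arg)
{
	struct sigaction sa = {
		.sa_sigaction	= sigbus_handler,
		.sa_flags	= SA_SIGINFO,
	};

	sigaction(SIGBUS, &sa, NULL);
	/* Mark only this thread early-kill so AO signals are routed here. */
	prctl(PR_MCE_KILL, PR_MCE_KILL_SET, PR_MCE_KILL_EARLY, 0, 0);
	for (;;)
		pause();
	return NULL;
}

int main(void)
{
	pthread_t t;

	pthread_create(&t, NULL, sigbus_thread, NULL);
	pthread_join(t, NULL);	/* real work would run in other threads */
	return 0;
}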

Re: [PATCH v2] mm,hwpoison: return -EBUSY when page already poisoned

2021-03-10 Thread Aili Yao
On Tue, 9 Mar 2021 08:28:24 +
HORIGUCHI NAOYA(堀口 直也)  wrote:

> On Tue, Mar 09, 2021 at 02:35:34PM +0800, Aili Yao wrote:
> > When the page is already poisoned, another memory_failure() call on
> > the same page now returns 0, meaning OK. For nested MCE handling,
> > this behavior may lead to an MCE loop. Example:
> > 
> > 1. When LMCE is enabled, two processes A and B run on different
> > cores X and Y and access the same page. The page becomes corrupted
> > when process A accesses it, an MCE is raised to core X, and error
> > processing gets underway.
> > 
> > 2. Then B accesses the page and triggers another MCE on core Y. It
> > also does error processing, sees TestSetPageHWPoison() return true,
> > and 0 is returned.
> > 
> > 3. kill_me_maybe() checks the return value:
> > 
> > static void kill_me_maybe(struct callback_head *cb)
> > {
> > 	...
> > 	if (!memory_failure(p->mce_addr >> PAGE_SHIFT, flags) &&
> > 	    !(p->mce_kflags & MCE_IN_KERNEL_COPYIN)) {
> > 		set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
> > 		sync_core();
> > 		return;
> > 	}
> > 	...
> > }
> > 
> > 4. Error processing for B ends, and possibly nothing happens if
> > kill-early is not set. Process B re-executes the instruction, takes
> > the MCE again, and the loop happens. The set_mce_nospec() here is
> > also not proper; refer to commit fd0e786d9d09 ("x86/mm, mm/hwpoison:
> > Don't unconditionally unmap kernel 1:1 pages").
> > 
> > Other callers that care about the return value of memory_failure()
> > should check why they want to process a memory error that has
> > already been processed. This behavior seems reasonable.
> 
> Other reviewers shared ideas about the return value, but actually
> I'm not sure which one is best (EBUSY is not that bad).
> What we need to fix the reported issue is to return a non-zero value
> for the "already poisoned" case (the value itself is not so important).
> 
> Other callers of memory_failure() (mostly test programs) could see
> the change of return value, but they can already see EBUSY now, and
> anyway they should check dmesg for more detail about why it failed,
> so the impact of the change is not so big.
> 
> > 
> > Signed-off-by: Aili Yao   
> 
> Reviewed-by: Naoya Horiguchi 
> 
> Thanks,
> Naoya Horiguchi

Thanks!

And I found my mail was lost in the mailing list!

-- 
Thanks!
Aili Yao


Re: [PATCH v2] mm,hwpoison: return -EBUSY when page already poisoned

2021-03-09 Thread Luck, Tony
On Tue, Mar 09, 2021 at 08:28:24AM +, HORIGUCHI NAOYA(堀口 直也) wrote:
> On Tue, Mar 09, 2021 at 02:35:34PM +0800, Aili Yao wrote:
> > When the page is already poisoned, another memory_failure() call on
> > the same page now returns 0, meaning OK. For nested MCE handling,
> > this behavior may lead to an MCE loop. Example:
> > 
> > 1. When LMCE is enabled, two processes A and B run on different
> > cores X and Y and access the same page. The page becomes corrupted
> > when process A accesses it, an MCE is raised to core X, and error
> > processing gets underway.
> > 
> > 2. Then B accesses the page and triggers another MCE on core Y. It
> > also does error processing, sees TestSetPageHWPoison() return true,
> > and 0 is returned.
> > 
> > 3. kill_me_maybe() checks the return value:
> > 
> > static void kill_me_maybe(struct callback_head *cb)
> > {
> > 	...
> > 	if (!memory_failure(p->mce_addr >> PAGE_SHIFT, flags) &&
> > 	    !(p->mce_kflags & MCE_IN_KERNEL_COPYIN)) {
> > 		set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
> > 		sync_core();
> > 		return;
> > 	}
> > 	...
> > }
> > 
> > 4. Error processing for B ends, and possibly nothing happens if
> > kill-early is not set. Process B re-executes the instruction, takes
> > the MCE again, and the loop happens. The set_mce_nospec() here is
> > also not proper; refer to commit fd0e786d9d09 ("x86/mm, mm/hwpoison:
> > Don't unconditionally unmap kernel 1:1 pages").
> > 
> > Other callers that care about the return value of memory_failure()
> > should check why they want to process a memory error that has
> > already been processed. This behavior seems reasonable.
> 
> Other reviewers shared ideas about the return value, but actually
> I'm not sure which one is best (EBUSY is not that bad).
> What we need to fix the reported issue is to return a non-zero value
> for the "already poisoned" case (the value itself is not so important).
> 
> Other callers of memory_failure() (mostly test programs) could see
> the change of return value, but they can already see EBUSY now, and
> anyway they should check dmesg for more detail about why it failed,
> so the impact of the change is not so big.
> 
> > 
> > Signed-off-by: Aili Yao 
> 
> Reviewed-by: Naoya Horiguchi 

I think that both this and my "add a mutex" patch are
too simplistic for this complex problem :-(

When multiple CPUs race to call memory_failure() for the same
page, we need the following results:

1) The poisoned page should be marked not-present in all tasks.
   I think the mutex patch achieves this as long as
   memory_failure() doesn't hit an error[1].

2) All tasks that were executing an instruction that was accessing
   the poison location should see a SIGBUS with the virtual address and
   a BUS_MCEERR_AR signature in siginfo.
   Neither solution achieves this. The -EBUSY return ensures
   that there is a SIGBUS for the tasks that get the -EBUSY
   return, but no siginfo details.
   Just the mutex patch *might* deliver a BUS_MCEERR_AO signature
   to the race-losing tasks, but only if they have PF_MCE_EARLY
   set (so says the comment in kill_proc() ... but I don't
   see the code checking for that bit).

#2 seems hard to achieve ... there are inherent races that mean the
AO SIGBUS could have been queued to the task before it even hits
the poison.

Maybe we should include a non-action:

3) A task should only see one SIGBUS per poison?
   Not sure if this is achievable either ... what if the task
   has the same page mapped multiple times?
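
(Such a double mapping is trivial for a task to set up; a userspace
sketch, purely for illustration:)

#define _GNU_SOURCE
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	int fd = memfd_create("dup", 0);

	ftruncate(fd, 4096);
	/* Two distinct virtual addresses backed by one physical page; the
	 * rmap walk in hwpoison handling would find both mappings. */
	char *a = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
	char *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

	a[0] = 1;		/* fault the page in */
	return b[0];		/* read the same byte via the second mapping */
}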

-Tony

[1] still looking at why my futex injection test ends with a "reserved
kernel page still referenced by 1 users"


Re: [PATCH v2] mm,hwpoison: return -EBUSY when page already poisoned

2021-03-09 Thread 堀口 直也
On Tue, Mar 09, 2021 at 02:35:34PM +0800, Aili Yao wrote:
> When the page is already poisoned, another memory_failure() call on
> the same page now returns 0, meaning OK. For nested MCE handling,
> this behavior may lead to an MCE loop. Example:
> 
> 1. When LMCE is enabled, two processes A and B run on different
> cores X and Y and access the same page. The page becomes corrupted
> when process A accesses it, an MCE is raised to core X, and error
> processing gets underway.
> 
> 2. Then B accesses the page and triggers another MCE on core Y. It
> also does error processing, sees TestSetPageHWPoison() return true,
> and 0 is returned.
> 
> 3. kill_me_maybe() checks the return value:
> 
> static void kill_me_maybe(struct callback_head *cb)
> {
> 	...
> 	if (!memory_failure(p->mce_addr >> PAGE_SHIFT, flags) &&
> 	    !(p->mce_kflags & MCE_IN_KERNEL_COPYIN)) {
> 		set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
> 		sync_core();
> 		return;
> 	}
> 	...
> }
> 
> 4. Error processing for B ends, and possibly nothing happens if
> kill-early is not set. Process B re-executes the instruction, takes
> the MCE again, and the loop happens. The set_mce_nospec() here is
> also not proper; refer to commit fd0e786d9d09 ("x86/mm, mm/hwpoison:
> Don't unconditionally unmap kernel 1:1 pages").
> 
> Other callers that care about the return value of memory_failure()
> should check why they want to process a memory error that has
> already been processed. This behavior seems reasonable.

Other reviewers shared ideas about the return value, but actually
I'm not sure which one is best (EBUSY is not that bad).
What we need to fix the reported issue is to return a non-zero value
for the "already poisoned" case (the value itself is not so important).

Other callers of memory_failure() (mostly test programs) could see
the change of return value, but they can already see EBUSY now, and
anyway they should check dmesg for more detail about why it failed,
so the impact of the change is not so big.

> 
> Signed-off-by: Aili Yao 

Reviewed-by: Naoya Horiguchi 

Thanks,
Naoya Horiguchi

[PATCH v2] mm,hwpoison: return -EBUSY when page already poisoned

2021-03-08 Thread Aili Yao
When the page is already poisoned, another memory_failure() call on
the same page now returns 0, meaning OK. For nested MCE handling,
this behavior may lead to an MCE loop. Example:

1. When LMCE is enabled, two processes A and B run on different
cores X and Y and access the same page. The page becomes corrupted
when process A accesses it, an MCE is raised to core X, and error
processing gets underway.

2. Then B accesses the page and triggers another MCE on core Y. It
also does error processing, sees TestSetPageHWPoison() return true,
and 0 is returned.

3. kill_me_maybe() checks the return value:

static void kill_me_maybe(struct callback_head *cb)
{
	...
	if (!memory_failure(p->mce_addr >> PAGE_SHIFT, flags) &&
	    !(p->mce_kflags & MCE_IN_KERNEL_COPYIN)) {
		set_mce_nospec(p->mce_addr >> PAGE_SHIFT, p->mce_whole_page);
		sync_core();
		return;
	}
	...
}

4. Error processing for B ends, and possibly nothing happens if
kill-early is not set. Process B re-executes the instruction, takes
the MCE again, and the loop happens. The set_mce_nospec() here is
also not proper; refer to commit fd0e786d9d09 ("x86/mm, mm/hwpoison:
Don't unconditionally unmap kernel 1:1 pages").

Other callers that care about the return value of memory_failure()
should check why they want to process a memory error that has
already been processed. This behavior seems reasonable.

Signed-off-by: Aili Yao 
---
 mm/memory-failure.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/mm/memory-failure.c b/mm/memory-failure.c
index 24210c9bd843..b6bc77460ee1 100644
--- a/mm/memory-failure.c
+++ b/mm/memory-failure.c
@@ -1228,7 +1228,7 @@ static int memory_failure_hugetlb(unsigned long pfn, int flags)
 	if (TestSetPageHWPoison(head)) {
 		pr_err("Memory failure: %#lx: already hardware poisoned\n",
 		       pfn);
-		return 0;
+		return -EBUSY;
 	}
 
 	num_poisoned_pages_inc();
@@ -1430,7 +1430,7 @@ int memory_failure(unsigned long pfn, int flags)
 	if (TestSetPageHWPoison(p)) {
 		pr_err("Memory failure: %#lx: already hardware poisoned\n",
 		       pfn);
-		return 0;
+		return -EBUSY;
 	}
 
 	orig_head = hpage = compound_head(p);
-- 
2.25.1
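
For reference on the test-program impact mentioned above, a minimal
userspace sketch (an illustration, not from the thread) using the
hwpoison debugfs injector (CONFIG_HWPOISON_INJECT), which hands the
written pfn straight to memory_failure(); the pfn here is a placeholder
that a real test would derive from /proc/self/pagemap:

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Write a pfn to the injector; a failed write()'s errno carries the
 * memory_failure() result back to userspace. */
static int inject(const char *pfn)
{
	int fd = open("/sys/kernel/debug/hwpoison/corrupt-pfn", O_WRONLY);
	ssize_t n;

	if (fd < 0)
		return -errno;
	n = write(fd, pfn, strlen(pfn));
	close(fd);
	return n < 0 ? -errno : 0;
}

int main(void)
{
	const char *pfn = "0x12345";	/* placeholder pfn */

	printf("first injection:  %d\n", inject(pfn));	/* 0 on success */
	printf("second injection: %d\n", inject(pfn));	/* -EBUSY with this patch */
	return 0;
}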