Re: [perf/x86] 75925e1ad7: BUG: unable to handle kernel paging request at 000045b8

2016-01-22 Thread Andi Kleen
On Fri, Jan 22, 2016 at 12:33:24PM +0800, kernel test robot wrote:
> Greetings,
> 
> 0day kernel testing robot got the below dmesg and the first bad commit is
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master

Thanks. I managed to break 32-bit kernels. The appended patch should
fix it.



x86, perf: Fix perf user stack trace walking

Fixes 75925e1ad7 ("perf/x86: Optimize stack walk user accesses")

Replace the hard-coded 64-bit frame pointer sizes with sizeof()
expressions, which follow the size of unsigned long on the target
kernel.

This avoids a stack smash on 32-bit kernels, which was dutifully
reported by the 0day kbuild robot.

Signed-off-by: Andi Kleen 

diff --git a/arch/x86/kernel/cpu/perf_event.c b/arch/x86/kernel/cpu/perf_event.c
index 1b443db..ea4eb5c 100644
--- a/arch/x86/kernel/cpu/perf_event.c
+++ b/arch/x86/kernel/cpu/perf_event.c
@@ -2328,13 +2328,16 @@ perf_callchain_user(struct perf_callchain_entry *entry, struct pt_regs *regs)
 		frame.next_frame = NULL;
 		frame.return_address = 0;
 
-		if (!access_ok(VERIFY_READ, fp, 16))
+		if (!access_ok(VERIFY_READ, fp, sizeof(frame)))
 			break;
 
-		bytes = __copy_from_user_nmi(&frame.next_frame, fp, 8);
+		bytes = __copy_from_user_nmi(&frame.next_frame, fp,
+					     sizeof(frame.next_frame));
 		if (bytes != 0)
 			break;
-		bytes = __copy_from_user_nmi(&frame.return_address, fp+8, 8);
+		bytes = __copy_from_user_nmi(&frame.return_address,
+					     fp + sizeof(frame.next_frame),
+					     sizeof(frame.return_address));
 		if (bytes != 0)
 			break;
 

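For illustration, a minimal standalone sketch of why the hard-coded
sizes only matched the 64-bit layout. The struct below mirrors the
kernel's struct stack_frame; the program itself is an illustration,
not kernel code:

	#include <stdio.h>

	/* Mirrors the kernel's struct stack_frame: a saved frame
	 * pointer followed by a return address, both pointer-sized. */
	struct stack_frame {
		struct stack_frame *next_frame;
		unsigned long return_address;
	};

	int main(void)
	{
		struct stack_frame frame;

		/* On 64-bit, 8 + 8 = 16 bytes, so the hard-coded 8 and
		 * 16 happened to match.  On 32-bit the whole frame is
		 * only 8 bytes, so an 8-byte copy into the 4-byte
		 * return_address field at the end of the struct writes
		 * past the end of frame, smashing the stack. */
		printf("sizeof(frame)            = %zu\n", sizeof(frame));
		printf("sizeof(frame.next_frame) = %zu\n",
		       sizeof(frame.next_frame));
		return 0;
	}

On a 64-bit build this prints 16 and 8; on a 32-bit build, 8 and 4.
Hence the old __copy_from_user_nmi(&frame.return_address, fp+8, 8)
wrote 8 bytes into a 4-byte field at the end of the on-stack frame,
which is the smash the robot caught.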

Re: [perf/x86] 75925e1ad7: BUG: unable to handle kernel paging request at 000045b8

2016-01-21 Thread Peter Zijlstra
On Fri, Jan 22, 2016 at 12:33:24PM +0800, kernel test robot wrote:
> Greetings,
> 
> 0day kernel testing robot got the below dmesg and the first bad commit is
> 
> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master
> 
> commit 75925e1ad7f5a4e867bd14ff8e7f114ea1596434
> Author: Andi Kleen 
> AuthorDate: Thu Oct 22 15:07:21 2015 -0700
> Commit: Ingo Molnar 
> CommitDate: Mon Nov 23 09:58:25 2015 +0100
> 
> perf/x86: Optimize stack walk user accesses
> 
> Change the perf user stack walking to use the new
> __copy_from_user_nmi(), and split each access into word-sized
> transfers. This allows the complete access to be inlined and
> optimized into a single load.

Andi, please have a look at this. Also note that x86_64
__copy_from_user_nocheck() actually supports .size=16.
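
That is, since the copy size is a compile-time constant, on x86_64 the
two per-field transfers in Andi's patch could in principle be collapsed
into a single copy of the whole 16-byte frame. A sketch of that
alternative, reusing the frame/fp variables from the patch above
(untested illustration, not a proposed change):

	if (!access_ok(VERIFY_READ, fp, sizeof(frame)))
		break;

	/* Fetch next_frame and return_address in one transfer; on
	 * x86_64 sizeof(frame) is 16, the size Peter notes
	 * __copy_from_user_nocheck() supports. */
	bytes = __copy_from_user_nmi(&frame, fp, sizeof(frame));
	if (bytes != 0)
		break;

Whether this keeps the single-load inlining the original commit was
after depends on how the helper expands for a 16-byte constant size.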