Re: [lkp] [mm, kasan] 7392becb25: BUG: KASAN: slab-out-of-bounds in bucket_table_alloc+0x79/0x1a0 at addr ffff88003e400000

2016-07-13 Thread Alexander Potapenko
Hello there,

I've built my kernel with the supplied config, but haven't managed to
reproduce the failure.
The test prints the following log:

[2.554919] Testing concurrent rhashtable access from 10 threads
[3.295575]   thread[4]: rhashtable_insert_fast failed
[3.296065]   thread[9]: rhashtable_insert_fast failed
[3.296491]   thread[0]: rhashtable_insert_fast failed
[3.296948] Test failed: thread 0 returned: -12
[3.297375]   thread[5]: rhashtable_insert_fast failed
[7.843544] Test failed: thread 4 returned: -12
[7.844341] Test failed: thread 5 returned: -12
[7.859334] Test failed: thread 9 returned: -12
[7.859772] Started 10 threads, 4 failed

Soon after that the kernel panics for an unrelated reason:

[   75.812970] Kernel panic - not syncing: No working init found.  Try
passing init= option to kernel. See Linux Documentation/init.txt for
guidance.
[   75.814048] CPU: 0 PID: 1 Comm: swapper Not tainted
4.7.0-rc7-00020-g4543d2b #1091
[   75.814749] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
BIOS Bochs 01/01/2011
[   75.815465]   8810fe60 812cd2d4 8810ff28
[   75.816140]  811741f5 41b58ab3 8292ea1d 811740ba
[   75.816787]  8800170e615c 0008 8810ff38 8810fed0
[   75.817426] Call Trace:
[   75.817659]  [] dump_stack+0x19/0x1b
[   75.818126]  [] panic+0x13b/0x27a
[   75.818517]  [] ? phys_to_pfn_t+0x1d/0x1d
[   75.818960]  [] kernel_init+0xf4/0xfb
[   75.819376]  [] ret_from_fork+0x1f/0x40
[   75.819804]  [] ? rest_init+0x13d/0x13d
[   75.820008] Kernel Offset: disabled
[   75.820008] ---[ end Kernel panic - not syncing: No working init
found.  Try passing init= option to kernel. See Linux
Documentation/init.txt for guidance.

I'm using the following commandline to run QEMU:

  $ sudo qemu-system-x86_64 -hda ${THISDIR}/wheezy.img -m 500M -smp 2 \
      -net user,hostfwd=tcp:127.0.0.1:10025-:22 -net nic \
      -kernel $KASAN_SRC_DIR/arch/x86/boot/bzImage \
      -append "console=ttyS0 root=/dev/sda debug earlyprintk=serial slub_debug=FPZU" \
      -nographic -pidfile vm_pid -enable-kvm -s # -S -gdb unix:gdb,server,nowait

I have also built test_rhashtable.c as a module and tried to load/unload
it many times in a row, but the report didn't reproduce for me either.
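Roughly the following loop (just a sketch: it assumes the test is built as a
module via CONFIG_TEST_RHASHTABLE=m and installed where modprobe can find it,
and the iteration count is arbitrary):

  # repeatedly load and unload the rhashtable self-test module
  $ for i in $(seq 1 100); do modprobe test_rhashtable && rmmod test_rhashtable; done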

If it's still reproducible, may I ask you to run the output through
https://github.com/google/sanitizers/blob/master/address-sanitizer/tools/kasan_symbolize.py ?
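Something along these lines should do (a sketch: it assumes the console log is
saved to dmesg.log, that the script reads the report from stdin, and that a
matching kernel build is available for symbolization; the exact options may
differ depending on the script version):

  # filter the raw console log through the KASAN symbolizer
  $ python kasan_symbolize.py < dmesg.log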

TIA,
Alex

On Wed, Jul 13, 2016 at 3:29 AM, kernel test robot wrote:
>
> FYI, we noticed the following commit:
>
> https://github.com/0day-ci/linux 
> Alexander-Potapenko/mm-kasan-switch-SLUB-to-stackdepot-enable-memory-quarantine-for-SLUB/20160708-183858
> commit 7392becb255cd6c0e7bedaabd58f638b732772f2 ("mm, kasan: switch SLUB to 
> stackdepot, enable memory quarantine for SLUB")
>
> in testcase: boot
>
> on test machine: 2 threads qemu-system-x86_64 -enable-kvm -cpu 
> Haswell,+smep,+smap with 1G memory
>
> caused below changes:
>
>
> +------------------------------------------------------------------------------------------+----------+------------+
> |                                                                                          | v4.7-rc6 | 7392becb25 |
> +------------------------------------------------------------------------------------------+----------+------------+
> | boot_successes                                                                           | 0        | 0          |
> | boot_failures                                                                            | 61       | 36         |
> | BUG:workqueue_lockup-pool                                                                | 58       | 14         |
> | BUG:workqueue_lockup-pool_cpus=#cpus=#node=#node=#flags=#nice=#flags=#nice=#stuck_for#s  | 58       | 14         |
> | BUG:workqueue_lockup-pool_cpus=#cpus=#flags=#nice=#flags=#nice=#stuck_for#s              | 12       | 1          |
> | Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode=                                | 9        |            |
> | BUG:KASAN:slab-out-of-bounds_in_bucket_table_alloc_at_addr                               | 0        | 22         |
> | backtrace:threadfunc                                                                     | 0        | 22         |
> | BUG:KASAN:slab-out-of-bounds_in                                                          | 0        | 1          |
> +------------------------------------------------------------------------------------------+----------+------------+
>
>
>
> [   22.095742] Testing concurrent rhashtable access from 10 threads
> [   22.756188] ==================================================================
> [   22.759097] BUG: KASAN: slab-out-of-bounds in bucket_table_alloc+0x79/0x1a0 at addr ffff88003e400000

Re: [lkp] [mm, kasan] 7392becb25: BUG: KASAN: slab-out-of-bounds in bucket_table_alloc+0x79/0x1a0 at addr ffff88003e400000

2016-07-13 Thread Alexander Potapenko
Andrey, Joonsoo: FYI

On Wed, Jul 13, 2016 at 10:57 AM, Alexander Potapenko  wrote:
> [...]

[lkp] [mm, kasan] 7392becb25: BUG: KASAN: slab-out-of-bounds in bucket_table_alloc+0x79/0x1a0 at addr ffff88003e400000

2016-07-12 Thread kernel test robot

FYI, we noticed the following commit:

https://github.com/0day-ci/linux 
Alexander-Potapenko/mm-kasan-switch-SLUB-to-stackdepot-enable-memory-quarantine-for-SLUB/20160708-183858
commit 7392becb255cd6c0e7bedaabd58f638b732772f2 ("mm, kasan: switch SLUB to 
stackdepot, enable memory quarantine for SLUB")

in testcase: boot

on test machine: 2 threads qemu-system-x86_64 -enable-kvm -cpu 
Haswell,+smep,+smap with 1G memory

caused below changes:


+------------------------------------------------------------------------------------------+----------+------------+
|                                                                                          | v4.7-rc6 | 7392becb25 |
+------------------------------------------------------------------------------------------+----------+------------+
| boot_successes                                                                           | 0        | 0          |
| boot_failures                                                                            | 61       | 36         |
| BUG:workqueue_lockup-pool                                                                | 58       | 14         |
| BUG:workqueue_lockup-pool_cpus=#cpus=#node=#node=#flags=#nice=#flags=#nice=#stuck_for#s  | 58       | 14         |
| BUG:workqueue_lockup-pool_cpus=#cpus=#flags=#nice=#flags=#nice=#stuck_for#s              | 12       | 1          |
| Kernel_panic-not_syncing:Attempted_to_kill_init!exitcode=                                | 9        |            |
| BUG:KASAN:slab-out-of-bounds_in_bucket_table_alloc_at_addr                               | 0        | 22         |
| backtrace:threadfunc                                                                     | 0        | 22         |
| BUG:KASAN:slab-out-of-bounds_in                                                          | 0        | 1          |
+------------------------------------------------------------------------------------------+----------+------------+



[   22.095742] Testing concurrent rhashtable access from 10 threads
[   22.756188] ==================================================================
[   22.759097] BUG: KASAN: slab-out-of-bounds in bucket_table_alloc+0x79/0x1a0 at addr ffff88003e400000
[   22.762225] Write of size 4 by task rhashtable_thra/165
[   22.764303] CPU: 0 PID: 165 Comm: rhashtable_thra Not tainted 4.7.0-rc6-1-g7392bec #1
[   22.766875] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Debian-1.8.2-1 04/01/2014
[   22.769722]   8800165f7be8 812cd64c 8800165f7c58
[   22.772033]  811c4b96 812ec3c8 0246 88082300
[   22.774265]  02089220 02089220 8800165f7c68 811c2379
[   22.776571] Call Trace:
[   22.777355]  [] dump_stack+0x19/0x1b
[   22.779220]  [] kasan_report+0x2d7/0x4ed
[   22.780862]  [] ? bucket_table_alloc+0x79/0x1a0
[   22.782668]  [] ? __kmalloc+0x177/0x1b0
[   22.784273]  [] __asan_store4+0x6e/0x70
[   22.785885]  [] bucket_table_alloc+0x79/0x1a0
[   22.787660]  [] rhashtable_insert_rehash+0xc0/0x13f
[   22.789577]  [] insert_retry+0x2fa/0x5bc
[   22.791705]  [] ? trace_hardirqs_on+0xd/0xf
[   22.793425]  [] threadfunc+0xc8/0x68c
[   22.794987]  [] ? __schedule+0x5fe/0x73f
[   22.796629]  [] ? insert_retry+0x5bc/0x5bc
[   22.798810]  [] kthread+0x18d/0x19c
[   22.800319]  [] ? __kthread_parkme+0xb0/0xb0
[   22.802048]  [] ? finish_task_switch+0x1ac/0x224
[   22.804976]  [] ret_from_fork+0x1f/0x40
[   22.807662]  [] ? __kthread_parkme+0xb0/0xb0