On 06/16/2017 11:22 PM, Laurent Dufour wrote:
> kworker/32:1/819 is trying to acquire lock:
>  (&vma->vm_sequence){+.+...}, at: [<c0000000002f20e0>]
> zap_page_range_single+0xd0/0x1a0
> 
> but task is already holding lock:
>  (&mapping->i_mmap_rwsem){++++..}, at: [<c0000000002f229c>]
> unmap_mapping_range+0x7c/0x160
> 
> which lock already depends on the new lock.
> 
> the existing dependency chain (in reverse order) is:
> 
> -> #2 (&mapping->i_mmap_rwsem){++++..}:
>        down_write+0x84/0x130
>        __vma_adjust+0x1f4/0xa80
>        __split_vma.isra.2+0x174/0x290
>        do_munmap+0x13c/0x4e0
>        vm_munmap+0x64/0xb0
>        elf_map+0x11c/0x130
>        load_elf_binary+0x6f0/0x15f0
>        search_binary_handler+0xe0/0x2a0
>        do_execveat_common.isra.14+0x7fc/0xbe0
>        call_usermodehelper_exec_async+0x14c/0x1d0
>        ret_from_kernel_thread+0x5c/0x68
> 
> -> #1 (&vma->vm_sequence/1){+.+...}:
>        __vma_adjust+0x124/0xa80
>        __split_vma.isra.2+0x174/0x290
>        do_munmap+0x13c/0x4e0
>        vm_munmap+0x64/0xb0
>        elf_map+0x11c/0x130
>        load_elf_binary+0x6f0/0x15f0
>        search_binary_handler+0xe0/0x2a0
>        do_execveat_common.isra.14+0x7fc/0xbe0
>        call_usermodehelper_exec_async+0x14c/0x1d0
>        ret_from_kernel_thread+0x5c/0x68
> 
> -> #0 (&vma->vm_sequence){+.+...}:
>        lock_acquire+0xf4/0x310
>        unmap_page_range+0xcc/0xfa0
>        zap_page_range_single+0xd0/0x1a0
>        unmap_mapping_range+0x138/0x160
>        truncate_pagecache+0x50/0xa0
>        put_aio_ring_file+0x48/0xb0
>        aio_free_ring+0x40/0x1b0
>        free_ioctx+0x38/0xc0
>        process_one_work+0x2cc/0x8a0
>        worker_thread+0xac/0x580
>        kthread+0x164/0x1b0
>        ret_from_kernel_thread+0x5c/0x68
> 
> other info that might help us debug this:
> 
> Chain exists of:
>   &vma->vm_sequence --> &vma->vm_sequence/1 --> &mapping->i_mmap_rwsem
> 
>  Possible unsafe locking scenario:
> 
>        CPU0                    CPU1
>        ----                    ----
>   lock(&mapping->i_mmap_rwsem);
>                                lock(&vma->vm_sequence/1);
>                                lock(&mapping->i_mmap_rwsem);
>   lock(&vma->vm_sequence);
> 
>  *** DEADLOCK ***
> 
> To fix this, we must take the vm_sequence lock after the mapping's
> i_mmap_rwsem in __vma_adjust().
> 
> Signed-off-by: Laurent Dufour <lduf...@linux.vnet.ibm.com>

Shouldn't this be folded back into the previous patch? It fixes an
issue introduced by that patch.
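
For reference, the reordering described in the commit message would look
roughly like the sketch below. This is not the actual patch: the function
name is invented, the body of __vma_adjust() is elided, and vm_sequence is
the per-VMA seqcount introduced earlier in this series, so it does not
exist in a mainline tree. The only point is the nesting order, with
i_mmap_rwsem taken outermost:

#include <linux/fs.h>
#include <linux/lockdep.h>
#include <linux/mm.h>
#include <linux/seqlock.h>

/* Sketch only -- vm_sequence comes from this patch series. */
static void vma_adjust_ordering_sketch(struct vm_area_struct *vma,
				       struct vm_area_struct *next)
{
	struct address_space *mapping = NULL;

	if (vma->vm_file)
		mapping = vma->vm_file->f_mapping;

	/* Take i_mmap_rwsem first, matching the unmap_mapping_range() path. */
	if (mapping)
		i_mmap_lock_write(mapping);

	/*
	 * Only then enter the vm_sequence write side, so vm_sequence
	 * consistently nests inside i_mmap_rwsem.  The "/1" in the
	 * lockdep report is the nested subclass used for 'next'.
	 */
	write_seqcount_begin(&vma->vm_sequence);
	if (next)
		write_seqcount_begin_nested(&next->vm_sequence,
					    SINGLE_DEPTH_NESTING);

	/* ... adjust vm_start/vm_end/vm_pgoff and the trees here ... */

	if (next)
		write_seqcount_end(&next->vm_sequence);
	write_seqcount_end(&vma->vm_sequence);

	if (mapping)
		i_mmap_unlock_write(mapping);
}

With that order, both the munmap path (#1/#2 above) and the truncate/zap
path (#0) take i_mmap_rwsem before vm_sequence, so the inversion lockdep
reports can no longer occur.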
