[btrfs] lockdep splat

2017-02-17 Thread Mike Galbraith
Greetings,

Running ltp on master.today, I received the splat (from hell) below.

[ 5015.128458] =
[ 5015.128458] [ INFO: possible irq lock inversion dependency detected ]
[ 5015.128458] 4.10.0-default #119 Tainted: GE  
[ 5015.128458] -
[ 5015.128458] khugepaged/896 just changed the state of lock:
[ 5015.128458]  (&delayed_node->mutex){+.+.-.}, at: [] 
__btrfs_release_delayed_node+0x41/0x2d0 [btrfs]
[ 5015.128458] but this lock took another, RECLAIM_FS-unsafe lock in the past:
[ 5015.128458]  (pcpu_alloc_mutex){+.+.+.}
[ 5015.128458] 
[ 5015.128458] 
[ 5015.128458] and interrupts could create inverse lock ordering between them.
[ 5015.128458] 
[ 5015.128458] 
[ 5015.128458] other info that might help us debug this:
[ 5015.128458] Chain exists of:
[ 5015.128458]   &delayed_node->mutex --> &found->groups_sem --> 
pcpu_alloc_mutex
[ 5015.128458] 
[ 5015.128458]  Possible interrupt unsafe locking scenario:
[ 5015.128458] 
[ 5015.128458]        CPU0                    CPU1
[ 5015.128458]        ----                    ----
[ 5015.128458]   lock(pcpu_alloc_mutex);
[ 5015.128458]                                local_irq_disable();
[ 5015.128458]                                lock(&delayed_node->mutex);
[ 5015.128458]                                lock(&found->groups_sem);
[ 5015.128458]   <Interrupt>
[ 5015.128458]     lock(&delayed_node->mutex);
[ 5015.128458] 
[ 5015.128458]  *** DEADLOCK ***
[ 5015.128458] 
[ 5015.128458] 2 locks held by khugepaged/896:
[ 5015.128458]  #0:  (shrinker_rwsem){..}, at: [] 
shrink_slab+0x7d/0x650
[ 5015.128458]  #1:  (&type->s_umount_key#26){..}, at: [] 
trylock_super+0x1b/0x50
[ 5015.128458] 
[ 5015.128458] the shortest dependencies between 2nd lock and 1st lock:
[ 5015.128458]-> (pcpu_alloc_mutex){+.+.+.} ops: 4652 {
[ 5015.128458]   HARDIRQ-ON-W at:
[ 5015.128458]   __lock_acquire+0x8e6/0x1550
[ 5015.128458]   lock_acquire+0xbd/0x220
[ 5015.128458]   mutex_lock_nested+0x67/0x6a0
[ 5015.128458]   pcpu_alloc+0x1c0/0x600
[ 5015.128458]   __alloc_percpu+0x15/0x20
[ 5015.128458]   alloc_kmem_cache_cpus.isra.56+0x2b/0xa0
[ 5015.128458]   __do_tune_cpucache+0x30/0x210
[ 5015.128458]   do_tune_cpucache+0x2a/0xd0
[ 5015.128458]   enable_cpucache+0x61/0x110
[ 5015.128458]   kmem_cache_init_late+0x41/0x76
[ 5015.128458]   start_kernel+0x352/0x4cd
[ 5015.128458]   x86_64_start_reservations+0x2a/0x2c
[ 5015.128458]   x86_64_start_kernel+0x13d/0x14c
[ 5015.128458]   verify_cpu+0x0/0xfc
[ 5015.128458]   SOFTIRQ-ON-W at:
[ 5015.128458]   __lock_acquire+0x283/0x1550
[ 5015.128458]   lock_acquire+0xbd/0x220
[ 5015.128458]   mutex_lock_nested+0x67/0x6a0
[ 5015.128458]   pcpu_alloc+0x1c0/0x600
[ 5015.128458]   __alloc_percpu+0x15/0x20
[ 5015.128458]   alloc_kmem_cache_cpus.isra.56+0x2b/0xa0
[ 5015.128458]   __do_tune_cpucache+0x30/0x210
[ 5015.128458]   do_tune_cpucache+0x2a/0xd0
[ 5015.128458]   enable_cpucache+0x61/0x110
[ 5015.128458]   kmem_cache_init_late+0x41/0x76
[ 5015.128458]   start_kernel+0x352/0x4cd
[ 5015.128458]   x86_64_start_reservations+0x2a/0x2c
[ 5015.128458]   x86_64_start_kernel+0x13d/0x14c
[ 5015.128458]   verify_cpu+0x0/0xfc
[ 5015.128458]   RECLAIM_FS-ON-W at:
[ 5015.128458]  mark_held_locks+0x66/0x90
[ 5015.128458]  lockdep_trace_alloc+0x6f/0xd0
[ 5015.128458]  __alloc_pages_nodemask+0x81/0x370
[ 5015.128458]  pcpu_populate_chunk+0xac/0x340
[ 5015.128458]  pcpu_alloc+0x4f8/0x600
[ 5015.128458]  __alloc_percpu+0x15/0x20
[ 5015.128458]  perf_pmu_register+0xc6/0x3c0
[ 5015.128458]  init_hw_perf_events+0x513/0x56d
[ 5015.128458]  do_one_initcall+0x51/0x1c0
[ 5015.128458]  kernel_init_freeable+0x146/0x28e
[ 5015.128458]  kernel_init+0xe/0x110
[ 5015.128458]  ret_from_fork+0x31/0x40
[ 5015.128458]   INITIAL USE at:
[ 5015.128458]  __lock_acquire+0x2ce/0x1550
[ 5015.128458]  lock_acquire+0xbd/0x220
[ 5015.128458]  
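
The two-CPU diagram in the report above is lockdep's shorthand for an ordering
inversion: as I read the chain, one context can hold pcpu_alloc_mutex and end
up in reclaim needing the delayed node mutex, while another context holds the
delayed node mutex on a path that eventually wants pcpu_alloc_mutex. Purely as
a userspace analogue of that shape (pthreads, made-up lock names, not kernel
code and not the actual btrfs/percpu paths), the deadlock looks like this:

/*
 * Minimal sketch of an AB/BA ordering inversion: two contexts take the
 * same two locks in opposite order, so once each holds its first lock,
 * neither can make progress.  Hypothetical locks only.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* stand-in for pcpu_alloc_mutex */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* stand-in for the delayed node mutex */

static void *ctx0(void *unused)
{
	pthread_mutex_lock(&lock_a);	/* CPU0: lock(pcpu_alloc_mutex); */
	sleep(1);			/* widen the race window */
	pthread_mutex_lock(&lock_b);	/* ...and now needs the other lock */
	puts("ctx0 took both locks");
	pthread_mutex_unlock(&lock_b);
	pthread_mutex_unlock(&lock_a);
	return NULL;
}

static void *ctx1(void *unused)
{
	pthread_mutex_lock(&lock_b);	/* CPU1: lock(&delayed_node->mutex); */
	sleep(1);
	pthread_mutex_lock(&lock_a);	/* ...and now needs pcpu_alloc_mutex */
	puts("ctx1 took both locks");
	pthread_mutex_unlock(&lock_a);
	pthread_mutex_unlock(&lock_b);
	return NULL;
}

int main(void)
{
	pthread_t t0, t1;

	pthread_create(&t0, NULL, ctx0, NULL);
	pthread_create(&t1, NULL, ctx1, NULL);
	/* With the sleeps above this wedges: each thread holds one lock and
	 * blocks forever on the other -- the situation lockdep warns about
	 * before it can actually happen. */
	pthread_join(t0, NULL);
	pthread_join(t1, NULL);
	return 0;
}

Build with cc -pthread; the two threads reliably deadlock, which is what
lockdep is trying to flag at annotation time rather than at runtime.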

Re: master - btrfs lockdep splat

2017-01-12 Thread Mike Galbraith
On Thu, 2017-01-12 at 10:44 -0800, Liu Bo wrote:
> On Thu, Jan 12, 2017 at 07:12:12PM +0100, Mike Galbraith wrote:
> > Greetings,
> > 
> > I wanted to do some -rt testing, but it seems non-rt kernels aren't
> > lockdep clean with btrfs /, making -rt testing a bit premature.
> > 
> > (hm, 28a235931 Btrfs: fix lockdep warning on deadlock against an inode's 
> > log mutex)
> 
> It's a false-positive lockdep warning rather than a real deadlock, and a
> patch[1] has been queued to fix it.

(yeah, just stops lockdep from perhaps finding something real)

> [1]: https://patchwork.kernel.org/patch/9473431/

Yup, virtual box seems to be a happy camper now.  Thanks.

-Mike



Re: master - btrfs lockdep splat

2017-01-12 Thread Liu Bo
On Thu, Jan 12, 2017 at 07:12:12PM +0100, Mike Galbraith wrote:
> Greetings,
> 
> > I wanted to do some -rt testing, but it seems non-rt kernels aren't
> > lockdep clean with btrfs /, making -rt testing a bit premature.
> 
> (hm, 28a235931 Btrfs: fix lockdep warning on deadlock against an inode's log 
> mutex)

It's a false-positive lockdep warning rather than a real deadlock, and a
patch[1] has been queued to fix it.

[1]: https://patchwork.kernel.org/patch/9473431/

Thanks,

-liubo
>  
> [  876.622587] =
> [  876.622588] [ INFO: possible recursive locking detected ]
> [  876.622589] 4.10.0-master #36 Tainted: GE  
> [  876.622590] -
> [  876.622591] vi/3364 is trying to acquire lock:
> [  876.622592]  (&ei->log_mutex){+.+...}, at: [] 
> btrfs_log_inode+0x13c/0xbd0 [btrfs]
> [  876.622628] but task is already holding lock:
> [  876.622629]  (&ei->log_mutex){+.+...}, at: [] 
> btrfs_log_inode+0x13c/0xbd0 [btrfs]
> [  876.622641] other info that might help us debug this:
> [  876.622642]  Possible unsafe locking scenario:
> [  876.622643]        CPU0
> [  876.622644]        ----
> [  876.622644]   lock(&ei->log_mutex);
> [  876.622648]   lock(&ei->log_mutex);
> [  876.622649] *** DEADLOCK ***
> [  876.622650]  May be due to missing lock nesting notation
> [  876.622651] 3 locks held by vi/3364:
> [  876.622651]  #0:  (&sb->s_type->i_mutex_key#11){+.+.+.}, at: 
> [] btrfs_sync_file+0x154/0x480 [btrfs]
> [  876.622664]  #1:  (sb_internal){.+.+..}, at: [] 
> start_transaction+0x2a7/0x540 [btrfs]
> [  876.622674]  #2:  (&ei->log_mutex){+.+...}, at: [] 
> btrfs_log_inode+0x13c/0xbd0 [btrfs]
> [  876.622685] stack backtrace:
> [  876.622687] CPU: 3 PID: 3364 Comm: vi Tainted: GE   
> 4.10.0-master #36
> [  876.622688] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
> rel-1.8.1-0-g4adadbd-20161202_174313-build11a 04/01/2014
> [  876.622689] Call Trace:
> [  876.622698]  dump_stack+0x85/0xc9
> [  876.622704]  __lock_acquire+0x9f9/0x1550
> [  876.622715]  ? __btrfs_release_delayed_node+0x79/0x2d0 [btrfs]
> [  876.622717]  lock_acquire+0xbd/0x200
> [  876.622726]  ? btrfs_log_inode+0x13c/0xbd0 [btrfs]
> [  876.622732]  mutex_lock_nested+0x69/0x660
> [  876.622741]  ? btrfs_log_inode+0x13c/0xbd0 [btrfs]
> [  876.622750]  ? __btrfs_release_delayed_node+0x79/0x2d0 [btrfs]
> [  876.622759]  ? btrfs_commit_inode_delayed_inode+0xeb/0x130 [btrfs]
> [  876.622767]  btrfs_log_inode+0x13c/0xbd0 [btrfs]
> [  876.622771]  ? __might_sleep+0x4a/0x90
> [  876.622781]  ? btrfs_i_callback+0x20/0x20 [btrfs]
> [  876.622791]  ? free_extent_buffer+0x4b/0x90 [btrfs]
> [  876.622799]  btrfs_log_inode+0x572/0xbd0 [btrfs]
> [  876.622808]  btrfs_log_inode_parent+0x26a/0x9b0 [btrfs]
> [  876.622812]  ? dget_parent+0x77/0x170
> [  876.622821]  btrfs_log_dentry_safe+0x62/0x80 [btrfs]
> [  876.622830]  btrfs_sync_file+0x2eb/0x480 [btrfs]
> [  876.622834]  vfs_fsync_range+0x3d/0xb0
> [  876.622836]  ? trace_hardirqs_on_caller+0xf9/0x1c0
> [  876.622837]  do_fsync+0x3d/0x70
> [  876.622839]  SyS_fsync+0x10/0x20
> [  876.622840]  entry_SYSCALL_64_fastpath+0x1f/0xc2
> [  876.622842] RIP: 0033:0x7f7fbe3da290
> [  876.622843] RSP: 002b:7ffe2778f0b8 EFLAGS: 0246 ORIG_RAX: 
> 004a
> [  876.622844] RAX: ffda RBX: 0003 RCX: 
> 7f7fbe3da290
> [  876.622845] RDX: 103d RSI: 0143e5d0 RDI: 
> 0003
> [  876.622846] RBP: 01285f10 R08: 0143e5d0 R09: 
> 
> [  876.622847] R10:  R11: 0246 R12: 
> 
> [  876.622847] R13: 2000 R14: 0001 R15: 
> 012821a0
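
The "May be due to missing lock nesting notation" hint in the splat above is
about lockdep subclass annotation. The actual queued fix is the patch at the
URL given earlier; purely as a generic illustration of what such an annotation
looks like (hypothetical structure and names, not btrfs code), nesting two
mutexes of the same lock class without tripping the recursive-locking check is
done with mutex_lock_nested():

/* Generic lockdep nesting-annotation sketch -- hypothetical names, NOT the
 * btrfs patch referenced above.  The annotation only has an effect with
 * CONFIG_DEBUG_LOCK_ALLOC / lockdep enabled. */
#include <linux/mutex.h>

struct demo_obj {
	struct mutex lock;
};

/* Subclass indices must stay below MAX_LOCKDEP_SUBCLASSES (8). */
enum { DEMO_LOCK_PARENT = 0, DEMO_LOCK_CHILD = 1 };

static void demo_obj_init(struct demo_obj *obj)
{
	mutex_init(&obj->lock);
}

/*
 * Both mutexes come from the same mutex_init() site, so they share one
 * lockdep class; taking the second with plain mutex_lock() would produce
 * a "possible recursive locking detected" report like the one quoted
 * above even though the two objects are distinct.  Giving the inner
 * acquisition its own subclass tells lockdep the nesting is intended.
 */
static void demo_lock_pair(struct demo_obj *parent, struct demo_obj *child)
{
	mutex_lock_nested(&parent->lock, DEMO_LOCK_PARENT);
	mutex_lock_nested(&child->lock, DEMO_LOCK_CHILD);

	/* ... operate on both objects ... */

	mutex_unlock(&child->lock);
	mutex_unlock(&parent->lock);
}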


master - btrfs lockdep splat

2017-01-12 Thread Mike Galbraith
Greetings,

I wanted to do some -rt testing, but it seems non-rt kernels aren't
lockdep clean with btrfs /, making -rt testing a bit premature.

(hm, 28a235931 Btrfs: fix lockdep warning on deadlock against an inode's log 
mutex)
 
[  876.622587] =
[  876.622588] [ INFO: possible recursive locking detected ]
[  876.622589] 4.10.0-master #36 Tainted: GE  
[  876.622590] -
[  876.622591] vi/3364 is trying to acquire lock:
[  876.622592]  (&ei->log_mutex){+.+...}, at: [] 
btrfs_log_inode+0x13c/0xbd0 [btrfs]
[  876.622628] but task is already holding lock:
[  876.622629]  (&ei->log_mutex){+.+...}, at: [] 
btrfs_log_inode+0x13c/0xbd0 [btrfs]
[  876.622641] other info that might help us debug this:
[  876.622642]  Possible unsafe locking scenario:
[  876.622643]        CPU0
[  876.622644]        ----
[  876.622644]   lock(&ei->log_mutex);
[  876.622648]   lock(&ei->log_mutex);
[  876.622649] *** DEADLOCK ***
[  876.622650]  May be due to missing lock nesting notation
[  876.622651] 3 locks held by vi/3364:
[  876.622651]  #0:  (&sb->s_type->i_mutex_key#11){+.+.+.}, at: 
[] btrfs_sync_file+0x154/0x480 [btrfs]
[  876.622664]  #1:  (sb_internal){.+.+..}, at: [] 
start_transaction+0x2a7/0x540 [btrfs]
[  876.622674]  #2:  (&ei->log_mutex){+.+...}, at: [] 
btrfs_log_inode+0x13c/0xbd0 [btrfs]
[  876.622685] stack backtrace:
[  876.622687] CPU: 3 PID: 3364 Comm: vi Tainted: GE   
4.10.0-master #36
[  876.622688] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS 
rel-1.8.1-0-g4adadbd-20161202_174313-build11a 04/01/2014
[  876.622689] Call Trace:
[  876.622698]  dump_stack+0x85/0xc9
[  876.622704]  __lock_acquire+0x9f9/0x1550
[  876.622715]  ? __btrfs_release_delayed_node+0x79/0x2d0 [btrfs]
[  876.622717]  lock_acquire+0xbd/0x200
[  876.622726]  ? btrfs_log_inode+0x13c/0xbd0 [btrfs]
[  876.622732]  mutex_lock_nested+0x69/0x660
[  876.622741]  ? btrfs_log_inode+0x13c/0xbd0 [btrfs]
[  876.622750]  ? __btrfs_release_delayed_node+0x79/0x2d0 [btrfs]
[  876.622759]  ? btrfs_commit_inode_delayed_inode+0xeb/0x130 [btrfs]
[  876.622767]  btrfs_log_inode+0x13c/0xbd0 [btrfs]
[  876.622771]  ? __might_sleep+0x4a/0x90
[  876.622781]  ? btrfs_i_callback+0x20/0x20 [btrfs]
[  876.622791]  ? free_extent_buffer+0x4b/0x90 [btrfs]
[  876.622799]  btrfs_log_inode+0x572/0xbd0 [btrfs]
[  876.622808]  btrfs_log_inode_parent+0x26a/0x9b0 [btrfs]
[  876.622812]  ? dget_parent+0x77/0x170
[  876.622821]  btrfs_log_dentry_safe+0x62/0x80 [btrfs]
[  876.622830]  btrfs_sync_file+0x2eb/0x480 [btrfs]
[  876.622834]  vfs_fsync_range+0x3d/0xb0
[  876.622836]  ? trace_hardirqs_on_caller+0xf9/0x1c0
[  876.622837]  do_fsync+0x3d/0x70
[  876.622839]  SyS_fsync+0x10/0x20
[  876.622840]  entry_SYSCALL_64_fastpath+0x1f/0xc2
[  876.622842] RIP: 0033:0x7f7fbe3da290
[  876.622843] RSP: 002b:7ffe2778f0b8 EFLAGS: 0246 ORIG_RAX: 
004a
[  876.622844] RAX: ffda RBX: 0003 RCX: 7f7fbe3da290
[  876.622845] RDX: 103d RSI: 0143e5d0 RDI: 0003
[  876.622846] RBP: 01285f10 R08: 0143e5d0 R09: 
[  876.622847] R10:  R11: 0246 R12: 
[  876.622847] R13: 2000 R14: 0001 R15: 012821a0

