** Changed in: linux-linaro
       Status: New => Incomplete

** Project changed: linux-linaro => linaro-landing-team-ti

-- 
You received this bug notification because you are a member of TI OMAP
Developers, which is subscribed to linaro-landing-team-ti.
https://bugs.launchpad.net/bugs/862002

Title:
  Inconsistent lock state

Status in Linaro Texas Instruments Landing Team:
  Incomplete

Bug description:
  I'm seeing this when trying to download a tarball of a rootfs and
  write it to sd card in several lava tests.  This is on the master
  image (the one that we hope is stable) not the test image, so it never
  makes it through deployment.  It *seems* that maybe it happens more
  frequently with larger images such as ubuntu-desktop.

  
  root@master:~# [rc=0]: wget -qO- http://192.168.1.10/images/tmp/tmpvNO4wc/root.tgz | tar --numeric-owner -C /mnt/root -xzf -
  [ 9251.570739]
  [ 9251.570770] =================================
  [ 9251.584320] [ INFO: inconsistent lock state ]
  [ 9251.592620] 3.0.0-1005-linaro-omap #7~ppa~natty-Ubuntu
  [ 9251.601745] ---------------------------------
  [ 9251.609985] inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
  [ 9251.620513] kswapd0/428 [HC0[0]:SC0[0]:HE1:SE1] takes:
  [ 9251.629516] (&sb->s_type->i_mutex_key#10){+.+.?.}, at: [<80104f39>] ext4_evict_inode+0x61/0x264
  [ 9251.645996] {RECLAIM_FS-ON-W} state was registered at:
  [ 9251.655059] [<8005c239>] mark_held_locks+0x45/0x5c
  [ 9251.664001] [<8005c693>] lockdep_trace_alloc+0x73/0x9c
  [ 9251.673248] [<800adaaf>] kmem_cache_alloc+0x1f/0xfc
  [ 9251.682128] [<800bd8c5>] d_alloc+0x1d/0x13c
  [ 9251.690185] [<800b67b7>] d_alloc_and_lookup+0x19/0x44
  [ 9251.699096] [<800b7b73>] do_lookup+0x127/0x1d0
  [ 9251.707336] [<800b829f>] path_lookupat+0xab/0x416
  [ 9251.715759] [<800b8625>] do_path_lookup+0x1b/0x46
  [ 9251.724121] [<800b88e3>] kern_path+0x1f/0x32
  [ 9251.731994] [<800c3b1f>] do_mount+0x55/0x176
  [ 9251.739807] [<800c3e73>] sys_mount+0x57/0x80
  [ 9251.747619] [<8000c2c1>] ret_fast_syscall+0x1/0x50
  [ 9251.756011] irq event stamp: 291334
  [ 9251.762725] hardirqs last enabled at (291334): [<80073651>] __call_rcu+0xcf/0xe2
  [ 9251.776763] hardirqs last disabled at (291333): [<800735a5>] __call_rcu+0x23/0xe2
  [ 9251.790832] softirqs last enabled at (288839): [<8003b2e7>] irq_exit+0x43/0x84
  [ 9251.805114] softirqs last disabled at (288830): [<8003b2e7>] irq_exit+0x43/0x84
  [ 9251.819641]
  [ 9251.819671] other info that might help us debug this:
  [ 9251.833282] Possible unsafe locking scenario:
  [ 9251.833282]
  [ 9251.846191]        CPU0
  [ 9251.851959]        ----
  [ 9251.857574]   lock(&sb->s_type->i_mutex_key);
  [ 9251.865203]   <Interrupt>
  [ 9251.870880]     lock(&sb->s_type->i_mutex_key);
  [ 9251.878631]
  [ 9251.878631] *** DEADLOCK ***
  [ 9251.878631]
  [ 9251.893402] 2 locks held by kswapd0/428:
  [ 9251.900360] #0: (shrinker_rwsem){++++..}, at: [<80095a47>] shrink_slab+0x1f/0x140
  [ 9251.914398] #1: (iprune_sem){.+.+.-}, at: [<800bfc5d>] prune_icache+0x23/0x1e6
  [ 9251.928527]
  [ 9251.928558] stack backtrace:
  [ 9251.939361] [<800110ad>] (unwind_backtrace+0x1/0x90) from [<8005adc7>] (print_usage_bug+0x10f/0x150)
  [ 9251.955535] [<8005adc7>] (print_usage_bug+0x10f/0x150) from [<8005ae9d>] (mark_lock_irq+0x95/0x1b8)
  [ 9251.971710] [<8005ae9d>] (mark_lock_irq+0x95/0x1b8) from [<8005b195>] (mark_lock+0x1d5/0x2ac)
  [ 9251.987335] [<8005b195>] (mark_lock+0x1d5/0x2ac) from [<8005b345>] (mark_irqflags+0xd9/0xe8)
  [ 9252.002868] [<8005b345>] (mark_irqflags+0xd9/0xe8) from [<8005b78b>] (__lock_acquire+0x437/0x590)
  [ 9252.018890] [<8005b78b>] (__lock_acquire+0x437/0x590) from [<8005bcf5>] (lock_acquire+0xad/0xcc)
  [ 9252.035186] [<8005bcf5>] (lock_acquire+0xad/0xcc) from [<803e15bf>] (mutex_lock_nested+0x4b/0x280)
  [ 9252.051971] [<803e15bf>] (mutex_lock_nested+0x4b/0x280) from [<80104f39>] (ext4_evict_inode+0x61/0x264)
  [ 9252.069763] [<80104f39>] (ext4_evict_inode+0x61/0x264) from [<800bf7a1>] (evict+0x65/0xec)
  [ 9252.086761] [<800bf7a1>] (evict+0x65/0xec) from [<800bf851>] (dispose_list+0x29/0x30)
  [ 9252.103576] [<800bf851>] (dispose_list+0x29/0x30) from [<800bfdf9>] (prune_icache+0x1bf/0x1e6)
  [ 9252.121765] [<800bfdf9>] (prune_icache+0x1bf/0x1e6) from [<800bfe39>] (shrink_icache_memory+0x19/0x38)
  [ 9252.141052] [<800bfe39>] (shrink_icache_memory+0x19/0x38) from [<80095ac7>] (shrink_slab+0x9f/0x140)
  [ 9252.160156] [<80095ac7>] (shrink_slab+0x9f/0x140) from [<800970b9>] (balance_pgdat+0x1bd/0x3dc)
  [ 9252.178955] [<800970b9>] (balance_pgdat+0x1bd/0x3dc) from [<8009740f>] (kswapd+0x137/0x150)
  [ 9252.197753] [<8009740f>] (kswapd+0x137/0x150) from [<8004b823>] (kthread+0x57/0x68)
  [ 9252.215850] [<8004b823>] (kthread+0x57/0x68) from [<8000ccc5>] (kernel_thread_exit+0x1/0x6)
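
  For context: the trace shows a reclaim-vs-filesystem lock inversion.
  lockdep earlier saw this inode mutex held across a GFP_FS-capable
  allocation (the do_lookup/d_alloc path registered above), and now
  sees the same lock class taken from inside reclaim itself (kswapd ->
  shrink_slab -> prune_icache -> ext4_evict_inode).  That is what the
  {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} annotation means: if reclaim
  is entered from the allocation while the lock is held, kswapd can
  block on that lock forever.  A minimal sketch of the pattern, with
  made-up names (demo_i_mutex and both functions are illustrative, not
  the actual ext4/VFS code):

      #include <linux/mutex.h>
      #include <linux/slab.h>

      static DEFINE_MUTEX(demo_i_mutex);

      /* Task context: allocate while holding the lock (cf. d_alloc in
       * do_lookup above).  GFP_KERNEL may enter direct reclaim, so
       * lockdep records RECLAIM_FS-ON-W for demo_i_mutex here. */
      static void *alloc_under_lock(void)
      {
              void *obj;

              mutex_lock(&demo_i_mutex);
              obj = kmalloc(128, GFP_KERNEL);
              mutex_unlock(&demo_i_mutex);
              return obj;
      }

      /* Reclaim context: take the same lock from a shrinker (cf.
       * kswapd -> prune_icache -> ext4_evict_inode), which lockdep
       * marks IN-RECLAIM_FS-W.  If reclaim was entered from
       * alloc_under_lock() with the lock already held, this deadlocks. */
      static void shrink_under_lock(void)
      {
              mutex_lock(&demo_i_mutex);
              /* ... evict the cached object ... */
              mutex_unlock(&demo_i_mutex);
      }

  The usual cure for this class of report is to allocate with GFP_NOFS
  while the lock is held, or to avoid taking the lock in the reclaim
  path; I haven't confirmed which (if either) applies to this ext4
  i_mutex case.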

  
  If you want to see the full log from one of these, check here: 
http://validation.linaro.org/lava-server/dashboard/streams/anonymous/lava-daily/bundles/74516364b89d23b7701af1b326dc6757d5a26b94/706a02b8-e8d6-11e0-9729-68b599be548c/attachments/11507/

To manage notifications about this bug go to:
https://bugs.launchpad.net/linaro-landing-team-ti/+bug/862002/+subscriptions
