Hello

Lockdep is complaining about a lock inconsistency on 2.6.32.  I am
testing a swap driver under memory pressure.  The test application
simply allocates 120% of the available free memory and writes to
random pages.  The rootfs is JFS on a VirtualBox machine.

Jarkko Lavinen

=================================
[ INFO: inconsistent lock state ]
2.6.32 #23
---------------------------------
inconsistent {RECLAIM_FS-ON-W} -> {IN-RECLAIM_FS-W} usage.
sh/1956 [HC0[0]:SC0[0]:HE1:SE1] takes:
 (&jfs_ip->rdwrlock#2){++++?.}, at: [<c1150dad>] jfs_get_block+0x3c/0x239
{RECLAIM_FS-ON-W} state was registered at:
  [<c105408f>] mark_held_locks+0x43/0x5b
  [<c1054123>] lockdep_trace_alloc+0x7c/0x91
  [<c10b3fd4>] kmem_cache_alloc+0x24/0x12a
  [<c11b01e8>] radix_tree_preload+0x27/0x63
  [<c108f628>] add_to_page_cache_locked+0x1b/0xa7
  [<c108f6da>] add_to_page_cache_lru+0x26/0x58
  [<c108f7f3>] read_cache_page_async+0x5c/0xf5
  [<c108f898>] read_cache_page+0xc/0x3f
  [<c11656e6>] __get_metapage+0xbc/0x1f2
  [<c115967d>] diWrite+0x116/0x47e
  [<c116862a>] txCommit+0x1cc/0xd09
  [<c1150cf7>] jfs_truncate_nolock+0xbf/0xf1
  [<c1150d62>] jfs_truncate+0x39/0x48
  [<c1096937>] vmtruncate+0x49/0x54
  [<c10ca376>] inode_setattr+0x51/0x122
  [<c10ca5fb>] notify_change+0x1b4/0x2a6
  [<c10b8bf2>] do_truncate+0x6b/0x84
  [<c10c2467>] may_open+0x195/0x19b
  [<c10c29fa>] do_filp_open+0x3c7/0x758
  [<c10b7fd9>] do_sys_open+0x4a/0xe2
  [<c10b80b3>] sys_open+0x1e/0x26
  [<c1002b15>] syscall_call+0x7/0xb
irq event stamp: 9783
hardirqs last  enabled at (9783): [<c14ac00b>] _write_unlock_irqrestore+0x36/0x3c
hardirqs last disabled at (9782): [<c14ac343>] _write_lock_irqsave+0xf/0x34
softirqs last  enabled at (9594): [<c1036860>] __do_softirq+0x158/0x160
softirqs last disabled at (9581): [<c1036893>] do_softirq+0x2b/0x43

other info that might help us debug this:
1 lock held by sh/1956:
 #0:  (&mm->mmap_sem){++++++}, at: [<c14adf19>] do_page_fault+0x175/0x30f

stack backtrace:
Pid: 1956, comm: sh Not tainted 2.6.32 #23
Call Trace:
 [<c14aa167>] ? printk+0xf/0x11
 [<c1053e4f>] valid_state+0x130/0x143
 [<c1053f53>] mark_lock+0xf1/0x1ea
 [<c1054814>] ? check_usage_forwards+0x0/0x68
 [<c10550fb>] __lock_acquire+0x368/0xbe1
 [<c10559fd>] lock_acquire+0x89/0xa0
 [<c1150dad>] ? jfs_get_block+0x3c/0x239
 [<c1049179>] down_write_nested+0x34/0x52
 [<c1150dad>] ? jfs_get_block+0x3c/0x239
 [<c1150dad>] jfs_get_block+0x3c/0x239
 [<c10d706a>] __block_write_full_page+0x136/0x2cc
 [<c1150d71>] ? jfs_get_block+0x0/0x239
 [<c10d7298>] block_write_full_page_endio+0x98/0xa3
 [<c10d6028>] ? end_buffer_async_write+0x0/0x129
 [<c1150d71>] ? jfs_get_block+0x0/0x239
 [<c10d72b0>] block_write_full_page+0xd/0xf
 [<c10d6028>] ? end_buffer_async_write+0x0/0x129
 [<c1150b4a>] jfs_writepage+0xf/0x11
 [<c1097972>] shrink_page_list+0x387/0x641
 [<c1097f73>] shrink_list+0x347/0x576
 [<c10983ad>] shrink_zone+0x20b/0x2aa
 [<c14aa272>] ? io_schedule_timeout+0x8a/0xad
 [<c1045f13>] ? autoremove_wake_function+0x0/0x33
 [<c1098f5a>] try_to_free_pages+0x1e4/0x2f9
 [<c1096a8c>] ? isolate_pages_global+0x0/0x193
 [<c1093f41>] __alloc_pages_nodemask+0x2fa/0x4cf
 [<c10a34ff>] handle_mm_fault+0x1ea/0x717
 [<c1049254>] ? down_read_trylock+0x39/0x43
 [<c14ae09d>] do_page_fault+0x2f9/0x30f
 [<c14adda4>] ? do_page_fault+0x0/0x30f
 [<c14ac59b>] error_code+0x6b/0x70
 [<c14adda4>] ? do_page_fault+0x0/0x30f

_______________________________________________
Jfs-discussion mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/jfs-discussion
