Re: MAX_LOCKDEP_ENTRIES too low (called from ioc_release_fn)

2012-10-19 Thread Dave Jones
On Fri, Oct 19, 2012 at 02:49:32PM +0200, Peter Zijlstra wrote:

 > Of course, if you do run out of lock classes, the next thing to do is
 > to find the offending lock classes.  First, the following command gives
 > you the number of lock classes currently in use along with the maximum:
 > 
 > grep "lock-classes" /proc/lockdep_stats
 > 
 > This command produces the following output on a modest system:
 > 
 >  lock-classes:  748 [max: 8191]

After the BUG gets hit:

 lock-classes: 1726 [max: 8191]

 > If the number allocated (748 above) increases continually over time,
 > then there is likely a leak.  The following command can be used to
 > identify the leaking lock classes:
 > 
 > grep "BD" /proc/lockdep
 > 
 > Run the command and save the output, then compare against the output from
 > a later run of this command to identify the leakers.  This same output
 > can also help you find situations where runtime lock initialization has
 > been omitted.

I've not had a chance to do this because, after the BUG, lockdep turns itself
off, and I've not rebooted. I'm probably not going to get to this until after
the weekend.

There's just a *lot* of dependencies.

Here's the full output: http://codemonkey.org.uk/junk/lockdep

The top few backward deps:

81c8f218 FD:1 BD: 1201 -.-.-.: pool_lock
82ae1210 FD:2 BD: 1200 -.-.-.: obj_hash[i].lock
820677c1 FD:1 BD: 1131 -.-.-.: &rt_rq->rt_runtime_lock
82066949 FD:3 BD: 1131 -.-.-.: &cpu_base->lock
820221c0 FD:1 BD: 1130 -.-.-.: &sig->cputimer.lock
820677b8 FD:5 BD: 1129 -.-.-.: &rt_b->rt_runtime_lock
82067675 FD:3 BD: 1129 ..-.-.: &rq->lock/1
82067674 FD:8 BD: 1128 -.-.-.: &rq->lock
8298bbd0 FD:1 BD: 1006 -.-.-.: &(&n->list_lock)->rlock
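A listing like the one above can be extracted from a saved /proc/lockdep dump with a one-liner. A minimal sketch: the "BD: <n>" field layout is assumed from the output shown here, and `lockdep_top_bd` is a hypothetical helper name, not a real tool.

```shell
# Hypothetical helper: print the lock classes with the largest backward-
# dependency (BD) counts from a /proc/lockdep dump, i.e. the locks most
# often taken with other locks already held.
lockdep_top_bd() {
    file="${1:-/proc/lockdep}"
    count="${2:-10}"
    [ -r "$file" ] || { echo "lockdep output not readable: $file" >&2; return 1; }
    grep 'BD:' "$file" \
        | sed 's/.*BD: *\([0-9][0-9]*\).*/\1 &/' \
        | sort -rn \
        | head -n "$count" \
        | cut -d' ' -f2-   # drop the prepended sort key, keep the original line
}
```

Run as e.g. `lockdep_top_bd /proc/lockdep 10` on a lockdep-enabled kernel, or point it at a saved copy of the file.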

Dave

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: MAX_LOCKDEP_ENTRIES too low (called from ioc_release_fn)

2012-10-19 Thread Peter Zijlstra
On Fri, 2012-10-19 at 01:21 -0400, Dave Jones wrote:
>  > Not sure why you are CC'ing a call site, rather than the maintainers of
>  > the code. Just looks like lockdep is using too small a static value.
>  > Though it is pretty darn large...
> 
> You're right, it's a huge chunk of memory.
> It looks like I can trigger this from multiple call sites;
> another, different trace is below.
> 
> Not sure why this suddenly got a lot worse in 3.7 

Did we add a static array of structures with locks in somewhere? Doing
that is a great way of blowing up the number of lock classes and the
resulting amount of lock dependency chains.

From Documentation/lockdep-design.txt; it talks about overflowing
MAX_LOCKDEP_KEYS, but I suppose it's a good start for overflowing the
dependency entries too; more classes mean more dependencies, after all.

---
Of course, if you do run out of lock classes, the next thing to do is
to find the offending lock classes.  First, the following command gives
you the number of lock classes currently in use along with the maximum:

grep "lock-classes" /proc/lockdep_stats

This command produces the following output on a modest system:

 lock-classes:  748 [max: 8191]

If the number allocated (748 above) increases continually over time,
then there is likely a leak.  The following command can be used to
identify the leaking lock classes:

grep "BD" /proc/lockdep

Run the command and save the output, then compare against the output from
a later run of this command to identify the leakers.  This same output
can also help you find situations where runtime lock initialization has
been omitted.
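The save-and-compare step above can be scripted. A minimal sketch, assuming two saved copies of /proc/lockdep and treating the last whitespace-separated field of each "BD" line as the class name; `lockdep_new_classes` is a hypothetical helper name.

```shell
# Hypothetical helper: given two saved copies of /proc/lockdep (taken before
# and after running a workload), print the lock-class names that appear only
# in the later snapshot -- candidates for the leak described above.
lockdep_new_classes() {
    before_list=$(mktemp); after_list=$(mktemp)
    # assume the class name is the last whitespace-separated field of the line
    grep 'BD:' "$1" | awk '{print $NF}' | sort -u > "$before_list"
    grep 'BD:' "$2" | awk '{print $NF}' | sort -u > "$after_list"
    comm -13 "$before_list" "$after_list"   # lines unique to the second file
    rm -f "$before_list" "$after_list"
}
```

Usage would be along the lines of `cat /proc/lockdep > before.txt`, run the suspect workload, `cat /proc/lockdep > after.txt`, then `lockdep_new_classes before.txt after.txt`.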



Re: MAX_LOCKDEP_ENTRIES too low (called from ioc_release_fn)

2012-10-18 Thread Dave Jones
On Thu, Oct 18, 2012 at 07:53:08AM +0200, Jens Axboe wrote:
 > On 2012-10-18 03:53, Dave Jones wrote:
 > > Triggered while fuzz testing..
 > > 
 > > 
 > > BUG: MAX_LOCKDEP_ENTRIES too low!
 > > turning off the locking correctness validator.
 > > Pid: 22788, comm: kworker/2:1 Not tainted 3.7.0-rc1+ #34
 > > Call Trace:
 > >  [810decdd] add_lock_to_list.isra.29.constprop.45+0xdd/0xf0
 > >  [810e2871] __lock_acquire+0x1121/0x1ba0
 > >  [810e3a12] lock_acquire+0xa2/0x220
 > >  [8117bad2] ? free_one_page+0x32/0x450
 > >  [816c5a59] ? sub_preempt_count+0x79/0xd0
 > >  [816c0800] _raw_spin_lock+0x40/0x80
 > >  [8117bad2] ? free_one_page+0x32/0x450
 > >  [8117bad2] free_one_page+0x32/0x450
 > >  [8117bf41] ? __free_pages_ok.part.58+0x51/0x110
 > >  [8117bf9c] __free_pages_ok.part.58+0xac/0x110
 > >  [8117cd73] __free_pages+0x73/0x90
 > >  [811cb4f3] __free_slab+0xd3/0x1b0
 > >  [811cb609] discard_slab+0x39/0x50
 > >  [816b77db] __slab_free+0x378/0x3a3
 > >  [81341289] ? ioc_release_fn+0x99/0xe0
 > >  [81341289] ? ioc_release_fn+0x99/0xe0
 > >  [811cd4e2] kmem_cache_free+0x2f2/0x320
 > >  [816c5a59] ? sub_preempt_count+0x79/0xd0
 > >  [81341289] ioc_release_fn+0x99/0xe0
 > >  [81095a37] process_one_work+0x207/0x780
 > >  [810959c7] ? process_one_work+0x197/0x780
 > >  [813411f0] ? get_io_context+0x20/0x20
 > >  [8109638e] worker_thread+0x15e/0x440
 > >  [81096230] ? rescuer_thread+0x240/0x240
 > >  [8109d0cd] kthread+0xed/0x100
 > >  [810de02e] ? put_lock_stats.isra.25+0xe/0x40
 > >  [8109cfe0] ? kthread_create_on_node+0x160/0x160
 > >  [816c9dac] ret_from_fork+0x7c/0xb0
 > >  [8109cfe0] ? kthread_create_on_node+0x160/0x160
 > 
 > Not sure why you are CC'ing a call site, rather than the maintainers of
 > the code. Just looks like lockdep is using too small a static value.
 > Though it is pretty darn large...

You're right, it's a huge chunk of memory.
It looks like I can trigger this from multiple call sites;
another, different trace is below.

Not sure why this suddenly got a lot worse in 3.7

Peter, Ingo ?

Dave

BUG: MAX_LOCKDEP_ENTRIES too low!
turning off the locking correctness validator.
Pid: 22350, comm: trinity-child0 Not tainted 3.7.0-rc1+ #36
Call Trace:
 [810decdd] add_lock_to_list.isra.29.constprop.45+0xdd/0xf0
 [810e2871] __lock_acquire+0x1121/0x1ba0
 [810b6f89] ? local_clock+0x89/0xa0
 [811b84a5] ? __swap_duplicate+0xb5/0x190
 [810dea58] ? trace_hardirqs_off_caller+0x28/0xd0
 [810e3a12] lock_acquire+0xa2/0x220
 [811b663d] ? __add_to_swap_cache+0x6d/0x180
 [816c0f79] ? _raw_spin_lock_irq+0x29/0x90
 [816c0fa6] _raw_spin_lock_irq+0x56/0x90
 [811b663d] ? __add_to_swap_cache+0x6d/0x180
 [811b663d] __add_to_swap_cache+0x6d/0x180
 [811b6c45] read_swap_cache_async+0xb5/0x220
 [811b6e4e] swapin_readahead+0x9e/0xf0
 [811a2c86] handle_pte_fault+0x6d6/0xae0
 [816c5ad9] ? sub_preempt_count+0x79/0xd0
 [8136d37e] ? delay_tsc+0xae/0x120
 [8136d268] ? __const_udelay+0x28/0x30
 [811a4919] handle_mm_fault+0x289/0x350
 [816c538e] __do_page_fault+0x18e/0x530
 [810b6f89] ? local_clock+0x89/0xa0
 [816b77e1] ? __slab_free+0x32e/0x3a3
 [8112d2e9] ? rcu_user_exit+0xc9/0xf0
 [a020] ? 0xa01f
 [a020] ? 0xa01f
 [816c575b] do_page_fault+0x2b/0x50
 [816c1e38] page_fault+0x28/0x30
 [a020] ? 0xa01f
 [813885ac] ? strncpy_from_user+0x6c/0x120
 [a020] ? 0xa01f
 [8120e94f] setxattr+0x6f/0x1d0
 [816c5ad9] ? sub_preempt_count+0x79/0xd0
 [8137e8b5] ? __percpu_counter_add+0x75/0xc0
 [811e92d1] ? __sb_start_write+0x101/0x1d0
 [81209d94] ? mnt_want_write+0x24/0x50
 [81209d94] ? mnt_want_write+0x24/0x50
 [810b0de1] ? get_parent_ip+0x11/0x50
 [816c5ad9] ? sub_preempt_count+0x79/0xd0
 [81209d30] ? __mnt_want_write+0x60/0xa0
 [a020] ? 0xa01f
 [8120ecd5] sys_setxattr+0x95/0xb0
 [816ca108] tracesys+0xe1/0xe6
 [a020] ? 0xa01f



Re: MAX_LOCKDEP_ENTRIES too low (called from ioc_release_fn)

2012-10-17 Thread Jens Axboe
On 2012-10-18 03:53, Dave Jones wrote:
> Triggered while fuzz testing..
> 
> 
> BUG: MAX_LOCKDEP_ENTRIES too low!
> turning off the locking correctness validator.
> Pid: 22788, comm: kworker/2:1 Not tainted 3.7.0-rc1+ #34
> Call Trace:
>  [810decdd] add_lock_to_list.isra.29.constprop.45+0xdd/0xf0
>  [810e2871] __lock_acquire+0x1121/0x1ba0
>  [810e3a12] lock_acquire+0xa2/0x220
>  [8117bad2] ? free_one_page+0x32/0x450
>  [816c5a59] ? sub_preempt_count+0x79/0xd0
>  [816c0800] _raw_spin_lock+0x40/0x80
>  [8117bad2] ? free_one_page+0x32/0x450
>  [8117bad2] free_one_page+0x32/0x450
>  [8117bf41] ? __free_pages_ok.part.58+0x51/0x110
>  [8117bf9c] __free_pages_ok.part.58+0xac/0x110
>  [8117cd73] __free_pages+0x73/0x90
>  [811cb4f3] __free_slab+0xd3/0x1b0
>  [811cb609] discard_slab+0x39/0x50
>  [816b77db] __slab_free+0x378/0x3a3
>  [81341289] ? ioc_release_fn+0x99/0xe0
>  [81341289] ? ioc_release_fn+0x99/0xe0
>  [811cd4e2] kmem_cache_free+0x2f2/0x320
>  [816c5a59] ? sub_preempt_count+0x79/0xd0
>  [81341289] ioc_release_fn+0x99/0xe0
>  [81095a37] process_one_work+0x207/0x780
>  [810959c7] ? process_one_work+0x197/0x780
>  [813411f0] ? get_io_context+0x20/0x20
>  [8109638e] worker_thread+0x15e/0x440
>  [81096230] ? rescuer_thread+0x240/0x240
>  [8109d0cd] kthread+0xed/0x100
>  [810de02e] ? put_lock_stats.isra.25+0xe/0x40
>  [8109cfe0] ? kthread_create_on_node+0x160/0x160
>  [816c9dac] ret_from_fork+0x7c/0xb0
>  [8109cfe0] ? kthread_create_on_node+0x160/0x160

Not sure why you are CC'ing a call site, rather than the maintainers of
the code. Just looks like lockdep is using too small a static value.
Though it is pretty darn large...

-- 
Jens Axboe



MAX_LOCKDEP_ENTRIES too low (called from ioc_release_fn)

2012-10-17 Thread Dave Jones
Triggered while fuzz testing..


BUG: MAX_LOCKDEP_ENTRIES too low!
turning off the locking correctness validator.
Pid: 22788, comm: kworker/2:1 Not tainted 3.7.0-rc1+ #34
Call Trace:
 [810decdd] add_lock_to_list.isra.29.constprop.45+0xdd/0xf0
 [810e2871] __lock_acquire+0x1121/0x1ba0
 [810e3a12] lock_acquire+0xa2/0x220
 [8117bad2] ? free_one_page+0x32/0x450
 [816c5a59] ? sub_preempt_count+0x79/0xd0
 [816c0800] _raw_spin_lock+0x40/0x80
 [8117bad2] ? free_one_page+0x32/0x450
 [8117bad2] free_one_page+0x32/0x450
 [8117bf41] ? __free_pages_ok.part.58+0x51/0x110
 [8117bf9c] __free_pages_ok.part.58+0xac/0x110
 [8117cd73] __free_pages+0x73/0x90
 [811cb4f3] __free_slab+0xd3/0x1b0
 [811cb609] discard_slab+0x39/0x50
 [816b77db] __slab_free+0x378/0x3a3
 [81341289] ? ioc_release_fn+0x99/0xe0
 [81341289] ? ioc_release_fn+0x99/0xe0
 [811cd4e2] kmem_cache_free+0x2f2/0x320
 [816c5a59] ? sub_preempt_count+0x79/0xd0
 [81341289] ioc_release_fn+0x99/0xe0
 [81095a37] process_one_work+0x207/0x780
 [810959c7] ? process_one_work+0x197/0x780
 [813411f0] ? get_io_context+0x20/0x20
 [8109638e] worker_thread+0x15e/0x440
 [81096230] ? rescuer_thread+0x240/0x240
 [8109d0cd] kthread+0xed/0x100
 [810de02e] ? put_lock_stats.isra.25+0xe/0x40
 [8109cfe0] ? kthread_create_on_node+0x160/0x160
 [816c9dac] ret_from_fork+0x7c/0xb0
 [8109cfe0] ? kthread_create_on_node+0x160/0x160


