Zhao Forrest wrote:
These two kernel options are turned on by default in my kernel. Here's a
snippet from my .config:
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
CONFIG_PREEMPT_BKL=y
CONFIG_NUMA=y
CONFIG_K8_NUMA=y
Does this fix it?
--- fs/buffer.c~
On Thu, 2007-04-12 at 00:55 -0700, Andrew Morton wrote:
> On Thu, 12 Apr 2007 09:39:25 +0200 Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> > On Wed, 2007-04-11 at 15:30 -0700, Andrew Morton wrote:
> > > There used to be a cond_resched() in invalidate_mapping_pages() which would
> > > have prevented this, but I rudely removed it to support
> > > /proc/sys/vm/drop_caches (which needs to call invalidate_inode_pages()
> > > under spinlock).
> > >
> > > We could
On 4/11/07, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
On Wed, 2007-04-11 at 17:53 +0800, Zhao Forrest wrote:
> I got some new information:
> Before the soft lockup message appears, we have:
> [EMAIL PROTECTED] home]# cat /proc/slabinfo | grep buffer_head
> buffer_head  10927942  10942560  120  32  1 : tunables  32  16  8 : slabdata  341955  341955
> [...] the bug on the latest kernel, but does any
> expert know if this is a known issue in the old kernel? Or why does
> kmem_cache_free occupy the CPU for more than 10 seconds?
>
> Please let me know if you need any information.
>
> Thanks,
> Forrest
> --
On Wed, 2007-04-11 at 18:10 +0800, Zhao Forrest wrote:
> On 4/11/07, Peter Zijlstra <[EMAIL PROTECTED]> wrote:
> > On Wed, 2007-04-11 at 02:53 -0700, Paul Jackson wrote:
> > > I'm confused - which end of this stack is up?
> > >
> > > cpuset_exit doesn't call do_exit, rather it's the other
> > > way around. But put_files_struct doesn't call do_exit,
> > > rather do_exit calls __exit_files calls put_files_struct.
On Wed, 2007-04-11 at 02:53 -0700, Paul Jackson wrote:
> I'm confused - which end of this stack is up?
>
> cpuset_exit doesn't call do_exit, rather it's the other
> way around. But put_files_struct doesn't call do_exit,
> rather do_exit calls __exit_files calls put_files_struct.

I'm guessing its
> [...] does any
> expert know if this is a known issue in the old kernel? Or why does
> kmem_cache_free occupy the CPU for more than 10 seconds?

Sounds like slab corruption. CONFIG_DEBUG_SLAB should tell you more.
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
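To follow that suggestion, slab debugging is a build-time option, set in the same .config style as the snippet earlier in the thread. CONFIG_DEBUG_SLAB_LEAK is a companion option from roughly the same era, included here as a suggestion -- verify both against your own tree:

```
CONFIG_DEBUG_SLAB=y
CONFIG_DEBUG_SLAB_LEAK=y
```

With these set, freed slab objects are poisoned and red-zoned, so a use-after-free or overrun in the buffer_head cache should oops at the point of corruption rather than showing up later as a mysterious stall.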
[80038e33] put_files_struct+0x6c/0xc3
[8001543d] do_exit+0x2d2/0x8b1
[80047932] cpuset_exit+0x0/0x6c

I'm confused - which end of this stack is up?

cpuset_exit doesn't call do_exit, rather it's the other
way around. But put_files_struct doesn't call do_exit,
rather do_exit calls __exit_files calls put_files_struct.
kmem_cache_free occupies the CPU for more than 10 seconds?

Please let me know if you need any information.

Thanks,
Forrest
--
BUG: soft lockup detected on CPU#1!
Call Trace:
IRQ [800b2c93] softlockup_tick+0xdb/0xed
[800933df] update_process_times+0x42/0x68