On 07/11/2013 01:11 PM, Peter Zijlstra wrote:
> On Thu, Jul 11, 2013 at 12:59:32PM -0400, Steven Rostedt wrote:
>> On Thu, 2013-07-11 at 12:55 -0400, Steven Rostedt wrote:
>>
Other than that, a function tracer environment that is safer to use might be
useful for other people as well.

On Thu, Jul 11, 2013 at 12:59:32PM -0400, Steven Rostedt wrote:
> On Thu, 2013-07-11 at 12:55 -0400, Steven Rostedt wrote:
>
> > >
> > > Other than that, a function tracer environment that is safer to use might
> > > be
> > > useful for other people as well.
> >
> > Not sure how to make the environment safe, as the main purpose of the
> > function trace is to debug those hard to …

On Thu, 2013-07-11 at 12:55 -0400, Steven Rostedt wrote:
> >
> > Other than that, a function tracer environment that is safer to use might be
> > useful for other people as well.
>
> Not sure how to make the environment safe, as the main purpose of the
> function trace is to debug those hard to …

On Thu, 2013-07-11 at 18:42 +0200, Peter Zijlstra wrote:
> I'm afraid this is going to be hard to create and hard to keep correct :/
That's the nature of tracing functions ;-)
>
> Other than that, a function tracer environment that is safer to use might be
> useful for other people as well.
No…

On Wed, Jul 10, 2013 at 08:58:11AM -0400, Steven Rostedt wrote:
> On Wed, 2013-07-10 at 14:42 +0200, Peter Zijlstra wrote:
>
> > but but but preempt_disable_notrace() isn't an rcu_read_lock().. You can
> > only
> > do that for rcu_sched.
>
> Right. And not even rcu_sched is safe, as function tracing can trace
> functions out of rcu scope. That's why I had to add that code to d…

On Wed, Jul 10, 2013 at 10:47:40AM -0400, Sasha Levin wrote:
> On 07/10/2013 05:54 AM, Peter Zijlstra wrote:
> >On Tue, Jul 09, 2013 at 06:35:27PM -0400, Sasha Levin wrote:
> >>While going through the NMI dump, I noticed that it's very incomplete, and
> >>full of:
> >>
> >>[ 2536.500130] INFO: NMI handler (arch_trigger_all_cpu_backtrace_handler)
> >>took too long to run: 697182.008 msecs

On 07/10/2013 05:54 AM, Peter Zijlstra wrote:
On Tue, Jul 09, 2013 at 06:35:27PM -0400, Sasha Levin wrote:
While going through the NMI dump, I noticed that it's very incomplete, and
full of:
[ 2536.500130] INFO: NMI handler (arch_trigger_all_cpu_backtrace_handler)
took too long to run: 697182.008 msecs

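The "697182.008 msecs" figure above is a formatted duration from the kernel's NMI handler timing check. A minimal userspace sketch of how such a string can be produced from a nanosecond delta (an illustrative assumption, not the kernel source; the helper name `format_nmi_delay` is made up):

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper (not kernel code): split a nanosecond duration into
 * whole and fractional milliseconds, matching the shape of the warning
 * "took too long to run: <ms>.<usec> msecs" seen in the dump above. */
static int format_nmi_delay(char *buf, size_t len, int64_t delta_ns)
{
    int64_t whole_ms = delta_ns / 1000000;          /* full milliseconds */
    int64_t frac_ms  = (delta_ns % 1000000) / 1000; /* three decimal digits */

    return snprintf(buf, len, "took too long to run: %lld.%03lld msecs",
                    (long long)whole_ms, (long long)frac_ms);
}
```

Feeding it 697182008000 ns reproduces the "697182.008 msecs" string from the report, which is part of why the value looks suspect: that is over eleven minutes of claimed NMI handler run time.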
On Wed, 2013-07-10 at 14:42 +0200, Peter Zijlstra wrote:
> but but but preempt_disable_notrace() isn't an rcu_read_lock().. You can only
> do that for rcu_sched.
Right. And not even rcu_sched is safe, as function tracing can trace
> functions out of rcu scope. That's why I had to add that code to d…

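Steven's point that the tracer can run callbacks on functions outside RCU's scope (idle and entry paths) is why a defensive callback has to bail out early. A minimal userspace model of that bail-out, under loud assumptions: `rcu_watching` is a plain flag standing in for the kernel's `rcu_is_watching()`, and `traced_callback` is hypothetical, not ftrace API:

```c
#include <stdbool.h>

/* Stand-in for rcu_is_watching(): whether RCU is tracking this CPU.
 * Functions traced from idle or early entry run while this is false. */
static bool rcu_watching = true;

static int callback_ran;

/* Model of a defensive tracer callback: preempt_disable() only doubles as
 * rcu_read_lock() for the rcu_sched flavor, and not at all when RCU is not
 * watching, so the callback must refuse to touch RCU-protected data then. */
static void traced_callback(void)
{
    if (!rcu_watching)
        return;        /* traced a function out of rcu scope: do nothing */
    callback_ran++;    /* safe to use RCU-protected data here */
}
```

With `rcu_watching` false the callback refuses to run, mirroring the early return a real callback needs when it may have been invoked from code RCU is not watching.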
On Wed, Jul 10, 2013 at 08:27:34AM -0400, Steven Rostedt wrote:
> On Wed, 2013-07-10 at 11:52 +0200, Peter Zijlstra wrote:
>
> > Fun.. :-) we trace __local_bh_enable() and hit a ftrace callback between
> > telling lockdep we enabled softirqs and actually doing so.
> >
> > I'm just a tad confused by the trace; it says we go:
> > lock_is_held()
> > check_flags()
> > …

On Wed, 2013-07-10 at 11:52 +0200, Peter Zijlstra wrote:
> Fun.. :-) we trace __local_bh_enable() and hit a ftrace callback between
> telling lockdep we enabled softirqs and actually doing so.
>
> I'm just a tad confused by the trace; it says we go:
> lock_is_held()
> check_flags()
>
> Loo…

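The race Peter describes, a tracer callback firing inside `__local_bh_enable()` after lockdep has been told softirqs are enabled but before they really are, can be modeled in a few lines. This is an illustrative userspace sketch under stated assumptions (the struct, `local_bh_enable_model`, and the simplified `check_flags` are all made up; only the window it demonstrates comes from the thread):

```c
#include <stdbool.h>

/* Two views of softirq state: what is actually true, and what lockdep
 * has been told.  The real check_flags() warns when these disagree. */
struct cpu_state {
    bool softirqs_really_enabled;   /* the actual state */
    bool lockdep_softirqs_enabled;  /* what lockdep believes */
};

/* Simplified consistency check, mirroring the WARNING at
 * kernel/lockdep.c:3552 in the trace above: returns 1 on mismatch. */
static int check_flags(const struct cpu_state *s)
{
    return s->softirqs_really_enabled != s->lockdep_softirqs_enabled;
}

/* Model of __local_bh_enable(): lockdep is informed first, opening a
 * window where a traced callback observes inconsistent state. */
static void local_bh_enable_model(struct cpu_state *s,
                                  void (*trace_cb)(struct cpu_state *))
{
    s->lockdep_softirqs_enabled = true;  /* tell lockdep "enabled" */
    if (trace_cb)
        trace_cb(s);                     /* tracer callback fires here */
    s->softirqs_really_enabled = true;   /* actually enable softirqs */
}
```

A callback passed to `local_bh_enable_model()` sees `check_flags()` report a mismatch, while callers before and after the function see consistent state.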
On Tue, Jul 09, 2013 at 06:35:27PM -0400, Sasha Levin wrote:
> While going through the NMI dump, I noticed that it's very incomplete, and
> full of:
>
> [ 2536.500130] INFO: NMI handler (arch_trigger_all_cpu_backtrace_handler)
> took too long to run: 697182.008 msecs
Now I would actually expect …

On Tue, Jul 09, 2013 at 07:09:51PM -0400, Sasha Levin wrote:
> perf huh? I also have this spew I'm currently working on, it seems related:
>
> [ 1443.380407] [ cut here ]
> [ 1443.381713] WARNING: CPU: 20 PID: 49263 at kernel/lockdep.c:3552
> check_flags+0x16b/0x220()
> …

On 07/09/2013 07:03 PM, Dave Jones wrote:
On Tue, Jul 09, 2013 at 06:49:29PM -0400, Sasha Levin wrote:
> On 07/09/2013 06:47 PM, Dave Jones wrote:
> > > [ 2536.500130] INFO: NMI handler (arch_trigger_all_cpu_backtrace_handler)
> > > took too long to run: 697182.008 msecs
> >
> > I've been reporting these (and other traces) …

On Tue, Jul 09, 2013 at 06:49:29PM -0400, Sasha Levin wrote:
> On 07/09/2013 06:47 PM, Dave Jones wrote:
> > > [ 2536.500130] INFO: NMI handler (arch_trigger_all_cpu_backtrace_handler)
> > > took too long to run: 697182.008 msecs
> >
> > I've been reporting these (and other traces) …

On 07/09/2013 06:47 PM, Dave Jones wrote:
On Tue, Jul 09, 2013 at 06:35:27PM -0400, Sasha Levin wrote:
> Hi all,
>
> While fuzzing with trinity inside a KVM tools guest running latest -next, I've
> stumbled on the following spew:
>
> [ 2536.440007] BUG: soft lockup - CPU#0 stuck for 23s! [trinity-child86:12368]

On Tue, Jul 09, 2013 at 06:35:27PM -0400, Sasha Levin wrote:
> Hi all,
>
> While fuzzing with trinity inside a KVM tools guest running latest -next, I've
> stumbled on the following spew:
>
> [ 2536.440007] BUG: soft lockup - CPU#0 stuck for 23s!
> [trinity-child86:12368]
> [ 2536.44…

Hi all,
While fuzzing with trinity inside a KVM tools guest running latest -next, I've
stumbled on the following spew:
[ 2536.440007] BUG: soft lockup - CPU#0 stuck for 23s! [trinity-child86:12368]
[ 2536.440007] Modules linked in:
[ 2536.440007] irq event stamp: 1572135
[ 2536.440007] hardirqs …