Peter,

If you give me your ack, I can pull this through my tree. Otherwise, it
can go through tip. I just kicked off my test suite to test it overnight.
-- Steve

On Wed, 10 Dec 2014 17:44:28 -0500
Steven Rostedt <[email protected]> wrote:

> When recording the state of a task for the sched_switch tracepoint a check of
> task_preempt_count() is performed to see if PREEMPT_ACTIVE is set. This is
> because, technically, a task being preempted is really in the TASK_RUNNING
> state, and that is what should be recorded when tracing a sched_switch,
> even if the task put itself into another state (it hasn't scheduled out
> in that state yet).
>
> But with the change to use per_cpu preempt counts, the
> task_thread_info(p)->preempt_count is no longer used, and instead
> task_preempt_count(p) is used.
>
> The problem is that this does not use the current preempt count but a
> stale one from a previous sched_switch. The task_preempt_count(p) uses
> saved_preempt_count and not preempt_count(). But for tracing
> sched_switch, if p is current, we really want preempt_count().
>
> I hit this bug when I was tracing sleep and the call from do_nanosleep()
> scheduled out in the "RUNNING" state.
>
>            sleep-4290  [000] 537272.259992: sched_switch:         sleep:4290 [120] R ==> swapper/0:0 [120]
>            sleep-4290  [000] 537272.260015: kernel_stack:         <stack trace>
> => __schedule (ffffffff8150864a)
> => schedule (ffffffff815089f8)
> => do_nanosleep (ffffffff8150b76c)
> => hrtimer_nanosleep (ffffffff8108d66b)
> => SyS_nanosleep (ffffffff8108d750)
> => return_to_handler (ffffffff8150e8e5)
> => tracesys_phase2 (ffffffff8150c844)
>
> After a bit of hair pulling, I found that the state was really
> TASK_INTERRUPTIBLE, but the saved_preempt_count had an old PREEMPT_ACTIVE
> set and caused the sched_switch tracepoint to show it as RUNNING.
>
> Cc: [email protected] # 3.13+
> Fixes: 01028747559a "sched: Create more preempt_count accessors"
> Signed-off-by: Steven Rostedt <[email protected]>
> ---
>  include/trace/events/sched.h | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
> index 0a68d5ae584e..13fbadcc172b 100644
> --- a/include/trace/events/sched.h
> +++ b/include/trace/events/sched.h
> @@ -97,10 +97,14 @@ static inline long __trace_sched_switch_state(struct task_struct *p)
>  	long state = p->state;
>  
>  #ifdef CONFIG_PREEMPT
> +	unsigned long pc;
> +
> +	pc = (p == current) ? preempt_count() : task_preempt_count(p);
> +
>  	/*
>  	 * For all intents and purposes a preempted task is a running task.
>  	 */
> -	if (task_preempt_count(p) & PREEMPT_ACTIVE)
> +	if (pc & PREEMPT_ACTIVE)
>  		state = TASK_RUNNING | TASK_STATE_MAX;
>  #endif
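
For reference, here is roughly what __trace_sched_switch_state() ends up
looking like with the patch applied. The trailing return is outside the
quoted hunk, so that part is reconstructed from context rather than taken
from the diff:

static inline long __trace_sched_switch_state(struct task_struct *p)
{
	long state = p->state;

#ifdef CONFIG_PREEMPT
	unsigned long pc;

	/*
	 * current's preempt count is the live per-cpu value; for any
	 * other task only the saved copy from its last switch is available.
	 */
	pc = (p == current) ? preempt_count() : task_preempt_count(p);

	/*
	 * For all intents and purposes a preempted task is a running task.
	 */
	if (pc & PREEMPT_ACTIVE)
		state = TASK_RUNNING | TASK_STATE_MAX;
#endif

	return state;	/* not in the quoted hunk; assumed from context */
}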

