On Thu, 11 Dec 2014 07:38:11 +0100
Ingo Molnar <[email protected]> wrote:

> > Cc: [email protected] # 3.13+
> > Fixes: 01028747559a ("sched: Create more preempt_count accessors")
> > Signed-off-by: Steven Rostedt <[email protected]>
> > ---
> >  include/trace/events/sched.h | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> > 
> > diff --git a/include/trace/events/sched.h b/include/trace/events/sched.h
> > index 0a68d5ae584e..13fbadcc172b 100644
> > --- a/include/trace/events/sched.h
> > +++ b/include/trace/events/sched.h
> > @@ -97,10 +97,14 @@ static inline long __trace_sched_switch_state(struct task_struct *p)
> >     long state = p->state;
> >  
> >  #ifdef CONFIG_PREEMPT
> > +   unsigned long pc;
> > +
> > +   pc = (p == current) ? preempt_count() : task_preempt_count(p);
> > +
> >     /*
> >      * For all intents and purposes a preempted task is a running task.
> >      */
> > -   if (task_preempt_count(p) & PREEMPT_ACTIVE)
> > +   if (pc & PREEMPT_ACTIVE)
> >             state = TASK_RUNNING | TASK_STATE_MAX;
> 
> I really don't like the overhead around here.

Hi Ingo!

What overhead are you worried about? Note that this code is in the
sched_switch tracepoint and does not affect the scheduler itself as
long as the tracepoint is not enabled.
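
Roughly, the call site expands to something like this (a simplified
sketch of the jump-label pattern; the real DECLARE_TRACE macros have
more layers, and __do_trace_sched_switch here just stands in for the
__DO_TRACE expansion):

	extern struct tracepoint __tracepoint_sched_switch;

	static inline void trace_sched_switch(struct task_struct *prev,
					      struct task_struct *next)
	{
		/* Patched to a NOP while the tracepoint is disabled */
		if (static_key_false(&__tracepoint_sched_switch.key))
			__do_trace_sched_switch(prev, next);
	}

With the key off, schedule() pays a single NOP and
__trace_sched_switch_state() (including its preempt_count read)
never runs.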

I'm also thinking that as long as "prev" is always guaranteed to be
"current", we could remove the check and just use preempt_count()
directly. But I'm worried that we can't guarantee that.
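
If that invariant did hold, the hunk above would shrink to something
like this (an untested sketch, only correct under the prev == current
assumption):

	#ifdef CONFIG_PREEMPT
		/*
		 * Only safe if this tracepoint can never see a task
		 * other than current; otherwise the
		 * task_preempt_count(p) fallback is still needed.
		 */
		if (preempt_count() & PREEMPT_ACTIVE)
			state = TASK_RUNNING | TASK_STATE_MAX;
	#endif

That drops the branch but not the preempt_count read itself.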

What other ideas do you have? Wrong data is worse than the overhead of
the above code. If Thomas taught me anything, it's that!

-- Steve