On Fri, 7 Apr 2017 07:36:19 -0700 "Paul E. McKenney" <[email protected]> wrote:
> On Fri, Apr 07, 2017 at 10:01:08AM -0400, Steven Rostedt wrote:
> > From: "Steven Rostedt (VMware)" <[email protected]>
> > 
> > The trace_active per-cpu variable can be updated with the
> > this_cpu_*() functions, as it is only updated on the CPU that the
> > variable is on.
> > 
> > Signed-off-by: Steven Rostedt (VMware) <[email protected]>
> > ---
> >  kernel/trace/trace_stack.c | 23 +++++++----------------
> >  1 file changed, 7 insertions(+), 16 deletions(-)
> > 
> > diff --git a/kernel/trace/trace_stack.c b/kernel/trace/trace_stack.c
> > index 5fb1f2c87e6b..05ad2b86461e 100644
> > --- a/kernel/trace/trace_stack.c
> > +++ b/kernel/trace/trace_stack.c
> > @@ -207,13 +207,12 @@ stack_trace_call(unsigned long ip, unsigned long parent_ip,
> >  		 struct ftrace_ops *op, struct pt_regs *pt_regs)
> >  {
> >  	unsigned long stack;
> > -	int cpu;
> >  
> >  	preempt_disable_notrace();
> >  
> > -	cpu = raw_smp_processor_id();
> >  	/* no atomic needed, we only modify this variable by this cpu */
> > -	if (per_cpu(trace_active, cpu)++ != 0)
> > +	this_cpu_inc(trace_active);
> 
> For whatever it is worth...
> 
> I was about to complain that this_cpu_inc() only disables preemption,
> not interrupts, but then I realized that any correct interrupt handler
> would have to restore the per-CPU variable to its original value.

Yep, that's the reason for the comment about "no atomic needed". This
is a "stack modification": any interruption in the flow will reset the
changes back to the way they were before returning to what it
interrupted.

> Presumably you have to sum up all the per-CPU trace_active counts,
> given that there is no guarantee that a process-level dec will happen
> on the same CPU that did the inc.

That's why we disable preemption. We guarantee that a process-level
dec *will* happen on the same CPU that did the inc. It's also the
reason for the preemption-disabled check in the stack_tracer_disable()
code.

-- Steve
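
For readers following the thread, here is a minimal sketch of the
inc/dec pattern being discussed, assuming the trace_active counter
from the patch above; the function name and the recursion check are
illustrative, not the exact trace_stack.c code:

	/* Sketch only: trace_active is the per-CPU counter from the patch. */
	static DEFINE_PER_CPU(int, trace_active);

	static void stack_trace_call_sketch(void)
	{
		/* Pin to this CPU so the matching dec runs on the same CPU. */
		preempt_disable_notrace();

		/*
		 * No atomic needed: only this CPU modifies its copy of
		 * trace_active. An interrupt arriving between the inc and
		 * the dec may run this same code, but it decrements whatever
		 * it incremented before returning, so the interrupted
		 * context always sees the value it left behind -- the
		 * "stack modification" Steve describes.
		 */
		this_cpu_inc(trace_active);
		if (this_cpu_read(trace_active) != 1)
			goto out;	/* recursion on this CPU */

		/* ... measure and record the stack usage here ... */
	out:
		this_cpu_dec(trace_active);
		preempt_enable_notrace();
	}

Because each CPU only ever balances its own counter, a plain
non-atomic per-CPU increment is enough; no lock or atomic RMW is
needed anywhere in this path.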

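And a hedged sketch of the preemption-disabled check Steve mentions in
stack_tracer_disable(); the function name comes from the thread, but
the body (including the CONFIG_DEBUG_PREEMPT guard) is an
approximation of the idea, not necessarily the exact upstream code:

	/*
	 * Approximation only: callers must already have preemption (or
	 * interrupts) disabled, otherwise the matching this_cpu_dec() in
	 * a later stack_tracer_enable() could land on a different CPU
	 * than the inc.
	 */
	static inline void stack_tracer_disable(void)
	{
		/* Warn if neither preemption nor interrupts are disabled. */
		if (IS_ENABLED(CONFIG_DEBUG_PREEMPT))
			WARN_ON_ONCE(!preempt_count() && !irqs_disabled());
		this_cpu_inc(trace_active);
	}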
