> -----Original Message-----
> From: xinhui [mailto:xinhui....@linux.vnet.ibm.com]
> Sent: Monday, June 20, 2016 4:29 PM
> To: Byungchul Park; pet...@infradead.org; mi...@kernel.org
> Cc: linux-kernel@vger.kernel.org; npig...@suse.de; wal...@google.com;
> a...@suse.de; t...@inhelltoy.tec.linutronix.de
> Subject: Re: [RFC 12/12] x86/dumpstack: Optimize save_stack_trace
> 
> 
> On 2016年06月20日 12:55, Byungchul Park wrote:
> > Currently, the x86 implementation of save_stack_trace() walks the whole
> > stack region word by word, regardless of trace->max_entries. However,
> > it is unnecessary to keep walking once the caller's requirement is
> > already fulfilled, i.e. once trace->nr_entries >= trace->max_entries.
> >
> > For example, the CONFIG_LOCKDEP_CROSSRELEASE implementation calls
> > save_stack_trace() with max_entries = 5 frequently. I measured its
> > overhead by printing the difference of sched_clock() values on my
> > QEMU x86 machine.
> >
> > The latency was improved over 70% when trace->max_entries = 5.
> >
> [snip]
> 
> > +static int save_stack_end(void *data)
> > +{
> > +   struct stack_trace *trace = data;
> > +   return trace->nr_entries >= trace->max_entries;
> > +}
> > +
> >   static const struct stacktrace_ops save_stack_ops = {
> >     .stack          = save_stack_stack,
> >     .address        = save_stack_address,
> Then why not check the return value of ->address()? A return value of -1
> indicates there is no room to store any more pointers.

Hello,

Indeed. That also looks good to me, even though the termination condition
then has to be propagated between the callback functions. I will switch to
that approach if it turns out to be better.

Thank you.
Byungchul

> 
> >     .walk_stack     = print_context_stack,
> > +   .end_walk       = save_stack_end,
> >   };
> >
> >   static const struct stacktrace_ops save_stack_ops_nosched = {
> >
