On 2025/9/23 19:10 Jiri Olsa <[email protected]> wrote:
> On Tue, Sep 23, 2025 at 05:20:01PM +0800, Menglong Dong wrote:
> > For now, fgraph is used for the fprobe, even if we only need to trace
> > the entry. However, the performance of ftrace is better than fgraph,
> > and we can use ftrace_ops for this case.
> > 
> > The performance of kprobe-multi then increases from 54M/s to 69M/s.
> > Before this commit:
> > 
> >   $ ./benchs/run_bench_trigger.sh kprobe-multi
> >   kprobe-multi   :   54.663 ± 0.493M/s
> > 
> > After this commit:
> > 
> >   $ ./benchs/run_bench_trigger.sh kprobe-multi
> >   kprobe-multi   :   69.447 ± 0.143M/s
> > 
> > Mitigations were disabled during the benchmark testing above.
> > 
> > Signed-off-by: Menglong Dong <[email protected]>
> > ---
> >  kernel/trace/fprobe.c | 88 +++++++++++++++++++++++++++++++++++++++----
> >  1 file changed, 81 insertions(+), 7 deletions(-)
> > 
> > diff --git a/kernel/trace/fprobe.c b/kernel/trace/fprobe.c
> > index 1785fba367c9..de4ae075548d 100644
> > --- a/kernel/trace/fprobe.c
> > +++ b/kernel/trace/fprobe.c
> > @@ -292,7 +292,7 @@ static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops
> >                             if (node->addr != func)
> >                                     continue;
> >                             fp = READ_ONCE(node->fp);
> > -                           if (fp && !fprobe_disabled(fp))
> > +                           if (fp && !fprobe_disabled(fp) && fp->exit_handler)
> >                                     fp->nmissed++;
> >                     }
> >                     return 0;
> > @@ -312,11 +312,11 @@ static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops
> >             if (node->addr != func)
> >                     continue;
> >             fp = READ_ONCE(node->fp);
> > -           if (!fp || fprobe_disabled(fp))
> > +           if (unlikely(!fp || fprobe_disabled(fp) || !fp->exit_handler))
> >                     continue;
> >  
> >             data_size = fp->entry_data_size;
> > -           if (data_size && fp->exit_handler)
> > +           if (data_size)
> >                     data = fgraph_data + used + FPROBE_HEADER_SIZE_IN_LONG;
> >             else
> >                     data = NULL;
> > @@ -327,7 +327,7 @@ static int fprobe_fgraph_entry(struct ftrace_graph_ent *trace, struct fgraph_ops
> >                     ret = __fprobe_handler(func, ret_ip, fp, fregs, data);
> >  
> >             /* If entry_handler returns !0, nmissed is not counted but skips exit_handler. */
> > -           if (!ret && fp->exit_handler) {
> > +           if (!ret) {
> >                     int size_words = SIZE_IN_LONG(data_size);
> >  
> >                     if (write_fprobe_header(&fgraph_data[used], fp, size_words))
> > @@ -384,6 +384,70 @@ static struct fgraph_ops fprobe_graph_ops = {
> >  };
> >  static int fprobe_graph_active;
> >  
> > +/* ftrace_ops backend (entry-only) */
> > +static void fprobe_ftrace_entry(unsigned long ip, unsigned long parent_ip,
> > +                               struct ftrace_ops *ops, struct ftrace_regs *fregs)
> > +{
> > +   struct fprobe_hlist_node *node;
> > +   struct rhlist_head *head, *pos;
> > +   struct fprobe *fp;
> > +
> > +   guard(rcu)();
> > +   head = rhltable_lookup(&fprobe_ip_table, &ip, fprobe_rht_params);
> 
> hi,
> so this is based on your previous patch, right?
>   fprobe: use rhltable for fprobe_ip_table
> 
> would be better to mention that..  is there latest version of that somewhere?

Yeah, this is based on that version. That patch is applied to:
https://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace.git/log/?h=probes%2Ffor-next

And I did the testing on that branch.

Thanks!
Menglong Dong

> 
> thanks,
> jirka
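For readers following the quoted hunk at @@ -384: it only shows the lookup side of the new backend. An entry-only ftrace_ops callback is typically paired with registration through ftrace_set_filter_ip() and register_ftrace_function(). A minimal sketch, with hypothetical function names (only fprobe_ftrace_entry's signature comes from the patch; the rest is assumed and not the actual patch code):

```
/* Hypothetical sketch of an entry-only backend built on ftrace_ops.
 * Not the patch code: only the callback signature mirrors the quoted
 * hunk; names and registration flow here are illustrative.
 */
#include <linux/ftrace.h>

static void entry_only_callback(unsigned long ip, unsigned long parent_ip,
				struct ftrace_ops *ops,
				struct ftrace_regs *fregs)
{
	/* Entry-only: no return hook is installed, so none of the
	 * fgraph shadow-stack bookkeeping is paid on this path.
	 */
}

static struct ftrace_ops entry_only_ops = {
	.func	= entry_only_callback,
	/* Request ftrace_regs in the callback. */
	.flags	= FTRACE_OPS_FL_SAVE_ARGS,
};

/* Attach the callback to a single function address. */
static int attach_entry_only(unsigned long ip)
{
	int ret;

	ret = ftrace_set_filter_ip(&entry_only_ops, ip, 0, 0);
	if (ret)
		return ret;
	return register_ftrace_function(&entry_only_ops);
}
```

The performance win in the numbers above comes from skipping the fgraph return-hook machinery entirely when no exit_handler is set.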