On Mon, Feb 16, 2026 at 11:09 PM Steven Rostedt <[email protected]> wrote:
>
> On Sat, 14 Feb 2026 18:36:24 +0800
> Donglin Peng <[email protected]> wrote:
>
> > To simplify the implementation, I propose adding a new trigger type
> > (e.g., "funcgraph"). In ftrace_graph_ignore_func, we could look up the
> > corresponding trace_fprobe and trace_event_file based on trace->func,
> > then decide whether to trace the function using a helper like the
> > following:
> >
> > static bool ftrace_graph_filter(struct trace_fprobe *tf, struct ftrace_regs *fregs,
> >                                 struct trace_event_file *trace_file)
> > {
> >         struct fentry_trace_entry_head *entry;
> >         struct trace_event_buffer fbuffer;
> >         struct event_trigger_data *data;
> >         int dsize;
> >
> >         dsize = __get_data_size(&tf->tp, fregs, NULL);
> >         entry = trace_event_buffer_reserve(&fbuffer, trace_file,
> >                                            sizeof(*entry) + tf->tp.size + dsize);
> >         if (!entry)
> >                 return false;
> >
> >         entry = ring_buffer_event_data(fbuffer.event);
> >         store_trace_args(&entry[1], &tf->tp, fregs, NULL, sizeof(*entry), dsize);
> >
> >         list_for_each_entry_rcu(data, &trace_file->triggers, list) {
> >                 if (data->cmd_ops->trigger_type == TRIGGER_TYPE_FUNCGRAPH) {
> >                         struct event_filter *filter = rcu_dereference_sched(data->filter);
> >
> >                         if (filter && filter_match_preds(filter, entry))
> >                                 return true; /* allow tracing */
> >                 }
> >         }
> >         return false; /* skip tracing */
> > }
> >
> > Does this approach make sense? Any suggestions or concerns?
>
> My biggest concern is with performance. You want to run this against all
> functions being traced?
>
> How is this different than just using fprobes?
Thanks. My initial thought was to extend the functionality of
set_graph_function for more granular tracing control. For instance,
consider the function:

vfs_write(struct file *file, const char __user *buf, size_t count, loff_t *pos)
1. Enable tracing only when count exceeds a threshold (e.g., 4KB):

   echo 'vfs_write if count >= 4096' > set_graph_function

2. Filter by PID and CPU:

   echo 'vfs_write if pid == 3456 && cpu == 4' > set_graph_function
To implement this, we would need data structures like trace_probe,
trace_event_call, and trace_event_file. Reusing trace_fprobe might
reduce redundant implementation; however, it's unclear whether that
approach is feasible without introducing unforeseen complexity.
To minimize performance overhead, an intermediate data structure
could be introduced when writing to set_graph_function. For example:

struct funcgraph_data {
        unsigned long entry_ip;         /* entry address of vfs_write */
        struct trace_event_file *file;  /* associated trace event file */
};
This structure would be added to a hash table (funcgraph_hash). At the
entry point of each instrumented function, the execution flow would
proceed as follows:
ftrace_graph_func
→ function_graph_enter_regs
→ trace_graph_entry_args
→ graph_entry
→ ftrace_graph_ignore_func
→ ftrace_graph_filter(struct ftrace_graph_ent *trace)
Within ftrace_graph_filter, the hash table funcgraph_hash is first
queried using trace->func. If a matching funcgraph_data is found, the
trigger filter logic is then applied using funcgraph_data->file; if no
entry is found, the function is skipped without touching the event
filtering path at all.
>
> -- Steve