On Thu, 3 Oct 2024 11:16:34 -0400 Mathieu Desnoyers <[email protected]> wrote:
> In preparation for allowing system call enter/exit instrumentation to
> handle page faults, make sure that bpf can handle this change by
> explicitly disabling preemption within the bpf system call tracepoint
> probes to respect the current expectations within bpf tracing code.
> 
> This change does not yet allow bpf to take page faults per se within its
> probe, but allows its existing probes to adapt to the upcoming change.

I guess the BPF folks should state if this is needed or not? Do the BPF
hooks into the tracepoints expect preemption to be disabled when called?

-- Steve

> Signed-off-by: Mathieu Desnoyers <[email protected]>
> Acked-by: Andrii Nakryiko <[email protected]>
> Tested-by: Andrii Nakryiko <[email protected]> # BPF parts
> Cc: Michael Jeanson <[email protected]>
> Cc: Steven Rostedt <[email protected]>
> Cc: Masami Hiramatsu <[email protected]>
> Cc: Peter Zijlstra <[email protected]>
> Cc: Alexei Starovoitov <[email protected]>
> Cc: Yonghong Song <[email protected]>
> Cc: Paul E. McKenney <[email protected]>
> Cc: Ingo Molnar <[email protected]>
> Cc: Arnaldo Carvalho de Melo <[email protected]>
> Cc: Mark Rutland <[email protected]>
> Cc: Alexander Shishkin <[email protected]>
> Cc: Namhyung Kim <[email protected]>
> Cc: Andrii Nakryiko <[email protected]>
> Cc: [email protected]
> Cc: Joel Fernandes <[email protected]>
> ---
>  include/trace/bpf_probe.h | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/include/trace/bpf_probe.h b/include/trace/bpf_probe.h
> index c85bbce5aaa5..211b98d45fc6 100644
> --- a/include/trace/bpf_probe.h
> +++ b/include/trace/bpf_probe.h
> @@ -53,8 +53,17 @@ __bpf_trace_##call(void *__data, proto)				\
>  #define DECLARE_EVENT_CLASS(call, proto, args, tstruct, assign, print)	\
>  	__BPF_DECLARE_TRACE(call, PARAMS(proto), PARAMS(args))
> 
> +#define __BPF_DECLARE_TRACE_SYSCALL(call, proto, args)			\
> +static notrace void							\
> +__bpf_trace_##call(void *__data, proto)					\
> +{									\
> +	guard(preempt_notrace)();					\
> +	CONCATENATE(bpf_trace_run, COUNT_ARGS(args))(__data, CAST_TO_U64(args)); \
> +}
> +
>  #undef DECLARE_EVENT_SYSCALL_CLASS
> -#define DECLARE_EVENT_SYSCALL_CLASS DECLARE_EVENT_CLASS
> +#define DECLARE_EVENT_SYSCALL_CLASS(call, proto, args, tstruct, assign, print) \
> +	__BPF_DECLARE_TRACE_SYSCALL(call, PARAMS(proto), PARAMS(args))
> 
>  /*
>   * This part is compiled out, it is only here as a build time check
