The bpf program should run with migration disabled. kprobe_multi_link_prog_run
is called all the way from the graph tracer, which disables preemption in
function_graph_enter_regs, so, as Jiri and Yonghong suggested, there is no
need to use migrate_disable. As a result, some overhead may be reduced.

Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
Acked-by: Yonghong Song <yonghong.s...@linux.dev>
Acked-by: Jiri Olsa <jo...@kernel.org>
Signed-off-by: Tao Chen <chen.dyl...@linux.dev>
---
 kernel/trace/bpf_trace.c | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

Change list:
 v1 -> v2:
  - s/called the way/called all the way/ (Jiri)
 v1: https://lore.kernel.org/bpf/f7acfd22-bcf3-4dff-9a87-7c1e6f84c...@linux.dev
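
For context, a minimal sketch (not part of the patch) of the invariant this
change relies on: once the graph tracer has disabled preemption, the task
cannot migrate, so per-cpu data and bpf_prog_run() are safe without an extra
migrate_disable()/migrate_enable() pair. The function below is a hypothetical
stand-in for the real entry path, not code from the tree:

#include <linux/preempt.h>
#include <linux/kernel.h>

/* hypothetical stand-in for the graph tracer entry path */
static void graph_tracer_entry_sketch(void)
{
	preempt_disable();	/* as function_graph_enter_regs() does */

	/*
	 * Preemption disabled implies the task cannot be migrated, so
	 * this debug assertion (CONFIG_DEBUG_ATOMIC_SLEEP) would not
	 * fire here, and per-cpu data such as bpf_prog_active stays
	 * stable for the duration.
	 */
	cant_migrate();

	preempt_enable();
}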

diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
index 3ae52978cae..5701791e3cb 100644
--- a/kernel/trace/bpf_trace.c
+++ b/kernel/trace/bpf_trace.c
@@ -2734,14 +2734,19 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
                goto out;
        }
 
-       migrate_disable();
+       /*
+        * The bpf program should run with migration disabled.
+        * kprobe_multi_link_prog_run is called all the way from the graph
+        * tracer, which disables preemption in function_graph_enter_regs,
+        * so there is no need to use migrate_disable. Accessing the above
+        * percpu data bpf_prog_active is also safe for the same reason.
+        */
        rcu_read_lock();
        regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
        old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
        err = bpf_prog_run(link->link.prog, regs);
        bpf_reset_run_ctx(old_run_ctx);
        rcu_read_unlock();
-       migrate_enable();
 
  out:
        __this_cpu_dec(bpf_prog_active);
-- 
2.48.1

