On Mon, Aug 04, 2025 at 08:16:15PM +0800, Tao Chen wrote:
> The link_create syscall is not protected by bpf_disable_instrumentation,
> so the percpu data bpf_prog_active should only be accessed with migration
> disabled when a kprobe_multi program is attached.
> 
> Fixes: 0dcac2725406 ("bpf: Add multi kprobe link")
> Signed-off-by: Tao Chen <chen.dyl...@linux.dev>
> ---
>  kernel/trace/bpf_trace.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/trace/bpf_trace.c b/kernel/trace/bpf_trace.c
> index 3ae52978cae..f6762552e8e 100644
> --- a/kernel/trace/bpf_trace.c
> +++ b/kernel/trace/bpf_trace.c
> @@ -2728,23 +2728,23 @@ kprobe_multi_link_prog_run(struct bpf_kprobe_multi_link *link,
>       struct pt_regs *regs;
>       int err;
>  
> +     migrate_disable();
>       if (unlikely(__this_cpu_inc_return(bpf_prog_active) != 1)) {

this is called all the way from the graph tracer, which disables preemption
in function_graph_enter_regs, so I think we can safely use __this_cpu_inc_return


>               bpf_prog_inc_misses_counter(link->link.prog);
>               err = 1;
>               goto out;
>       }
>  
> -     migrate_disable();

hmm, but now I'm not sure why we disable migration in here at all then

jirka

>       rcu_read_lock();
>       regs = ftrace_partial_regs(fregs, bpf_kprobe_multi_pt_regs_ptr());
>       old_run_ctx = bpf_set_run_ctx(&run_ctx.session_ctx.run_ctx);
>       err = bpf_prog_run(link->link.prog, regs);
>       bpf_reset_run_ctx(old_run_ctx);
>       rcu_read_unlock();
> -     migrate_enable();
>  
>   out:
>       __this_cpu_dec(bpf_prog_active);
> +     migrate_enable();
>       return err;
>  }
>  
> -- 
> 2.48.1
> 
