On Fri, 23 Jan 2026 09:36:41 +0800
[email protected] wrote:

> From: sunliming <[email protected]>
> 
> Return early from enable_boot_kprobe_events() when dyn_event_list is
> empty, reducing the function's execution time. Otherwise this function
> may wait tens of milliseconds for the event_mutex lock on certain
> machines, which is unnecessary when dyn_event_list is empty.

Have you measured this?

I'm curious as to what may be holding the event_mutex at this time.
This is called by the initcall functions, which are currently all
serialized, so any contention would have to come from a worker or
kthread, not another initcall callback.

I'm not against the patch, I just want to understand more about it.

> 
> Signed-off-by: sunliming <[email protected]>
> ---
>  kernel/trace/trace_kprobe.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/kernel/trace/trace_kprobe.c b/kernel/trace/trace_kprobe.c
> index 9953506370a5..d89a403c99d4 100644
> --- a/kernel/trace/trace_kprobe.c
> +++ b/kernel/trace/trace_kprobe.c
> @@ -1982,6 +1982,9 @@ static __init void enable_boot_kprobe_events(void)
>       struct trace_kprobe *tk;
>       struct dyn_event *pos;
>  
> +     if (list_empty(&dyn_event_list))
> +             return;

The above should definitely have a comment, or we should wrap it in
another macro, because it assumes that for_each_trace_kprobe() uses
dyn_event_list. If that ever changes in the future, this check will
silently break.

Perhaps we should add a:

        if (trace_kprobe_list_empty())
                return;

?
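
Something along these lines for the helper itself, just a sketch
(assuming for_each_trace_kprobe() keeps walking dyn_event_list, and
with the name only being a suggestion):

        /* Keep the emptiness check tied to the list the iterator walks. */
        static bool trace_kprobe_list_empty(void)
        {
                return list_empty(&dyn_event_list);
        }

That way the early return documents itself and cannot silently diverge
from the iterator.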

-- Steve



> +
>       guard(mutex)(&event_mutex);
>       for_each_trace_kprobe(tk, pos) {
>               list_for_each_entry(file, &tr->events, list)

