On Tue, 30 Sep 2025 12:10:52 +0200
Peter Zijlstra <[email protected]> wrote:

> On Tue, Sep 30, 2025 at 05:58:26PM +0900, Masami Hiramatsu wrote:
> > On Mon, 29 Sep 2025 12:12:59 +0200
> > Peter Zijlstra <[email protected]> wrote:
> > 
> > > On Mon, Sep 29, 2025 at 11:38:08AM +0206, John Ogness wrote:
> > > 
> > > > >> Problem:
> > > > >> 1. CPU0 executes (1) assigning tp_event->perf_events = list
> > > > 
> > > > smp_wmb()
> > > > 
> > > > >> 2. CPU0 executes (2) enabling kprobe functionality via class->reg()
> > > > >> 3. CPU1 triggers and reaches kprobe_dispatcher
> > > > >> 4. CPU1 checks TP_FLAG_PROFILE - condition passes (step 2 completed)
> > > > 
> > > > smp_rmb()
> > > > 
> > > > >> 5. CPU1 calls kprobe_perf_func() and crashes at (3) because
> > > > >>    call->perf_events is still NULL
> > > > >> 
> > > > >> The issue: The assignment in step 1 may not be visible to CPU1 due to
> > > > >> missing memory barriers before step 2 sets the TP_FLAG_PROFILE flag.
> > > > 
> > > > A better explanation of the issue would be: CPU1 sees that kprobe
> > > > functionality is enabled but does not see that perf_events has been
> > > > assigned.
> > > > 
> > > > Add pairing read and write memory barriers to guarantee that if CPU1
> > > > sees that kprobe functionality is enabled, it must also see that
> > > > perf_events has been assigned.
> > > > 
> > > > Note that this could also be done more efficiently using a store_release
> > > > when setting the flag (in step 2) and a load_acquire when loading the
> > > > flag (in step 4).
> > > 
> > > RELEASE+ACQUIRE is the better pattern for these cases.
> > > 
> > > And I'll argue the barrier should be in 2 not 1, since it is 2 that sets
> > > the flag checked in 4.  Any store before that flag might be affected,
> > > not just the ->perf_events list.
> > 
> > My understanding was that RELEASE+ACQUIRE only ensures memory ordering on
> > the `same` CPU, so do we still need smp_rmb() and smp_wmb()? e.g.
> 
> Eh, no, that's wrong. RELEASE and ACQUIRE are SMP barriers.

OK, thanks for confirmation!


-- 
Masami Hiramatsu (Google) <[email protected]>
