On Mon, May 20, 2024 at 04:25:33PM -0700, Paul E. McKenney wrote:
> Good points!  How about the following?
> 
>               // Note that cpu_curr_snapshot() picks up the target
>               // CPU's current task while its runqueue is locked with
>               // an smp_mb__after_spinlock().  This ensures that either
>               // the grace-period kthread will see that task's read-side
>               // critical section or the task will see the updater's pre-GP
>               // accesses.  The trailing smp_mb() in cpu_curr_snapshot()

>               // does not currently play a role other than to simplify
>               // that function's ordering semantics.  If these simplified
>               // ordering semantics continue to be redundant, that smp_mb()
>               // might be removed.
> 
> Keeping in mind that the commit's log fully lays out the troublesome
> scenario.
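The guarantee the proposed comment describes can be illustrated with a small userspace sketch. This is not kernel code: the mutex plus an explicit seq_cst fence stand in for the runqueue lock and smp_mb__after_spinlock(), and the `rq`/`task` structures here are hypothetical simplifications, not the kernel's.

```c
#include <assert.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel's task_struct and runqueue. */
struct task {
	int id;
};

struct rq {
	pthread_mutex_t lock;
	struct task *curr;	/* task currently running on this CPU */
};

/*
 * Userspace sketch of cpu_curr_snapshot(): pick up rq->curr while the
 * runqueue lock is held.  The first fence models the full barrier of
 * smp_mb__after_spinlock(), which ensures that either the grace-period
 * kthread sees the task's read-side critical section or the task sees
 * the updater's pre-GP accesses.  The second fence models the trailing
 * smp_mb() that, per the comment above, currently only simplifies the
 * function's ordering semantics.
 */
static struct task *cpu_curr_snapshot_sketch(struct rq *rq)
{
	struct task *t;

	pthread_mutex_lock(&rq->lock);
	atomic_thread_fence(memory_order_seq_cst);
	t = rq->curr;
	atomic_thread_fence(memory_order_seq_cst);
	pthread_mutex_unlock(&rq->lock);
	return t;
}
```

A single-threaded caller simply gets back whatever task the runqueue currently points at; the fences only matter when a concurrent updater's accesses must be ordered against the snapshot.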

Yep, looks very good!

Thanks!

> 
>                                                       Thanx, Paul
> 
