On Mon, Jun 23, 2025 at 11:13:03AM -0700, Paul E. McKenney wrote:
> On Mon, Jun 23, 2025 at 12:49:41PM +0200, Sebastian Andrzej Siewior wrote:
> > On 2025-06-20 04:23:49 [-0700], Paul E. McKenney wrote:
> > > > I hope not because it is not any different from
> > > >
> > > >      CPU 2                          CPU 3
> > > >      =====                          =====
> > > >      NMI
> > > >        rcu_read_lock();
> > > >                                     synchronize_rcu();
> > > >                                     // need all CPUs to report a QS.
> > > >        rcu_read_unlock();
> > > >        // no rcu_read_unlock_special() due to in_nmi().
> > > >
> > > > If the NMI happens while the CPU is in userland (say a perf event) then
> > > > the NMI returns directly to userland.
> > > > After the tracing event completes (in this case) the CPU should run into
> > > > another RCU section on its way out via context switch or the tick
> > > > interrupt.
> > > > I assume the tick interrupt is what makes the NMI case work.
> > >
> > > Are you promising that interrupts will always be disabled across
> > > the whole rcu_read_lock_notrace() read-side critical section?  If so,
> > > could we please have a lockdep_assert_irqs_disabled() call to check that?
> >
> > No, that should stay preemptible because bpf can attach itself to
> > tracepoints and this is the root cause of the exercise.  Now if you say
> > it has to be run with disabled interrupts to match the NMI case then it
> > makes sense (since NMIs have interrupts off) but I do not understand why
> > it matters here (since the CPU returns to userland without passing
> > through the kernel).
>
> Given your patch, if you don't disable interrupts in a preemptible kernel
> across your rcu_read_lock_notrace()/rcu_read_unlock_notrace() pair, then a
> concurrent expedited grace period might send its IPI in the middle of that
> critical section.  That IPI handler would set up state so that the next
> rcu_preempt_deferred_qs_irqrestore() would report the quiescent state.
> Except that without the call to rcu_read_unlock_special(), there might
> not be any subsequent call to rcu_preempt_deferred_qs_irqrestore().
>
> This is even more painful if this is a CONFIG_PREEMPT_RT kernel.
> Then if that critical section was preempted and then priority-boosted,
> the unboosting also won't happen until the next call to that same
> rcu_preempt_deferred_qs_irqrestore() function, which again might not
> happen.  Or might be significantly delayed.
>
> Or am I missing some trick that fixes all of this?
>
> > I'm not sure how much can be done here due to the notrace part.  Assuming
> > rcu_read_unlock_special() is not doable, would forcing a context switch
> > (via setting need-resched and irq_work, as in the IRQ-off case) do the
> > trick?
> > Looking through rcu_preempt_deferred_qs_irqrestore(), it does not look to
> > be "usable from the scheduler (with rq lock held)" due to RCU-boosting
> > or the wake of expedited_wq (which is one of the requirements).
>
> But if rq_lock is held, then interrupts are disabled, which will
> cause the unboosting to be deferred.
>
> Or are the various deferral mechanisms also unusable in this context?
OK, looking back through this thread, it appears that you need both an
rcu_read_lock_notrace() and an rcu_read_unlock_notrace() that are covered
by Mathieu's list of requirements [1]:

| - NMI-safe

This is covered by the existing rcu_read_lock() and rcu_read_unlock().

| - notrace

I am guessing that by "notrace", you mean the "notrace" CPP macro
attribute defined in include/linux/compiler_types.h.  This has no fewer
than four different definitions, so I will need some help understanding
what the restrictions are.

| - usable from the scheduler (with rq lock held)

This is covered by the existing rcu_read_lock() and rcu_read_unlock().

| - usable to trace the RCU implementation

This one I don't understand.  Can I put tracepoints on
rcu_read_lock_notrace() and rcu_read_unlock_notrace(), or can't I?
I was assuming that tracepoints would be forbidden.  Until I reached
this requirement, that is.

One possible path forward is to ensure that rcu_read_unlock_special()
calls only functions that are compatible with the notrace/trace
requirements.  The ones that look like they might need some help are
raise_softirq_irqoff() and irq_work_queue_on().

Note that although rcu_preempt_deferred_qs_irqrestore() would also need
help, it is easy to avoid its being invoked, for example, by disabling
interrupts across the call to rcu_read_unlock_notrace().  Or by making
rcu_read_unlock_notrace() do the disabling.

However, I could easily be missing something, especially given my being
confused by the juxtaposition of "notrace" and "usable to trace the RCU
implementation".  These appear to me to be contradicting each other.
Help?

							Thanx, Paul

[1] https://lore.kernel.org/all/20250613152218.1924093-1-bige...@linutronix.de/