Add a check for t->rcu_blocked_node being NULL after removing the task from the per-CPU blocked list. If it is NULL, the task was only on the per-CPU list and not on the rcu_node's blkd_tasks list, so we can skip the rnp lock acquisition and quiescent state reporting entirely.
Currently this path is not taken since tasks are always added to both lists. This prepares for a future optimization where tasks will initially be added only to the per-CPU list and promoted to the rnp list only when a grace period needs to wait for them.

Signed-off-by: Joel Fernandes <[email protected]>
---
 kernel/rcu/tree_plugin.h | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 5d2bde19131a..ee26e87c72f8 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -549,6 +549,22 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
 		list_del_init(&t->rcu_rdp_entry);
 		t->rcu_blocked_cpu = -1;
 		raw_spin_unlock(&blocked_rdp->blkd_lock);
+		/*
+		 * TODO: This should just be "WARN_ON_ONCE(rnp); return;" since after
+		 * the last patches, the task can only be on either the rdp or the rnp
+		 * list, not both. Since blocked_cpu != -1, it is clearly not on the rnp
+		 * list, so we activate the benefits of this patchset by removing the
+		 * task from the rdp blocked list and returning early.
+		 */
+		if (!rnp) {
+			/*
+			 * Task was only on per-CPU list, not on rnp list.
+			 * This can happen in future when tasks are added
+			 * only to rdp initially and promoted to rnp later.
+			 */
+			local_irq_restore(flags);
+			return;
+		}
 	}
 #endif
 	raw_spin_lock_rcu_node(rnp);	/* irqs already disabled. */
--
2.34.1

