If a task is on the rcu_node_entry list, its blocked flag should
already be set: rcu_note_context_switch() sets the flag before adding
the task to any blocked list. The current code silently re-sets it,
which could mask bugs.

Add a WARN_ON_ONCE to detect this invariant violation. If this
warning ever fires, it indicates a bug where a task was added to
a blocked list without properly setting the blocked flag first.

Signed-off-by: Joel Fernandes <[email protected]>
---
 kernel/rcu/tree_plugin.h | 1 +
 1 file changed, 1 insertion(+)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index dbe2d02be824..73ba5f4a968d 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -846,6 +846,7 @@ void exit_rcu(void)
        if (unlikely(!list_empty(&current->rcu_node_entry))) {
                rcu_preempt_depth_set(1);
                barrier();
+               WARN_ON_ONCE(!t->rcu_read_unlock_special.b.blocked);
                WRITE_ONCE(t->rcu_read_unlock_special.b.blocked, true);
        } else if (unlikely(rcu_preempt_depth())) {
                rcu_preempt_depth_set(1);
-- 
2.34.1