Unlike non-offloaded CPUs, offloaded CPUs don't have their callbacks
migrated when they go offline. It's up to their CB/GP kthreads to
handle what remains.

Therefore we can't afford to de-offload an offline CPU that still has
pending callbacks to process, or those callbacks would be ignored.

NOTE: The long term solution will be to wait for all pending callbacks
to be processed before completing a CPU down operation.
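
As an interim illustration, here is a minimal sketch of how a caller
might react to the new -EBUSY return. It assumes rcu_nocb_cpu_deoffload()
as the entry point; the try_deoffload() wrapper is purely hypothetical
and not part of this patch:

	/*
	 * Hypothetical wrapper (illustration only): attempt to de-offload
	 * a CPU and report when the request is refused because the CPU is
	 * offline with callbacks still pending, which now returns -EBUSY.
	 */
	static int try_deoffload(int cpu)
	{
		int ret = rcu_nocb_cpu_deoffload(cpu);

		if (ret == -EBUSY)
			pr_info("CPU %d offline with pending callbacks, de-offload refused\n",
				cpu);
		return ret;
	}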

Suggested-by: Paul E. McKenney <[email protected]>
Signed-off-by: Frederic Weisbecker <[email protected]>
Cc: Paul E. McKenney <[email protected]>
Cc: Josh Triplett <[email protected]>
Cc: Steven Rostedt <[email protected]>
Cc: Mathieu Desnoyers <[email protected]>
Cc: Lai Jiangshan <[email protected]>
Cc: Joel Fernandes <[email protected]>
Cc: Neeraj Upadhyay <[email protected]>
---
 kernel/rcu/tree_plugin.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
index 09caf319a4a9..33e9d53d2181 100644
--- a/kernel/rcu/tree_plugin.h
+++ b/kernel/rcu/tree_plugin.h
@@ -2228,6 +2228,14 @@ static int __rcu_nocb_rdp_deoffload(struct rcu_data *rdp)
        printk("De-offloading %d\n", rdp->cpu);
 
        rcu_nocb_lock_irqsave(rdp, flags);
+       /*
+        * If there is still pending offloaded work, the offline
+        * CPU won't be of much help handling it.
+        */
+       if (cpu_is_offline(rdp->cpu) && !rcu_segcblist_empty(&rdp->cblist)) {
+               rcu_nocb_unlock_irqrestore(rdp, flags);
+               return -EBUSY;
+       }
        raw_spin_lock_rcu_node(rnp);
        rcu_segcblist_offload(cblist, false);
        raw_spin_unlock_rcu_node(rnp);
-- 
2.25.1
