From: Boqun Feng <boqun.f...@gmail.com>

commit fcc63543650150629c8a873cbef3578770acecd9 upstream.

Currently, the parallelized initialization of expedited grace periods
queues each rcu_node structure's work on the CPU indicated by that
structure's ->grplo field.  This works fine unless that CPU is offline.
This commit therefore queues the work on the lowest-numbered online CPU
corresponding to the rcu_node structure, or on WORK_CPU_UNBOUND if none
of that structure's CPUs are online.

Note that this patch checks the cpu_online_mask (via cpumask_next())
instead of the usual approach of checking bits in the rcu_node
structure's ->qsmaskinitnext field.  This is safe because preemption is
disabled across both the online check and the call to queue_work_on().

Signed-off-by: Boqun Feng <boqun.f...@gmail.com>
[ paulmck: Disable preemption to close offline race window. ]
Signed-off-by: Paul E. McKenney <paul...@linux.vnet.ibm.com>
[ paulmck: Apply Peter Zijlstra feedback on CPU selection. ]
Tested-by: Aneesh Kumar K.V <aneesh.ku...@linux.vnet.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortma...@windriver.com>
---
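For reference (not part of the patch), a minimal sketch of the CPU
selection performed by the hunk below; the helper name
pick_exp_worker_cpu() is invented for illustration, while grplo/grphi,
cpu_online_mask and WORK_CPU_UNBOUND are the kernel symbols actually
used:

	/*
	 * Return the lowest-numbered online CPU in [grplo, grphi], or
	 * WORK_CPU_UNBOUND if every CPU in that range is offline.  The
	 * caller must disable preemption so the chosen CPU cannot go
	 * offline between this check and queue_work_on().
	 */
	static int pick_exp_worker_cpu(int grplo, int grphi)
	{
		int cpu = cpumask_next(grplo - 1, cpu_online_mask);

		return cpu > grphi ? WORK_CPU_UNBOUND : cpu;
	}
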
 kernel/rcu/tree_exp.h | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/kernel/rcu/tree_exp.h b/kernel/rcu/tree_exp.h
index d40708e8c5d6..01b6ddeb4f05 100644
--- a/kernel/rcu/tree_exp.h
+++ b/kernel/rcu/tree_exp.h
@@ -472,6 +472,7 @@ static void sync_rcu_exp_select_node_cpus(struct work_struct *wp)
 static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
                                     smp_call_func_t func)
 {
+       int cpu;
        struct rcu_node *rnp;
 
        trace_rcu_exp_grace_period(rsp->name, rcu_exp_gp_seq_endval(rsp), TPS("reset"));
@@ -492,7 +493,13 @@ static void sync_rcu_exp_select_cpus(struct rcu_state *rsp,
                        continue;
                }
                INIT_WORK(&rnp->rew.rew_work, sync_rcu_exp_select_node_cpus);
-               queue_work_on(rnp->grplo, rcu_par_gp_wq, &rnp->rew.rew_work);
+               preempt_disable();
+               cpu = cpumask_next(rnp->grplo - 1, cpu_online_mask);
+               /* If all offline, queue the work on an unbound CPU. */
+               if (unlikely(cpu > rnp->grphi))
+                       cpu = WORK_CPU_UNBOUND;
+               queue_work_on(cpu, rcu_par_gp_wq, &rnp->rew.rew_work);
+               preempt_enable();
                rnp->exp_need_flush = true;
        }
 
-- 
2.15.0
