On Thu, Jul 23, 2020 at 09:14:11AM -0700, Paul E. McKenney wrote:
> > --- a/kernel/rcu/tree.c
> > +++ b/kernel/rcu/tree.c
> > @@ -1287,8 +1287,6 @@ static int rcu_implicit_dynticks_qs(stru
> >             if (IS_ENABLED(CONFIG_IRQ_WORK) &&
> >                 !rdp->rcu_iw_pending && rdp->rcu_iw_gp_seq != rnp->gp_seq &&
> >                 (rnp->ffmask & rdp->grpmask)) {
> > -                   init_irq_work(&rdp->rcu_iw, rcu_iw_handler);
> 
> We are actually better off with the IRQ_WORK_INIT_HARD() here rather
> than unconditionally at boot.

Ah, but there isn't an init_irq_work() variant that does the HARD thing.

> The reason for this is that we get here only if a single grace
> period extends beyond 10.5 seconds (mainline) or beyond 30 seconds
> (many distribution kernels).  Which almost never happens.  And yes,
> rcutree_prepare_cpu() is also invoked as each CPU comes online,
> not that this is all that common outside of rcutorture and boot time.  ;-)

What do you mean 'also'? AFAICT this is CPU-bringup-only code (initial
and hotplug). We really don't care about overhead there. It's the slowest
possible path we have in the kernel.

> > -                   atomic_set(&rdp->rcu_iw.flags, IRQ_WORK_HARD_IRQ);
> >                     rdp->rcu_iw_pending = true;
> >                     rdp->rcu_iw_gp_seq = rnp->gp_seq;
> >                     irq_work_queue_on(&rdp->rcu_iw, rdp->cpu);