On Fri, Sep 04, 2020 at 06:41:42AM -0700, Paul E. McKenney wrote:
> On Fri, Sep 04, 2020 at 12:05:34PM +0800, Boqun Feng wrote:
> > Hi Paul,
> > 
> > On Mon, Aug 31, 2020 at 11:11:12AM -0700, paul...@kernel.org wrote:
> > > From: "Paul E. McKenney" <paul...@kernel.org>
> > > 
> > > The ->rcu_read_unlock_special.b.need_qs field in the task_struct
> > > structure indicates that the RCU core needs a quiescent state from the
> > > corresponding task.  The __rcu_read_unlock() function checks this (via
> > > an eventual call to rcu_preempt_deferred_qs_irqrestore()), and if set
> > > reports a quiescent state immediately upon exit from the outermost RCU
> > > read-side critical section.
> > > 
> > > Currently, this flag is only set when the scheduling-clock interrupt
> > > decides that the current RCU grace period is too old, as in about
> > > one full second too old.  But if the kernel has been built with
> > > CONFIG_RCU_STRICT_GRACE_PERIOD=y, we clearly do not want to wait that
> > > long.  This commit therefore sets the .need_qs field immediately at the
> > > start of the RCU read-side critical section from within __rcu_read_lock()
> > > in order to unconditionally enlist help from __rcu_read_unlock().
> > > 
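For reference, the .b.need_qs flag described above is one byte of the
rcu_read_unlock_special union in the task_struct structure, which also
exposes all of the flag bytes as a single word (.s) so the unlock path can
test them with one load.  Roughly (the exact field set varies by kernel
version; see include/linux/sched.h):

	union rcu_special {
		struct {
			u8 blocked;	/* Reader preempted within its critical section. */
			u8 need_qs;	/* RCU core needs a quiescent state from this task. */
			u8 exp_hint;	/* Expedited grace-period hint. */
			u8 need_mb;	/* Readers need smp_mb(). */
		} b;			/* Individual flag bytes. */
		u32 s;			/* All flag bytes viewed as one word. */
	};
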
> > 
> > So why not make rcu_preempt_deferred_qs_irqrestore() always treat
> > need_qs as true if CONFIG_RCU_STRICT_GRACE_PERIOD=y? IOW:
> > 
> > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > index 982fc5be5269..2a9f31545453 100644
> > --- a/kernel/rcu/tree_plugin.h
> > +++ b/kernel/rcu/tree_plugin.h
> > @@ -449,6 +449,8 @@ rcu_preempt_deferred_qs_irqrestore(struct task_struct *t, unsigned long flags)
> >      * t->rcu_read_unlock_special cannot change.
> >      */
> >     special = t->rcu_read_unlock_special;
> > +   if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) && rcu_state.gp_kthread)
> > +           special.b.need_qs = true;
> >     rdp = this_cpu_ptr(&rcu_data);
> >     if (!special.s && !rdp->exp_deferred_qs) {
> >             local_irq_restore(flags);
> > 
> > , and in this way, you can save one store for each rcu_read_lock() ;-)
> 
> Because unless I am missing something subtle, if the .need_qs
> flag is not set, execution is not guaranteed to reach
> rcu_preempt_deferred_qs_irqrestore().
> 
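Paul's point is that rcu_preempt_deferred_qs_irqrestore() is reached only
via rcu_read_unlock_special(), and the outermost __rcu_read_unlock() calls
that function only when some flag byte is already set.  A simplified
paraphrase of the unlock path (not the exact code in tree_plugin.h):

	void __rcu_read_unlock(void)
	{
		struct task_struct *t = current;

		if (--t->rcu_read_lock_nesting == 0) {	/* Outermost unlock. */
			barrier();	/* Critical section before exit code. */
			/* No flag set ==> rcu_read_unlock_special() is never called. */
			if (unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
				rcu_read_unlock_special(t);
		}
	}

So unless .need_qs (or some other flag) was set beforehand, the deferred
quiescent-state path never runs, hence the store in __rcu_read_lock().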

Fair enough. Although we could also add an IS_ENABLED(...) check to make
the outermost rcu_read_unlock() call rcu_read_unlock_special()
unconditionally, I think that's too much.
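
For illustration only, that unconditional variant would amount to changing
the check in the sketch above to something like this (hypothetical and
untested):

		if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) ||
		    unlikely(READ_ONCE(t->rcu_read_unlock_special.s)))
			rcu_read_unlock_special(t);

trading the extra store in __rcu_read_lock() for an unconditional
slow-path call on every outermost rcu_read_unlock().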

Regards,
Boqun

>                                                       Thanx, Paul
> 
> > Regards,
> > Boqun
> > 
> > > But note the additional check for rcu_state.gp_kthread, which prevents
> > > attempts to awaken RCU's grace-period kthread during early boot before
> > > there is a scheduler.  Leaving off this check results in early boot
> > > hangs, so early that there is no console output.  Thus, this additional check
> > > fails until such time as RCU's grace-period kthread has been created,
> > > avoiding these empty-console hangs.
> > > 
> > > Reported-by: Jann Horn <ja...@google.com>
> > > Signed-off-by: Paul E. McKenney <paul...@kernel.org>
> > > ---
> > >  kernel/rcu/tree_plugin.h | 2 ++
> > >  1 file changed, 2 insertions(+)
> > > 
> > > diff --git a/kernel/rcu/tree_plugin.h b/kernel/rcu/tree_plugin.h
> > > index 44cf77d..668bbd2 100644
> > > --- a/kernel/rcu/tree_plugin.h
> > > +++ b/kernel/rcu/tree_plugin.h
> > > @@ -376,6 +376,8 @@ void __rcu_read_lock(void)
> > >   rcu_preempt_read_enter();
> > >   if (IS_ENABLED(CONFIG_PROVE_LOCKING))
> > >           WARN_ON_ONCE(rcu_preempt_depth() > RCU_NEST_PMAX);
> > > + if (IS_ENABLED(CONFIG_RCU_STRICT_GRACE_PERIOD) && rcu_state.gp_kthread)
> > > +         WRITE_ONCE(current->rcu_read_unlock_special.b.need_qs, true);
> > >   barrier();  /* critical section after entry code. */
> > >  }
> > >  EXPORT_SYMBOL_GPL(__rcu_read_lock);
> > > -- 
> > > 2.9.5
> > > 
