Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-26 Thread Scott Wood
On Mon, 2019-08-26 at 17:59 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-08-23 14:46:39 [-0500], Scott Wood wrote:
> > > > Before consolidation, RT mapped rcu_read_lock_bh_held() to
> > > > rcu_read_lock_bh() and called rcu_read_lock() from
> > > > rcu_read_lock_bh().  This
> > > > somehow got lost when rebasing on top of 5.0.
> > > 
> > > so now rcu_read_lock_bh_held() is untouched and in_softirq() reports
> > > 1.
> > > So the problem is that we never hold RCU but report 1 like we do?
> > 
> > Yes.
> 
> I understand the part where "rcu_read_lock() becomes part of
> local_bh_disable()". But why do you modify rcu_read_lock_bh_held() and
> rcu_read_lock_bh()? Couldn't they remain as-is?

Yes, it looks like they can.  I recall having problems with
rcu_read_lock_bh_held() in an earlier version which is what prompted the
change, but looking back I don't see what the problem was.

-Scott




Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-26 Thread Sebastian Andrzej Siewior
On 2019-08-23 14:46:39 [-0500], Scott Wood wrote:
> > > Before consolidation, RT mapped rcu_read_lock_bh_held() to
> > > rcu_read_lock_bh() and called rcu_read_lock() from
> > > rcu_read_lock_bh().  This
> > > somehow got lost when rebasing on top of 5.0.
> > 
> > so now rcu_read_lock_bh_held() is untouched and in_softirq() reports 1.
> > So the problem is that we never hold RCU but report 1 like we do?
> 
> Yes.

I understand the part where "rcu_read_lock() becomes part of
local_bh_disable()". But why do you modify rcu_read_lock_bh_held() and
rcu_read_lock_bh()? Couldn't they remain as-is?

> -Scott

Sebastian


Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-24 Thread Paul E. McKenney
On Thu, Aug 22, 2019 at 10:23:23PM -0500, Scott Wood wrote:
> On Thu, 2019-08-22 at 09:39 -0400, Joel Fernandes wrote:
> > On Wed, Aug 21, 2019 at 04:33:58PM -0700, Paul E. McKenney wrote:
> > > On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:

[ . . . ]

> > > >  include/linux/rcupdate.h |  4 ++++
> > > >  kernel/rcu/update.c      |  4 ++++
> > > >  kernel/softirq.c         | 12 +++++++++---
> > > >  3 files changed, 17 insertions(+), 3 deletions(-)
> > > > 
> > > > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > > > index 388ace315f32..d6e357378732 100644
> > > > --- a/include/linux/rcupdate.h
> > > > +++ b/include/linux/rcupdate.h
> > > > @@ -615,10 +615,12 @@ static inline void rcu_read_unlock(void)
> > > >  static inline void rcu_read_lock_bh(void)
> > > >  {
> > > > local_bh_disable();
> > > > +#ifndef CONFIG_PREEMPT_RT_FULL
> > > > __acquire(RCU_BH);
> > > > rcu_lock_acquire(&rcu_bh_lock_map);
> > > > RCU_LOCKDEP_WARN(!rcu_is_watching(),
> > > >  "rcu_read_lock_bh() used illegally while 
> > > > idle");
> > > > +#endif
> > > 
> > > Any chance of this using "if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))"?
> > > We should be OK providing a do-nothing __maybe_unused rcu_bh_lock_map
> > > for lockdep-enabled -rt kernels, right?
> > 
> > Since this function is small, I prefer if -rt defines their own
> > rcu_read_lock_bh() which just does the local_bh_disable(). That would be
> > way cleaner IMO. IIRC, -rt does similar things for spinlocks, but it has
> > been some time since I looked at the -rt patchset.
> 
> I'll do it whichever way you all decide, though I'm not sure I agree about
> it being cleaner (especially while RT is still out-of-tree and a change to
> the non-RT version that fails to trigger a merge conflict is a concern).
> 
> What about moving everything but the local_bh_disable into a separate
> function called from rcu_read_lock_bh, and making that a no-op on RT?

That makes a lot of sense to me!

Thanx, Paul


Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-23 Thread Scott Wood
On Fri, 2019-08-23 at 18:17 +0200, Sebastian Andrzej Siewior wrote:
> On 2019-08-22 22:23:23 [-0500], Scott Wood wrote:
> > On Thu, 2019-08-22 at 09:39 -0400, Joel Fernandes wrote:
> > > On Wed, Aug 21, 2019 at 04:33:58PM -0700, Paul E. McKenney wrote:
> > > > On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > > > > Signed-off-by: Scott Wood 
> > > > > ---
> > > > > Another question is whether non-raw spinlocks are intended to create an
> > > > > RCU read-side critical section due to implicit preempt disable.
> > > > 
> > > > Hmmm...  Did non-raw spinlocks act like rcu_read_lock_sched()
> > > > and rcu_read_unlock_sched() pairs in -rt prior to the RCU flavor
> > > > consolidation?  If not, I don't see why they should do so after that
> > > > consolidation in -rt.
> > > 
> > > Maybe I am missing something, but I didn't see the connection between
> > > consolidation and this patch. AFAICS, this patch is so that
> > > rcu_read_lock_bh_held() works at all on -rt. Did I badly miss
> > > something?
> > 
> > Before consolidation, RT mapped rcu_read_lock_bh_held() to
> > rcu_read_lock_bh() and called rcu_read_lock() from
> > rcu_read_lock_bh().  This
> > somehow got lost when rebasing on top of 5.0.
> 
> so now rcu_read_lock_bh_held() is untouched and in_softirq() reports 1.
> So the problem is that we never hold RCU but report 1 like we do?

Yes.

-Scott




Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-23 Thread Sebastian Andrzej Siewior
On 2019-08-22 22:23:23 [-0500], Scott Wood wrote:
> On Thu, 2019-08-22 at 09:39 -0400, Joel Fernandes wrote:
> > On Wed, Aug 21, 2019 at 04:33:58PM -0700, Paul E. McKenney wrote:
> > > On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > > > Signed-off-by: Scott Wood 
> > > > ---
> > > > Another question is whether non-raw spinlocks are intended to create an
> > > > RCU read-side critical section due to implicit preempt disable.
> > > 
> > > Hmmm...  Did non-raw spinlocks act like rcu_read_lock_sched()
> > > and rcu_read_unlock_sched() pairs in -rt prior to the RCU flavor
> > > consolidation?  If not, I don't see why they should do so after that
> > > consolidation in -rt.
> > 
> > Maybe I am missing something, but I didn't see the connection between
> > consolidation and this patch. AFAICS, this patch is so that
> > rcu_read_lock_bh_held() works at all on -rt. Did I badly miss something?
> 
> Before consolidation, RT mapped rcu_read_lock_bh_held() to
> rcu_read_lock_bh() and called rcu_read_lock() from rcu_read_lock_bh().  This
> somehow got lost when rebasing on top of 5.0.

so now rcu_read_lock_bh_held() is untouched and in_softirq() reports 1.
So the problem is that we never hold RCU but report 1 like we do?

Sebastian


Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-22 Thread Scott Wood
On Thu, 2019-08-22 at 09:39 -0400, Joel Fernandes wrote:
> On Wed, Aug 21, 2019 at 04:33:58PM -0700, Paul E. McKenney wrote:
> > On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > > Signed-off-by: Scott Wood 
> > > ---
> > > Another question is whether non-raw spinlocks are intended to create an
> > > RCU read-side critical section due to implicit preempt disable.
> > 
> > Hmmm...  Did non-raw spinlocks act like rcu_read_lock_sched()
> > and rcu_read_unlock_sched() pairs in -rt prior to the RCU flavor
> > consolidation?  If not, I don't see why they should do so after that
> > consolidation in -rt.
> 
> Maybe I am missing something, but I didn't see the connection between
> consolidation and this patch. AFAICS, this patch is so that
> rcu_read_lock_bh_held() works at all on -rt. Did I badly miss something?

Before consolidation, RT mapped rcu_read_lock_bh_held() to
rcu_read_lock_bh() and called rcu_read_lock() from rcu_read_lock_bh().  This
somehow got lost when rebasing on top of 5.0.
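
In other words, the pre-consolidation -rt arrangement was roughly this
(a sketch of the idea, not the exact historical patch):

#ifdef CONFIG_PREEMPT_RT_FULL
static inline void rcu_read_lock_bh(void)
{
	local_bh_disable();
	rcu_read_lock();
}
#endif

with rcu_read_lock_bh_held() deferring to the normal rcu_read_lock()
state instead of checking in_softirq().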

> > >  include/linux/rcupdate.h |  4 ++++
> > >  kernel/rcu/update.c      |  4 ++++
> > >  kernel/softirq.c         | 12 +++++++++---
> > >  3 files changed, 17 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > > index 388ace315f32..d6e357378732 100644
> > > --- a/include/linux/rcupdate.h
> > > +++ b/include/linux/rcupdate.h
> > > @@ -615,10 +615,12 @@ static inline void rcu_read_unlock(void)
> > >  static inline void rcu_read_lock_bh(void)
> > >  {
> > >   local_bh_disable();
> > > +#ifndef CONFIG_PREEMPT_RT_FULL
> > >   __acquire(RCU_BH);
> > >   rcu_lock_acquire(&rcu_bh_lock_map);
> > >   RCU_LOCKDEP_WARN(!rcu_is_watching(),
> > >"rcu_read_lock_bh() used illegally while idle");
> > > +#endif
> > 
> > Any chance of this using "if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))"?
> > We should be OK providing a do-nothing __maybe_unused rcu_bh_lock_map
> > for lockdep-enabled -rt kernels, right?
> 
> Since this function is small, I prefer if -rt defines their own
> rcu_read_lock_bh() which just does the local_bh_disable(). That would be
> way cleaner IMO. IIRC, -rt does similar things for spinlocks, but it has
> been some time since I looked at the -rt patchset.

I'll do it whichever way you all decide, though I'm not sure I agree about
it being cleaner (especially while RT is still out-of-tree and a change to
the non-RT version that fails to trigger a merge conflict is a concern).

What about moving everything but the local_bh_disable into a separate
function called from rcu_read_lock_bh, and making that a no-op on RT?
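
Roughly like this (just a sketch; the helper name is made up):

static inline void rcu_read_lock_bh_annotate(void)
{
#ifndef CONFIG_PREEMPT_RT_FULL
	__acquire(RCU_BH);
	rcu_lock_acquire(&rcu_bh_lock_map);
	RCU_LOCKDEP_WARN(!rcu_is_watching(),
			 "rcu_read_lock_bh() used illegally while idle");
#endif
}

static inline void rcu_read_lock_bh(void)
{
	local_bh_disable();
	rcu_read_lock_bh_annotate();
}

That keeps the RT difference in one helper instead of an ifdef in the
middle of rcu_read_lock_bh() itself.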

> > >  
> > > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > > index 016c66a98292..a9cdf3d562bc 100644
> > > --- a/kernel/rcu/update.c
> > > +++ b/kernel/rcu/update.c
> > > @@ -296,7 +296,11 @@ int rcu_read_lock_bh_held(void)
> > >   return 0;
> > >   if (!rcu_lockdep_current_cpu_online())
> > >   return 0;
> > > +#ifdef CONFIG_PREEMPT_RT_FULL
> > > + return lock_is_held(&rcu_lock_map) || irqs_disabled();
> > > +#else
> > >   return in_softirq() || irqs_disabled();
> > > +#endif
> > 
> > And globally.
> 
> And could be untangled a bit as well:
> 
> if (irqs_disabled())
>   return 1;
> 
> if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
>   return lock_is_held(&rcu_lock_map);
> 
> return in_softirq();

OK.

-Scott




Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-22 Thread Paul E. McKenney
On Thu, Aug 22, 2019 at 09:36:21PM -0500, Scott Wood wrote:
> On Wed, 2019-08-21 at 16:33 -0700, Paul E. McKenney wrote:
> > On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > > index 388ace315f32..d6e357378732 100644
> > > --- a/include/linux/rcupdate.h
> > > +++ b/include/linux/rcupdate.h
> > > @@ -615,10 +615,12 @@ static inline void rcu_read_unlock(void)
> > >  static inline void rcu_read_lock_bh(void)
> > >  {
> > >   local_bh_disable();
> > > +#ifndef CONFIG_PREEMPT_RT_FULL
> > >   __acquire(RCU_BH);
> > >   rcu_lock_acquire(&rcu_bh_lock_map);
> > >   RCU_LOCKDEP_WARN(!rcu_is_watching(),
> > >"rcu_read_lock_bh() used illegally while idle");
> > > +#endif
> > 
> > Any chance of this using "if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))"?
> > We should be OK providing a do-nothing __maybe_unused rcu_bh_lock_map
> > for lockdep-enabled -rt kernels, right?
> 
> OK.
> 
> > > @@ -185,8 +189,10 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
> > >   WARN_ON_ONCE(count < 0);
> > >   local_irq_enable();
> > >  
> > > - if (!in_atomic())
> > > + if (!in_atomic()) {
> > > + rcu_read_unlock();
> > >   local_unlock(bh_lock);
> > > + }
> > 
> > The return from in_atomic() is guaranteed to be the same at
> > local_bh_enable() time as was at the call to the corresponding
> > local_bh_disable()?
> 
> That's an existing requirement on RT (which rcutorture currently violates)
> due to bh_lock.
> 
> > I could have sworn that I ran afoul of this last year.  Might these
> > added rcu_read_lock() and rcu_read_unlock() calls need to check for
> > CONFIG_PREEMPT_RT_FULL?
> 
> This code is already under a PREEMPT_RT_FULL ifdef.

Good enough, then!

Thanx, Paul


Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-22 Thread Scott Wood
On Wed, 2019-08-21 at 16:33 -0700, Paul E. McKenney wrote:
> On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 388ace315f32..d6e357378732 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -615,10 +615,12 @@ static inline void rcu_read_unlock(void)
> >  static inline void rcu_read_lock_bh(void)
> >  {
> > local_bh_disable();
> > +#ifndef CONFIG_PREEMPT_RT_FULL
> > __acquire(RCU_BH);
> > rcu_lock_acquire(&rcu_bh_lock_map);
> > RCU_LOCKDEP_WARN(!rcu_is_watching(),
> >  "rcu_read_lock_bh() used illegally while idle");
> > +#endif
> 
> Any chance of this using "if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))"?
> We should be OK providing a do-nothing __maybe_unused rcu_bh_lock_map
> for lockdep-enabled -rt kernels, right?

OK.

> > @@ -185,8 +189,10 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
> > WARN_ON_ONCE(count < 0);
> > local_irq_enable();
> >  
> > -   if (!in_atomic())
> > +   if (!in_atomic()) {
> > +   rcu_read_unlock();
> > local_unlock(bh_lock);
> > +   }
> 
> The return from in_atomic() is guaranteed to be the same at
> local_bh_enable() time as was at the call to the corresponding
> local_bh_disable()?

That's an existing requirement on RT (which rcutorture currently violates)
due to bh_lock.
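
For illustration, the kind of imbalance that's already disallowed on RT
(and that these rcu_read_lock()/rcu_read_unlock() calls would also trip
over) is roughly:

	local_bh_disable();	/* !in_atomic(): takes bh_lock, now also rcu_read_lock() */
	preempt_disable();
	local_bh_enable();	/* in_atomic() here: skips the unlock and rcu_read_unlock() */
	preempt_enable();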

> I could have sworn that I ran afoul of this last year.  Might these
> added rcu_read_lock() and rcu_read_unlock() calls need to check for
> CONFIG_PREEMPT_RT_FULL?

This code is already under a PREEMPT_RT_FULL ifdef.

-Scott




Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-22 Thread Paul E. McKenney
On Thu, Aug 22, 2019 at 09:50:09PM -0400, Joel Fernandes wrote:
> On Thu, Aug 22, 2019 at 08:27:06AM -0700, Paul E. McKenney wrote:
> > On Thu, Aug 22, 2019 at 09:39:55AM -0400, Joel Fernandes wrote:
> > > On Wed, Aug 21, 2019 at 04:33:58PM -0700, Paul E. McKenney wrote:
> > > > On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > > > > A plain local_bh_disable() is documented as creating an RCU critical
> > > > > section, and (at least) rcutorture expects this to be the case.  However,
> > > > > in_softirq() doesn't block a grace period on PREEMPT_RT, since RCU checks
> > > > > preempt_count() directly.  Even if RCU were changed to check
> > > > > in_softirq(), that wouldn't allow blocked BH disablers to be boosted.
> > > > > 
> > > > > Fix this by calling rcu_read_lock() from local_bh_disable(), and update
> > > > > rcu_read_lock_bh_held() accordingly.
> > > > 
> > > > Cool!  Some questions and comments below.
> > > > 
> > > > Thanx, Paul
> > > > 
> > > > > Signed-off-by: Scott Wood 
> > > > > ---
> > > > > Another question is whether non-raw spinlocks are intended to create an
> > > > > RCU read-side critical section due to implicit preempt disable.
> > > > 
> > > > Hmmm...  Did non-raw spinlocks act like rcu_read_lock_sched()
> > > > and rcu_read_unlock_sched() pairs in -rt prior to the RCU flavor
> > > > consolidation?  If not, I don't see why they should do so after that
> > > > consolidation in -rt.
> > > 
> > > Maybe I am missing something, but I didn't see the connection between
> > > consolidation and this patch. AFAICS, this patch is so that
> > > rcu_read_lock_bh_held() works at all on -rt. Did I badly miss something?
> > 
> > I was interpreting Scott's question (which would be excluded from the
> > git commit log) as relating to a possible follow-on patch.
> > 
> > The question is "how special can non-raw spinlocks be in -rt?".  From what
> > I can see, they have been treated as sleeplocks from an RCU viewpoint,
> > so maybe that should continue to be the case.  It does deserve some
> > thought because in mainline a non-raw spinlock really would block a
> > post-consolidation RCU grace period, even in PREEMPT kernels.
> > 
> > But then again, you cannot preempt a non-raw spinlock in mainline but
> > you can in -rt, so extending that exception to RCU is not unreasonable.
> > 
> > Either way, we do need to make a definite decision and document it.
> > If I were forced to make a decision right now, I would follow the old
> > behavior, so that only raw spinlocks were guaranteed to block RCU grace
> > periods.  But I am not being forced, so let's actually discuss and make
> > a conscious decision.  ;-)
> 
> I think non-raw spinlocks on -rt should at least do rcu_read_lock() so that
> any driver or kernel code that depends on this behavior and works on non-rt
> also works on -rt. It also removes the chance a kernel developer may miss
> documentation and accidentally forget that their code may break on -rt. I am
> curious to see how much this design pattern appears in the kernel
> (spin_lock'ed section "intended" as an RCU-reader by code sequences).
> 
> Logically speaking, to me anything that disables preemption on non-RT should
> do rcu_read_lock() on -rt so that from RCU's perspective, things are working.
> But I wonder where we would draw the line and if the bar is to need actual
> examples of usage patterns to make a decision..
> 
> Any thoughts?

Yes.  Let's listen to what the -rt guys have to say.  After all, they
are the ones who would be dealing with any differences in semantics.

Thanx, Paul


Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-22 Thread Joel Fernandes
On Thu, Aug 22, 2019 at 08:27:06AM -0700, Paul E. McKenney wrote:
> On Thu, Aug 22, 2019 at 09:39:55AM -0400, Joel Fernandes wrote:
> > On Wed, Aug 21, 2019 at 04:33:58PM -0700, Paul E. McKenney wrote:
> > > On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > > > A plain local_bh_disable() is documented as creating an RCU critical
> > > > section, and (at least) rcutorture expects this to be the case.  However,
> > > > in_softirq() doesn't block a grace period on PREEMPT_RT, since RCU checks
> > > > preempt_count() directly.  Even if RCU were changed to check
> > > > in_softirq(), that wouldn't allow blocked BH disablers to be boosted.
> > > > 
> > > > Fix this by calling rcu_read_lock() from local_bh_disable(), and update
> > > > rcu_read_lock_bh_held() accordingly.
> > > 
> > > Cool!  Some questions and comments below.
> > > 
> > >   Thanx, Paul
> > > 
> > > > Signed-off-by: Scott Wood 
> > > > ---
> > > > Another question is whether non-raw spinlocks are intended to create an
> > > > RCU read-side critical section due to implicit preempt disable.
> > > 
> > > Hmmm...  Did non-raw spinlocks act like rcu_read_lock_sched()
> > > and rcu_read_unlock_sched() pairs in -rt prior to the RCU flavor
> > > consolidation?  If not, I don't see why they should do so after that
> > > consolidation in -rt.
> > 
> > Maybe I am missing something, but I didn't see the connection between
> > consolidation and this patch. AFAICS, this patch is so that
> > rcu_read_lock_bh_held() works at all on -rt. Did I badly miss something?
> 
> I was interpreting Scott's question (which would be excluded from the
> git commit log) as relating to a possible follow-on patch.
> 
> The question is "how special can non-raw spinlocks be in -rt?".  From what
> I can see, they have been treated as sleeplocks from an RCU viewpoint,
> so maybe that should continue to be the case.  It does deserve some
> thought because in mainline a non-raw spinlock really would block a
> post-consolidation RCU grace period, even in PREEMPT kernels.
> 
> But then again, you cannot preempt a non-raw spinlock in mainline but
> you can in -rt, so extending that exception to RCU is not unreasonable.
> 
> Either way, we do need to make a definite decision and document it.
> If I were forced to make a decision right now, I would follow the old
> behavior, so that only raw spinlocks were guaranteed to block RCU grace
> periods.  But I am not being forced, so let's actually discuss and make
> a conscious decision.  ;-)

I think non-raw spinlocks on -rt should at least do rcu_read_lock() so that
any driver or kernel code that depends on this behavior and works on non-rt
also works on -rt. It also removes the chance a kernel developer may miss
documentation and accidentally forget that their code may break on -rt. I am
curious to see how much this design pattern appears in the kernel
(spin_lock'ed section "intended" as an RCU-reader by code sequences).
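
The pattern I mean is something like this made-up example, which is only
safe on non-RT because spin_lock() disables preemption there:

	spin_lock(&foo_lock);		/* implies preempt_disable() on non-RT */
	p = rcu_dereference_sched(gp);	/* no explicit rcu_read_lock() */
	if (p)
		do_something_with(p);
	spin_unlock(&foo_lock);

On -rt the spinlock is preemptible, so nothing blocks the grace period
unless rcu_read_lock() is added (or implied by the lock).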

Logically speaking, to me anything that disables preemption on non-RT should
do rcu_read_lock() on -rt so that from RCU's perspective, things are working.
But I wonder where we would draw the line and if the bar is to need actual
examples of usage patterns to make a decision..

Any thoughts?

thanks,

 - Joel



Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-22 Thread Paul E. McKenney
On Thu, Aug 22, 2019 at 09:39:55AM -0400, Joel Fernandes wrote:
> On Wed, Aug 21, 2019 at 04:33:58PM -0700, Paul E. McKenney wrote:
> > On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > > A plain local_bh_disable() is documented as creating an RCU critical
> > > section, and (at least) rcutorture expects this to be the case.  However,
> > > in_softirq() doesn't block a grace period on PREEMPT_RT, since RCU checks
> > > preempt_count() directly.  Even if RCU were changed to check
> > > in_softirq(), that wouldn't allow blocked BH disablers to be boosted.
> > > 
> > > Fix this by calling rcu_read_lock() from local_bh_disable(), and update
> > > rcu_read_lock_bh_held() accordingly.
> > 
> > Cool!  Some questions and comments below.
> > 
> > Thanx, Paul
> > 
> > > Signed-off-by: Scott Wood 
> > > ---
> > > Another question is whether non-raw spinlocks are intended to create an
> > > RCU read-side critical section due to implicit preempt disable.
> > 
> > Hmmm...  Did non-raw spinlocks act like rcu_read_lock_sched()
> > and rcu_read_unlock_sched() pairs in -rt prior to the RCU flavor
> > consolidation?  If not, I don't see why they should do so after that
> > consolidation in -rt.
> 
> Maybe I am missing something, but I didn't see the connection between
> consolidation and this patch. AFAICS, this patch is so that
> rcu_read_lock_bh_held() works at all on -rt. Did I badly miss something?

I was interpreting Scott's question (which would be excluded from the
git commit log) as relating to a possible follow-on patch.

The question is "how special can non-raw spinlocks be in -rt?".  From what
I can see, they have been treated as sleeplocks from an RCU viewpoint,
so maybe that should continue to be the case.  It does deserve some
thought because in mainline a non-raw spinlock really would block a
post-consolidation RCU grace period, even in PREEMPT kernels.

But then again, you cannot preempt a non-raw spinlock in mainline but
you can in -rt, so extending that exception to RCU is not unreasonable.

Either way, we do need to make a definite decision and document it.
If I were forced to make a decision right now, I would follow the old
behavior, so that only raw spinlocks were guaranteed to block RCU grace
periods.  But I am not being forced, so let's actually discuss and make
a conscious decision.  ;-)

Thanx, Paul


Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-22 Thread Joel Fernandes
On Wed, Aug 21, 2019 at 04:33:58PM -0700, Paul E. McKenney wrote:
> On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> > A plain local_bh_disable() is documented as creating an RCU critical
> > section, and (at least) rcutorture expects this to be the case.  However,
> > in_softirq() doesn't block a grace period on PREEMPT_RT, since RCU checks
> > preempt_count() directly.  Even if RCU were changed to check
> > in_softirq(), that wouldn't allow blocked BH disablers to be boosted.
> > 
> > Fix this by calling rcu_read_lock() from local_bh_disable(), and update
> > rcu_read_lock_bh_held() accordingly.
> 
> Cool!  Some questions and comments below.
> 
>   Thanx, Paul
> 
> > Signed-off-by: Scott Wood 
> > ---
> > Another question is whether non-raw spinlocks are intended to create an
> > RCU read-side critical section due to implicit preempt disable.
> 
> Hmmm...  Did non-raw spinlocks act like rcu_read_lock_sched()
> and rcu_read_unlock_sched() pairs in -rt prior to the RCU flavor
> consolidation?  If not, I don't see why they should do so after that
> consolidation in -rt.

Maybe I am missing something, but I didn't see the connection between
consolidation and this patch. AFAICS, this patch is so that
rcu_read_lock_bh_held() works at all on -rt. Did I badly miss something?

> >  If they
> > are, then we'd need to add rcu_read_lock() there as well since RT doesn't
> > disable preemption (and rcutorture should explicitly test with a
> > spinlock).  If not, the documentation should make that clear.
> 
> True enough!
> 
> >  include/linux/rcupdate.h |  4 ++++
> >  kernel/rcu/update.c      |  4 ++++
> >  kernel/softirq.c         | 12 +++++++++---
> >  3 files changed, 17 insertions(+), 3 deletions(-)
> > 
> > diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> > index 388ace315f32..d6e357378732 100644
> > --- a/include/linux/rcupdate.h
> > +++ b/include/linux/rcupdate.h
> > @@ -615,10 +615,12 @@ static inline void rcu_read_unlock(void)
> >  static inline void rcu_read_lock_bh(void)
> >  {
> > local_bh_disable();
> > +#ifndef CONFIG_PREEMPT_RT_FULL
> > __acquire(RCU_BH);
> > rcu_lock_acquire(&rcu_bh_lock_map);
> > RCU_LOCKDEP_WARN(!rcu_is_watching(),
> >  "rcu_read_lock_bh() used illegally while idle");
> > +#endif
> 
> Any chance of this using "if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))"?
> We should be OK providing a do-nothing __maybe_unused rcu_bh_lock_map
> for lockdep-enabled -rt kernels, right?

Since this function is small, I prefer if -rt defines their own
rcu_read_lock_bh() which just does the local_bh_disable(). That would be way
cleaner IMO. IIRC, -rt does similar things for spinlocks, but it has been
some time since I looked at the -rt patchset.
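
Something along these lines is what I have in mind (just a sketch, not a
tested patch):

#ifdef CONFIG_PREEMPT_RT_FULL
static inline void rcu_read_lock_bh(void)
{
	local_bh_disable();
}

static inline void rcu_read_unlock_bh(void)
{
	local_bh_enable();
}
#else
/* current definitions with the RCU_BH sparse/lockdep annotations */
#endif

(With this patch, local_bh_disable() already takes rcu_read_lock() on -rt,
so the -rt variants need nothing beyond the BH disable/enable.)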

> >  }
> >  
> >  /*
> > @@ -628,10 +630,12 @@ static inline void rcu_read_lock_bh(void)
> >   */
> >  static inline void rcu_read_unlock_bh(void)
> >  {
> > +#ifndef CONFIG_PREEMPT_RT_FULL
> > RCU_LOCKDEP_WARN(!rcu_is_watching(),
> >  "rcu_read_unlock_bh() used illegally while idle");
> > rcu_lock_release(&rcu_bh_lock_map);
> > __release(RCU_BH);
> > +#endif
> 
> Ditto.
> 
> > local_bh_enable();
> >  }
> >  
> > diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> > index 016c66a98292..a9cdf3d562bc 100644
> > --- a/kernel/rcu/update.c
> > +++ b/kernel/rcu/update.c
> > @@ -296,7 +296,11 @@ int rcu_read_lock_bh_held(void)
> > return 0;
> > if (!rcu_lockdep_current_cpu_online())
> > return 0;
> > +#ifdef CONFIG_PREEMPT_RT_FULL
> > +   return lock_is_held(&rcu_lock_map) || irqs_disabled();
> > +#else
> > return in_softirq() || irqs_disabled();
> > +#endif
> 
> And globally.

And could be untangled a bit as well:

if (irqs_disabled())
	return 1;

if (IS_ENABLED(CONFIG_PREEMPT_RT_FULL))
	return lock_is_held(&rcu_lock_map);

return in_softirq();

> >  }
> >  EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
> >  
> > diff --git a/kernel/softirq.c b/kernel/softirq.c
> > index d16d080a74f7..6080c9328df1 100644
> > --- a/kernel/softirq.c
> > +++ b/kernel/softirq.c
> > @@ -115,8 +115,10 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
> > long soft_cnt;
> >  
> > WARN_ON_ONCE(in_irq());
> > -   if (!in_atomic())
> > +   if (!in_atomic()) {
> > local_lock(bh_lock);
> > +   rcu_read_lock();
> > +   }
> > soft_cnt = this_cpu_inc_return(softirq_counter);
> > WARN_ON_ONCE(soft_cnt == 0);
> > current->softirq_count += SOFTIRQ_DISABLE_OFFSET;
> > @@ -151,8 +153,10 @@ void _local_bh_enable(void)
> >  #endif
> >  
> > current->softirq_count -= SOFTIRQ_DISABLE_OFFSET;
> > -   if (!in_atomic())
> > +   if (!in_atomic()) {
> > +   rcu_read_unlock();
> > local_unlock(bh_lock);
> > +   }
> >  }
> >  
> >  void _local_bh_enable_rt(void)
> > @@ -185,8 +189,10 @@ void 

Re: [PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-21 Thread Paul E. McKenney
On Wed, Aug 21, 2019 at 06:19:04PM -0500, Scott Wood wrote:
> A plain local_bh_disable() is documented as creating an RCU critical
> section, and (at least) rcutorture expects this to be the case.  However,
> in_softirq() doesn't block a grace period on PREEMPT_RT, since RCU checks
> preempt_count() directly.  Even if RCU were changed to check
> in_softirq(), that wouldn't allow blocked BH disablers to be boosted.
> 
> Fix this by calling rcu_read_lock() from local_bh_disable(), and update
> rcu_read_lock_bh_held() accordingly.

Cool!  Some questions and comments below.

Thanx, Paul

> Signed-off-by: Scott Wood 
> ---
> Another question is whether non-raw spinlocks are intended to create an
> RCU read-side critical section due to implicit preempt disable.

Hmmm...  Did non-raw spinlocks act like rcu_read_lock_sched()
and rcu_read_unlock_sched() pairs in -rt prior to the RCU flavor
consolidation?  If not, I don't see why they should do so after that
consolidation in -rt.

>  If they
> are, then we'd need to add rcu_read_lock() there as well since RT doesn't
> disable preemption (and rcutorture should explicitly test with a
> spinlock).  If not, the documentation should make that clear.

True enough!

>  include/linux/rcupdate.h |  4 ++++
>  kernel/rcu/update.c      |  4 ++++
>  kernel/softirq.c         | 12 +++++++++---
>  3 files changed, 17 insertions(+), 3 deletions(-)
> 
> diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
> index 388ace315f32..d6e357378732 100644
> --- a/include/linux/rcupdate.h
> +++ b/include/linux/rcupdate.h
> @@ -615,10 +615,12 @@ static inline void rcu_read_unlock(void)
>  static inline void rcu_read_lock_bh(void)
>  {
>   local_bh_disable();
> +#ifndef CONFIG_PREEMPT_RT_FULL
>   __acquire(RCU_BH);
>   rcu_lock_acquire(&rcu_bh_lock_map);
>   RCU_LOCKDEP_WARN(!rcu_is_watching(),
>"rcu_read_lock_bh() used illegally while idle");
> +#endif

Any chance of this using "if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL))"?
We should be OK providing a do-nothing __maybe_unused rcu_bh_lock_map
for lockdep-enabled -rt kernels, right?
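
That is, roughly (hand-waving sketch; whether the sparse __acquire()
annotation can live inside the "if" is a detail to sort out):

static inline void rcu_read_lock_bh(void)
{
	local_bh_disable();
	if (!IS_ENABLED(CONFIG_PREEMPT_RT_FULL)) {
		__acquire(RCU_BH);
		rcu_lock_acquire(&rcu_bh_lock_map);
		RCU_LOCKDEP_WARN(!rcu_is_watching(),
				 "rcu_read_lock_bh() used illegally while idle");
	}
}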

>  }
>  
>  /*
> @@ -628,10 +630,12 @@ static inline void rcu_read_lock_bh(void)
>   */
>  static inline void rcu_read_unlock_bh(void)
>  {
> +#ifndef CONFIG_PREEMPT_RT_FULL
>   RCU_LOCKDEP_WARN(!rcu_is_watching(),
>"rcu_read_unlock_bh() used illegally while idle");
>   rcu_lock_release(&rcu_bh_lock_map);
>   __release(RCU_BH);
> +#endif

Ditto.

>   local_bh_enable();
>  }
>  
> diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
> index 016c66a98292..a9cdf3d562bc 100644
> --- a/kernel/rcu/update.c
> +++ b/kernel/rcu/update.c
> @@ -296,7 +296,11 @@ int rcu_read_lock_bh_held(void)
>   return 0;
>   if (!rcu_lockdep_current_cpu_online())
>   return 0;
> +#ifdef CONFIG_PREEMPT_RT_FULL
> + return lock_is_held(&rcu_lock_map) || irqs_disabled();
> +#else
>   return in_softirq() || irqs_disabled();
> +#endif

And globally.

>  }
>  EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
>  
> diff --git a/kernel/softirq.c b/kernel/softirq.c
> index d16d080a74f7..6080c9328df1 100644
> --- a/kernel/softirq.c
> +++ b/kernel/softirq.c
> @@ -115,8 +115,10 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
>   long soft_cnt;
>  
>   WARN_ON_ONCE(in_irq());
> - if (!in_atomic())
> + if (!in_atomic()) {
>   local_lock(bh_lock);
> + rcu_read_lock();
> + }
>   soft_cnt = this_cpu_inc_return(softirq_counter);
>   WARN_ON_ONCE(soft_cnt == 0);
>   current->softirq_count += SOFTIRQ_DISABLE_OFFSET;
> @@ -151,8 +153,10 @@ void _local_bh_enable(void)
>  #endif
>  
>   current->softirq_count -= SOFTIRQ_DISABLE_OFFSET;
> - if (!in_atomic())
> + if (!in_atomic()) {
> + rcu_read_unlock();
>   local_unlock(bh_lock);
> + }
>  }
>  
>  void _local_bh_enable_rt(void)
> @@ -185,8 +189,10 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
>   WARN_ON_ONCE(count < 0);
>   local_irq_enable();
>  
> - if (!in_atomic())
> + if (!in_atomic()) {
> + rcu_read_unlock();
>   local_unlock(bh_lock);
> + }

The return from in_atomic() is guaranteed to be the same at
local_bh_enable() time as was at the call to the corresponding
local_bh_disable()?

I could have sworn that I ran afoul of this last year.  Might these
added rcu_read_lock() and rcu_read_unlock() calls need to check for
CONFIG_PREEMPT_RT_FULL?

>   current->softirq_count -= SOFTIRQ_DISABLE_OFFSET;
>   preempt_check_resched();
> -- 
> 1.8.3.1
> 


[PATCH RT v2 1/3] rcu: Acquire RCU lock when disabling BHs

2019-08-21 Thread Scott Wood
A plain local_bh_disable() is documented as creating an RCU critical
section, and (at least) rcutorture expects this to be the case.  However,
in_softirq() doesn't block a grace period on PREEMPT_RT, since RCU checks
preempt_count() directly.  Even if RCU were changed to check
in_softirq(), that wouldn't allow blocked BH disablers to be boosted.

Fix this by calling rcu_read_lock() from local_bh_disable(), and update
rcu_read_lock_bh_held() accordingly.

Signed-off-by: Scott Wood 
---
Another question is whether non-raw spinlocks are intended to create an
RCU read-side critical section due to implicit preempt disable.  If they
are, then we'd need to add rcu_read_lock() there as well since RT doesn't
disable preemption (and rcutorture should explicitly test with a
spinlock).  If not, the documentation should make that clear.

 include/linux/rcupdate.h |  4 ++++
 kernel/rcu/update.c      |  4 ++++
 kernel/softirq.c         | 12 +++++++++---
 3 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 388ace315f32..d6e357378732 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -615,10 +615,12 @@ static inline void rcu_read_unlock(void)
 static inline void rcu_read_lock_bh(void)
 {
local_bh_disable();
+#ifndef CONFIG_PREEMPT_RT_FULL
__acquire(RCU_BH);
rcu_lock_acquire(&rcu_bh_lock_map);
RCU_LOCKDEP_WARN(!rcu_is_watching(),
 "rcu_read_lock_bh() used illegally while idle");
+#endif
 }
 
 /*
@@ -628,10 +630,12 @@ static inline void rcu_read_lock_bh(void)
  */
 static inline void rcu_read_unlock_bh(void)
 {
+#ifndef CONFIG_PREEMPT_RT_FULL
RCU_LOCKDEP_WARN(!rcu_is_watching(),
 "rcu_read_unlock_bh() used illegally while idle");
rcu_lock_release(&rcu_bh_lock_map);
__release(RCU_BH);
+#endif
local_bh_enable();
 }
 
diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 016c66a98292..a9cdf3d562bc 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -296,7 +296,11 @@ int rcu_read_lock_bh_held(void)
return 0;
if (!rcu_lockdep_current_cpu_online())
return 0;
+#ifdef CONFIG_PREEMPT_RT_FULL
+   return lock_is_held(&rcu_lock_map) || irqs_disabled();
+#else
return in_softirq() || irqs_disabled();
+#endif
 }
 EXPORT_SYMBOL_GPL(rcu_read_lock_bh_held);
 
diff --git a/kernel/softirq.c b/kernel/softirq.c
index d16d080a74f7..6080c9328df1 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -115,8 +115,10 @@ void __local_bh_disable_ip(unsigned long ip, unsigned int cnt)
long soft_cnt;
 
WARN_ON_ONCE(in_irq());
-   if (!in_atomic())
+   if (!in_atomic()) {
local_lock(bh_lock);
+   rcu_read_lock();
+   }
soft_cnt = this_cpu_inc_return(softirq_counter);
WARN_ON_ONCE(soft_cnt == 0);
current->softirq_count += SOFTIRQ_DISABLE_OFFSET;
@@ -151,8 +153,10 @@ void _local_bh_enable(void)
 #endif
 
current->softirq_count -= SOFTIRQ_DISABLE_OFFSET;
-   if (!in_atomic())
+   if (!in_atomic()) {
+   rcu_read_unlock();
local_unlock(bh_lock);
+   }
 }
 
 void _local_bh_enable_rt(void)
@@ -185,8 +189,10 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
WARN_ON_ONCE(count < 0);
local_irq_enable();
 
-   if (!in_atomic())
+   if (!in_atomic()) {
+   rcu_read_unlock();
local_unlock(bh_lock);
+   }
 
current->softirq_count -= SOFTIRQ_DISABLE_OFFSET;
preempt_check_resched();
-- 
1.8.3.1