Re: [PATCH v3 4/4] rcu: Unlock non-start node only after accessing its gp_seq_needed

2018-05-22 Thread Paul E. McKenney
On Mon, May 21, 2018 at 09:43:27PM -0700, Joel Fernandes wrote:
> On Mon, May 21, 2018 at 09:16:51PM -0700, Paul E. McKenney wrote:
> > On Mon, May 21, 2018 at 05:28:23PM -0700, Paul E. McKenney wrote:
> > > On Mon, May 21, 2018 at 05:07:34PM -0700, Joel Fernandes wrote:
> > > > On Mon, May 21, 2018 at 04:25:38PM -0700, Paul E. McKenney wrote:
> > > > > On Sun, May 20, 2018 at 09:42:20PM -0700, Joel Fernandes wrote:
> > > > > > We acquire gp_seq_needed locklessly. To be safe, let's do the unlocking
> > > > > > after the access.
> > > > > 
> > > > > Actually, no, we hold rnp_start's ->lock throughout.  And this CPU (or in
> > > > > the case of no-CBs CPUs, this task) is in charge of rdp->gp_seq_needed,
> > > > > so nothing else is accessing it.  Or at least that is the intent.  ;-)
> > > > 
> > > > I was talking about protecting the internal node's rnp->gp_seq_needed, not
> > > > the rnp_start's gp_seq_needed.
> > > 
> > > Ah, good point, I missed the "if" condition.  This can be argued to work,
> > > sort of, given that we still hold the leaf rcu_node structure's lock,
> > > so that there is a limit to how far grace periods can advance.
> > > 
> > > But the code would of course be much cleaner with your change.
> > > 
> > > > We are protecting them in the loop:
> > > > 
> > > > like this:
> > > > for (...)
> > > >         if (rnp != rnp_start)
> > > >                 raw_spin_lock_rcu_node(rnp);
> > > >         [...]
> > > >         // access rnp->gp_seq and rnp->gp_seq_needed
> > > >         [...]
> > > >         if (rnp != rnp_start)
> > > >                 raw_spin_unlock_rcu_node(rnp);
> > > > 
> > > > But we don't need to do such protection in unlock_out? I'm sorry if I'm
> > > > missing something, but I'm wondering: if rnp->gp_seq_needed of an internal
> > > > node can be accessed locklessly, then why can't that be done also in the
> > > > funnel locking loop - after all, we are holding the rnp_start's lock
> > > > throughout, right?
> > > 
> > > I was focused on the updates, and missed the rnp->gp_seq_req access in the
> > > "if" statement.  The current code does sort of work, but only assuming
> > > that the compiler doesn't tear the load, and so your change would help.
> > > Could you please resend with your other two updated patches?  It depends
> > > on one of the earlier patches, so does not apply cleanly as-is.  I could
> > > hand-apply it, but that sounds like a good way to make your updated
> > > series fail to apply.  ;-)
> > > 
> > > But could you also make the commit log explicitly call out the "if"
> > > condition as being the offending access?
> > 
> > Never mind, me being stupid.  I need to apply this change to the original
> > commit "rcu: Make rcu_nocb_wait_gp() check if GP already requested", which
> > I have done with this attribution:
> > 
> > [ paulmck: Move lock release past "if" as suggested by Joel Fernandes. ]
> > 
> > I have rebased my stack on top of the updated commit.
> 
> Cool, makes sense. I am assuming this means I don't have to resend this
> patch; if I do, let me know :)

No need.

> Either way, once you push your updated tree to kernel.org, I'll double check
> to make sure the change is in :)

Please see 9624746baf6b ("rcu: Make rcu_nocb_wait_gp() check if GP
already requested") on branch rcu/dev.

Thanx, Paul
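
Editor's note: for quick reference, here is how the unlock_out tail of
rcu_start_this_gp() reads once the fix is folded into that commit. This is a
sketch reconstructed from the diff at the bottom of this thread, not quoted
from the tree itself:

unlock_out:
        /* Push furthest requested GP to leaf node and rcu_data structure. */
        if (ULONG_CMP_GE(rnp->gp_seq_needed, gp_seq_req)) {
                rnp_start->gp_seq_needed = gp_seq_req;
                rdp->gp_seq_needed = gp_seq_req;
        }
        /* The non-start node's lock is now released only after the
         * ->gp_seq_needed access above, closing the lockless window. */
        if (rnp != rnp_start)
                raw_spin_unlock_rcu_node(rnp);
        return ret;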



Re: [PATCH v3 4/4] rcu: Unlock non-start node only after accessing its gp_seq_needed

2018-05-21 Thread Joel Fernandes
On Mon, May 21, 2018 at 09:16:51PM -0700, Paul E. McKenney wrote:
> On Mon, May 21, 2018 at 05:28:23PM -0700, Paul E. McKenney wrote:
> > On Mon, May 21, 2018 at 05:07:34PM -0700, Joel Fernandes wrote:
> > > On Mon, May 21, 2018 at 04:25:38PM -0700, Paul E. McKenney wrote:
> > > > On Sun, May 20, 2018 at 09:42:20PM -0700, Joel Fernandes wrote:
> > > > > We acquire gp_seq_needed locklessly. To be safe, let's do the unlocking
> > > > > after the access.
> > > > 
> > > > Actually, no, we hold rnp_start's ->lock throughout.  And this CPU (or in
> > > > the case of no-CBs CPUs, this task) is in charge of rdp->gp_seq_needed,
> > > > so nothing else is accessing it.  Or at least that is the intent.  ;-)
> > > 
> > > I was talking about protecting the internal node's rnp->gp_seq_needed, not
> > > the rnp_start's gp_seq_needed.
> > 
> > Ah, good point, I missed the "if" condition.  This can be argued to work,
> > sort of, given that we still hold the leaf rcu_node structure's lock,
> > so that there is a limit to how far grace periods can advance.
> > 
> > But the code would of course be much cleaner with your change.
> > 
> > > We are protecting them in the loop:
> > > 
> > > like this:
> > > for (...)
> > >         if (rnp != rnp_start)
> > >                 raw_spin_lock_rcu_node(rnp);
> > >         [...]
> > >         // access rnp->gp_seq and rnp->gp_seq_needed
> > >         [...]
> > >         if (rnp != rnp_start)
> > >                 raw_spin_unlock_rcu_node(rnp);
> > > 
> > > But we don't need to do such protection in unlock_out? I'm sorry if I'm
> > > missing something, but I'm wondering: if rnp->gp_seq_needed of an internal
> > > node can be accessed locklessly, then why can't that be done also in the
> > > funnel locking loop - after all, we are holding the rnp_start's lock
> > > throughout, right?
> > 
> > I was focused on the updates, and missed the rnp->gp_seq_req access in the
> > "if" statement.  The current code does sort of work, but only assuming
> > that the compiler doesn't tear the load, and so your change would help.
> > Could you please resend with your other two updated patches?  It depends
> > on one of the earlier patches, so does not apply cleanly as-is.  I could
> > hand-apply it, but that sounds like a good way to make your updated
> > series fail to apply.  ;-)
> > 
> > But could you also make the commit log explicitly call out the "if"
> > condition as being the offending access?
> 
> Never mind, me being stupid.  I need to apply this change to the original
> commit "rcu: Make rcu_nocb_wait_gp() check if GP already requested", which
> I have done with this attribution:
> 
> [ paulmck: Move lock release past "if" as suggested by Joel Fernandes. ]
> 
> I have rebased my stack on top of the updated commit.

Cool, makes sense. I am assuming this means I don't have to resend this
patch; if I do, let me know :)

Either way, once you push your updated tree to kernel.org, I'll double check
to make sure the change is in :)

thanks, good night,

 - Joel
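
Editor's note: the "tear the load" concern above refers to the compiler
splitting a plain full-width load into narrower accesses. Below is a minimal
userspace sketch of the idiom that rules this out; READ_ONCE() is
re-implemented here for illustration, and struct rnp_sketch is a hypothetical
stand-in for rcu_node:

#include <stdio.h>

/*
 * Userspace stand-in for the kernel's READ_ONCE(): the volatile cast
 * forces the compiler to emit exactly one full-width load, so the value
 * cannot be assembled from two narrower (torn) accesses.
 */
#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

struct rnp_sketch {             /* hypothetical stand-in for rcu_node */
        unsigned long gp_seq_needed;
};

int main(void)
{
        struct rnp_sketch rnp = { .gp_seq_needed = 42 };

        /* Plain load: the compiler may tear it or re-load it. */
        unsigned long plain = rnp.gp_seq_needed;

        /* READ_ONCE(): one full-width access the compiler must not split. */
        unsigned long once = READ_ONCE(rnp.gp_seq_needed);

        printf("plain=%lu once=%lu\n", plain, once);
        return 0;
}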



Re: [PATCH v3 4/4] rcu: Unlock non-start node only after accessing its gp_seq_needed

2018-05-21 Thread Paul E. McKenney
On Mon, May 21, 2018 at 05:28:23PM -0700, Paul E. McKenney wrote:
> On Mon, May 21, 2018 at 05:07:34PM -0700, Joel Fernandes wrote:
> > On Mon, May 21, 2018 at 04:25:38PM -0700, Paul E. McKenney wrote:
> > > On Sun, May 20, 2018 at 09:42:20PM -0700, Joel Fernandes wrote:
> > > > We acquire gp_seq_needed locklessly. To be safe, let's do the unlocking
> > > > after the access.
> > > 
> > > Actually, no, we hold rnp_start's ->lock throughout.  And this CPU (or in
> > > the case of no-CBs CPUs, this task) is in charge of rdp->gp_seq_needed,
> > > so nothing else is accessing it.  Or at least that is the intent.  ;-)
> > 
> > I was talking about protecting the internal node's rnp->gp_seq_needed, not
> > the rnp_start's gp_seq_needed.
> 
> Ah, good point, I missed the "if" condition.  This can be argued to work,
> sort of, given that we still hold the leaf rcu_node structure's lock,
> so that there is a limit to how far grace periods can advance.
> 
> But the code would of course be much cleaner with your change.
> 
> > We are protecting them in the loop:
> > 
> > like this:
> > for (...)
> >         if (rnp != rnp_start)
> >                 raw_spin_lock_rcu_node(rnp);
> >         [...]
> >         // access rnp->gp_seq and rnp->gp_seq_needed
> >         [...]
> >         if (rnp != rnp_start)
> >                 raw_spin_unlock_rcu_node(rnp);
> > 
> > But we don't need to do such protection in unlock_out? I'm sorry if I'm
> > missing something, but I'm wondering: if rnp->gp_seq_needed of an internal
> > node can be accessed locklessly, then why can't that be done also in the
> > funnel locking loop - after all, we are holding the rnp_start's lock
> > throughout, right?
> 
> I was focused on the updates, and missed the rnp->gp_seq_req access in the
> "if" statement.  The current code does sort of work, but only assuming
> that the compiler doesn't tear the load, and so your change would help.
> Could you please resend with your other two updated patches?  It depends
> on one of the earlier patches, so does not apply cleanly as-is.  I could
> hand-apply it, but that sounds like a good way to make your updated
> series fail to apply.  ;-)
> 
> But could you also make the commit log explicitly call out the "if"
> condition as being the offending access?

Never mind, me being stupid.  I need to apply this change to the original
commit "rcu: Make rcu_nocb_wait_gp() check if GP already requested", which
I have done with this attribution:

[ paulmck: Move lock release past "if" as suggested by Joel Fernandes. ]

I have rebased my stack on top of the updated commit.

Thanx, Paul



Re: [PATCH v3 4/4] rcu: Unlock non-start node only after accessing its gp_seq_needed

2018-05-21 Thread Paul E. McKenney
On Mon, May 21, 2018 at 05:07:34PM -0700, Joel Fernandes wrote:
> On Mon, May 21, 2018 at 04:25:38PM -0700, Paul E. McKenney wrote:
> > On Sun, May 20, 2018 at 09:42:20PM -0700, Joel Fernandes wrote:
> > > We acquire gp_seq_needed locklessly. To be safe, let's do the unlocking
> > > after the access.
> > 
> > Actually, no, we hold rnp_start's ->lock throughout.  And this CPU (or in
> > the case of no-CBs CPUs, this task) is in charge of rdp->gp_seq_needed,
> > so nothing else is accessing it.  Or at least that is the intent.  ;-)
> 
> I was talking about protecting the internal node's rnp->gp_seq_needed, not
> the rnp_start's gp_seq_needed.

Ah, good point, I missed the "if" condition.  This can be argued to work,
sort of, given that we still hold the leaf rcu_node structure's lock,
so that there is a limit to how far grace periods can advance.

But the code would of course be much cleaner with your change.

> We are protecting them in the loop:
> 
> like this:
> for (...)
>         if (rnp != rnp_start)
>                 raw_spin_lock_rcu_node(rnp);
>         [...]
>         // access rnp->gp_seq and rnp->gp_seq_needed
>         [...]
>         if (rnp != rnp_start)
>                 raw_spin_unlock_rcu_node(rnp);
> 
> But we don't need to do such protection in unlock_out? I'm sorry if I'm
> missing something, but I'm wondering: if rnp->gp_seq_needed of an internal
> node can be accessed locklessly, then why can't that be done also in the
> funnel locking loop - after all, we are holding the rnp_start's lock
> throughout, right?

I was focused on the updates, and missed the rnp->gp_seq_req access in the
"if" statement.  The current code does sort of work, but only assuming
that the compiler doesn't tear the load, and so your change would help.
Could you please resend with your other two updated patches?  It depends
on one of the earlier patches, so does not apply cleanly as-is.  I could
hand-apply it, but that sounds like a good way to make your updated
series fail to apply.  ;-)

But could you also make the commit log explicitly call out the "if"
condition as being the offending access?

Thanx, Paul



Re: [PATCH v3 4/4] rcu: Unlock non-start node only after accessing its gp_seq_needed

2018-05-21 Thread Joel Fernandes
On Mon, May 21, 2018 at 04:25:38PM -0700, Paul E. McKenney wrote:
> On Sun, May 20, 2018 at 09:42:20PM -0700, Joel Fernandes wrote:
> > We acquire gp_seq_needed locklessly. To be safe, let's do the unlocking
> > after the access.
> 
> Actually, no, we hold rnp_start's ->lock throughout.  And this CPU (or in
> the case of no-CBs CPUs, this task) is in charge of rdp->gp_seq_needed,
> so nothing else is accessing it.  Or at least that is the intent.  ;-)

I was talking about protecting the internal node's rnp->gp_seq_needed, not
the rnp_start's gp_seq_needed.

We are protecting them in the loop:

like this:
for (...)
        if (rnp != rnp_start)
                raw_spin_lock_rcu_node(rnp);
        [...]
        // access rnp->gp_seq and rnp->gp_seq_needed
        [...]
        if (rnp != rnp_start)
                raw_spin_unlock_rcu_node(rnp);

But we don't need to do such protection in unlock_out? I'm sorry if I'm
missing something, but I'm wondering: if rnp->gp_seq_needed of an internal
node can be accessed locklessly, then why can't that be done also in the
funnel locking loop - after all, we are holding the rnp_start's lock
throughout, right?

thanks!

 - Joel
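
Editor's note: the ordering Joel is asking about is the pre-patch unlock_out
tail, reconstructed below from the diff later in this thread. For
rnp != rnp_start, the ULONG_CMP_GE() test reads rnp->gp_seq_needed after
rnp's lock has already been dropped, which is the lockless access the patch
removes:

unlock_out:
        if (rnp != rnp_start)
                raw_spin_unlock_rcu_node(rnp);  /* rnp's lock dropped here... */
        /* Push furthest requested GP to leaf node and rcu_data structure. */
        if (ULONG_CMP_GE(rnp->gp_seq_needed, gp_seq_req)) {  /* ...read here */
                rnp_start->gp_seq_needed = gp_seq_req;
                rdp->gp_seq_needed = gp_seq_req;
        }
        return ret;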
 


Re: [PATCH v3 4/4] rcu: Unlock non-start node only after accessing its gp_seq_needed

2018-05-21 Thread Paul E. McKenney
On Sun, May 20, 2018 at 09:42:20PM -0700, Joel Fernandes wrote:
> We acquire gp_seq_needed locklessly. To be safe, let's do the unlocking
> after the access.

Actually, no, we hold rnp_start's ->lock throughout.  And this CPU (or in
the case of no-CBs CPUs, this task) is in charge of rdp->gp_seq_needed,
so nothing else is accessing it.  Or at least that is the intent.  ;-)

One exception is CPU hotplug, but in that case, only the CPU doing the
hotplugging is allowed to touch rdp->gp_seq_needed and even then only
while the incoming/outgoing CPU is inactive.

Thanx, Paul

> Signed-off-by: Joel Fernandes 
> ---
>  kernel/rcu/tree.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 879c67a31116..efbd21b2a1a6 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -1603,13 +1603,13 @@ static bool rcu_start_this_gp(struct rcu_node *rnp_start, struct rcu_data *rdp,
>  	trace_rcu_grace_period(rsp->name, READ_ONCE(rsp->gp_seq), TPS("newreq"));
>  	ret = true;  /* Caller must wake GP kthread. */
>  unlock_out:
> -	if (rnp != rnp_start)
> -		raw_spin_unlock_rcu_node(rnp);
>  	/* Push furthest requested GP to leaf node and rcu_data structure. */
>  	if (ULONG_CMP_GE(rnp->gp_seq_needed, gp_seq_req)) {
>  		rnp_start->gp_seq_needed = gp_seq_req;
>  		rdp->gp_seq_needed = gp_seq_req;
>  	}
> +	if (rnp != rnp_start)
> +		raw_spin_unlock_rcu_node(rnp);
>  	return ret;
>  }
> 
> -- 
> 2.17.0.441.gb46fe60e1d-goog
> 



[PATCH v3 4/4] rcu: Unlock non-start node only after accessing its gp_seq_needed

2018-05-20 Thread Joel Fernandes
We acquire gp_seq_needed locklessly. To be safe, let's do the unlocking
after the access.

Signed-off-by: Joel Fernandes 
---
 kernel/rcu/tree.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 879c67a31116..efbd21b2a1a6 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -1603,13 +1603,13 @@ static bool rcu_start_this_gp(struct rcu_node *rnp_start, struct rcu_data *rdp,
 	trace_rcu_grace_period(rsp->name, READ_ONCE(rsp->gp_seq), TPS("newreq"));
 	ret = true;  /* Caller must wake GP kthread. */
 unlock_out:
-	if (rnp != rnp_start)
-		raw_spin_unlock_rcu_node(rnp);
 	/* Push furthest requested GP to leaf node and rcu_data structure. */
 	if (ULONG_CMP_GE(rnp->gp_seq_needed, gp_seq_req)) {
 		rnp_start->gp_seq_needed = gp_seq_req;
 		rdp->gp_seq_needed = gp_seq_req;
 	}
+	if (rnp != rnp_start)
+		raw_spin_unlock_rcu_node(rnp);
 	return ret;
 }
 
-- 
2.17.0.441.gb46fe60e1d-goog


