On Sat, Aug 24, 2013 at 03:25:36PM -0400, Mathieu Desnoyers wrote:
> * Paul E. McKenney (paul...@linux.vnet.ibm.com) wrote:
> [...]
> > The result is as follows.  Better?
> 
> Hi Paul,
> 
> Pitching in late in the thread, so that I can get a share of the fun ;-)
> 
> >                                                     Thanx, Paul
> > 
> > #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
> > static void rcu_torture_leak_cb(struct rcu_head *rhp)
> > {
> > }
> > 
> > static void rcu_torture_err_cb(struct rcu_head *rhp)
> > {
> >     /*
> >      * This -might- happen due to race conditions, but is unlikely.
> >      * The scenario that leads to this happening is that the
> >      * first of the pair of duplicate callbacks is queued,
> >      * someone else starts a grace period that includes that
> >      * callback, then the second of the pair must wait for the
> >      * next grace period.  Unlikely, but can happen.  If it
> >      * does happen, the debug-objects subsystem won't have splatted.
> >      */
> >     pr_alert("rcutorture: duplicated callback was invoked.\n");
> > }
> > #endif /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */
> > 
> 
> Hrm. Putting an #ifdef within a function when not utterly needed is
> usually a bad idea. How about:
> 
> /*
>  * Verify that double-free causes debug-objects to complain, but only
>  * if CONFIG_DEBUG_OBJECTS_RCU_HEAD=y.  Otherwise, say that the test
>  * cannot be carried out.
>  */
> #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
> static void rcu_test_debug_objects(void)
> {
>       struct rcu_head rh1;
>       struct rcu_head rh2;
> 
>       init_rcu_head_on_stack(&rh1);
>       init_rcu_head_on_stack(&rh2);
>       pr_alert("rcutorture: WARN: Duplicate call_rcu() test starting.\n");
>       preempt_disable(); /* Prevent preemption from interrupting test. */
>       rcu_read_lock(); /* Make it impossible to finish a grace period. */
>       call_rcu(&rh1, rcu_torture_leak_cb); /* Start grace period. */
>       local_irq_disable(); /* Make it harder to start a new grace period. */
>       call_rcu(&rh2, rcu_torture_leak_cb);
>       call_rcu(&rh2, rcu_torture_err_cb); /* Duplicate callback. */
>       local_irq_enable();
>       rcu_read_unlock();
>       preempt_enable();
>       rcu_barrier();
>       pr_alert("rcutorture: WARN: Duplicate call_rcu() test complete.\n");
>       destroy_rcu_head_on_stack(&rh1);
>       destroy_rcu_head_on_stack(&rh2);
> }
> #else /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */
> static void rcu_test_debug_objects(void)
> {
> 	pr_alert("rcutorture: !CONFIG_DEBUG_OBJECTS_RCU_HEAD, not testing duplicate call_rcu()\n");
> }
> #endif /* #else #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */

The objection to this is that it duplicates the function header, both
copies of which must be updated.  See Josh's and my discussion on this
point earlier in the thread.  Who knows, Josh might eventually convince
me to individually ifdef the functions in kernel/rcutree_plugin.h,
but I am not quite there yet.  ;-)

That said, I do see two benefits of individual ifdef:

1.      It is easy to see when a given function is included.
        As it is, there are runs of many hundreds of lines of
        code where it might not be obvious to the casual reader
        which Kconfig options they depend on.

2.      Duplicated code is asking for silent rebase and merge errors.
        This can happen when a change to the function header collides
        with a change that duplicates the function under ifdef.

So perhaps I will eventually convert to the individual-ifdef style.
At the moment, I am in no hurry.

> More comments inlined in the code below,
> 
> > /*
> >  * Verify that double-free causes debug-objects to complain, but only
> >  * if CONFIG_DEBUG_OBJECTS_RCU_HEAD=y.  Otherwise, say that the test
> >  * cannot be carried out.
> >  */
> > static void rcu_test_debug_objects(void)
> > {
> > #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD
> >     struct rcu_head rh1;
> >     struct rcu_head rh2;
> > 
> >     init_rcu_head_on_stack(&rh1);
> >     init_rcu_head_on_stack(&rh2);
> >     pr_alert("rcutorture: WARN: Duplicate call_rcu() test starting.\n");
> >     preempt_disable(); /* Prevent preemption from interrupting test. */
> >     rcu_read_lock(); /* Make it impossible to finish a grace period. */
> >     call_rcu(&rh1, rcu_torture_leak_cb); /* Start grace period. */
> 
> Are we really "starting" a grace period? If rcu_test_debug_objects() is
> executed after some callbacks are already queued, are we, by definition,
> "starting" the grace period?
> 
> Also, I find it weird to have, in that order:
> 
> 1) preempt_disable()
> 2) rcu_read_lock()
> 3) local_irq_disable()
> 
> I would rather expect:
> 
> 1) rcu_read_lock()
> 2) preempt_disable()
> 3) local_irq_disable()
> 
> So they come in increasing order of impact on the system: with
> non-preemptible RCU, the read lock and preempt_disable() mean the same
> thing; with preemptible RCU, however, the impact of preempt_disable()
> seems larger than that of the RCU read lock, since preemption remains
> enabled within an RCU read-side critical section. Both will work, but
> I find this call order slightly weird.

If this was code that ran normally, I would agree with you.  Your
suggested ordering would reduce the scheduling-latency impact due to
this code, among other things.

However, this code is a one-off that runs only once during an rcutorture
run that explicitly calls for it using rcutorture's new object_debug
module parameter.
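
(For reference, object_debug is declared in the usual module_param() way,
roughly as follows -- the exact type, permissions, and description string
may differ from what is in the actual patch:

	static int object_debug;
	module_param(object_debug, int, 0444);
	MODULE_PARM_DESC(object_debug, "Test debug-objects handling of duplicate call_rcu()");

so the test defaults to off and is requested with something like
"modprobe rcutorture object_debug=1", or rcutorture.object_debug=1 on the
kernel command line for the built-in case.)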

> Also, if your goal is to increase the chances that call_rcu() enqueues
> both callbacks into the same grace period, you might want to issue an
> rcu_barrier() early in this function, so that no earlier callbacks are
> still pending when the two test callbacks are enqueued.

The problem is that neither rcu_barrier() nor synchronize_rcu() clean
up at a well-defined point in time.  In both cases, the task will be
awakened at some time after the operation completes, by which time
any number of new grace periods might have started and completed.

> However, if you care about testing enqueue into same _and_ different
> grace periods, you might want to turn this single-shot test into a
> stress-test by calling it repeatedly.

Good point, and I might do that.  However, the above code has not yet
failed to produce a debug-objects splat.  When and if it does fail to
splat, then running the code repeatedly would definitely be one of the
fixes that I would consider.  (Another potential fix being to simply load
the rcutorture module a few times in succession, though your suggestion
would in fact work better for my current testing setup.)
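
If it does come to that, the repeat would likely be nothing fancier than
a loop around the existing function, along these lines (the repetition
count below is a hypothetical knob, not something in the current patch):

	int i;

	/* Hypothetical: repeat the duplicate-callback test several times. */
	for (i = 0; i < object_debug_reps; i++)
		rcu_test_debug_objects();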

Thank you for your review and comments!

                                                        Thanx, Paul

> Thanks!
> 
> Mathieu
> 
> >     local_irq_disable(); /* Make it harder to start a new grace period. */
> >     call_rcu(&rh2, rcu_torture_leak_cb);
> >     call_rcu(&rh2, rcu_torture_err_cb); /* Duplicate callback. */
> >     local_irq_enable();
> >     rcu_read_unlock();
> >     preempt_enable();
> >     rcu_barrier();
> >     pr_alert("rcutorture: WARN: Duplicate call_rcu() test complete.\n");
> >     destroy_rcu_head_on_stack(&rh1);
> >     destroy_rcu_head_on_stack(&rh2);
> > #else /* #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */
> > 	pr_alert("rcutorture: !CONFIG_DEBUG_OBJECTS_RCU_HEAD, not testing duplicate call_rcu()\n");
> > #endif /* #else #ifdef CONFIG_DEBUG_OBJECTS_RCU_HEAD */
> > }
> > 
> > > > +
> > > >  static int __init
> > > >  rcu_torture_init(void)
> > > >  {
> > > > @@ -2163,6 +2206,8 @@ rcu_torture_init(void)
> > > >                 firsterr = retval;
> > > >                 goto unwind;
> > > >         }
> > > > +       if (object_debug)
> > > > +               rcu_test_debug_objects();
> > > >         rcutorture_record_test_transition();
> > > >         mutex_unlock(&fullstop_mutex);
> > > >         return 0;
> > > 
> > 
> 
> -- 
> Mathieu Desnoyers
> EfficiOS Inc.
> http://www.efficios.com
> 
