* Paolo Bonzini ([email protected]) wrote:
> > Just to let you know that I pushed two updates into urcu: one fixes a
> > grace period hang caused by a missing wakeup in the synchronize_rcu
> > QSBR code. We appear to hit it because of the more fine-grained wakeup
> > code introduced by Paolo. The wakeup was genuinely missing from the
> > synchronize_rcu code, so Paolo's change merely exposed a pre-existing
> > problem. The effect is worth knowing about: grace periods are delayed
> > forever. The problem never made it into a release (I caught it
> > beforehand).
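
For readers skimming the thread, the failure mode reduces to roughly the
sketch below. The store, barrier and wake_up_gp() call match the patch
further down; the waiter loop is a hypothetical condensation of the real
update-side code (all_readers_quiescent() is an invented helper name,
wait_gp() stands for the futex wait):

        /* Thread entering an extended quiescent state ("offline"): */
        CMM_STORE_SHARED(rcu_reader.ctr, 0);    /* announce quiescence */
        cmm_smp_mb();   /* write rcu_reader.ctr before read futex */
        wake_up_gp();   /* omitted -> a sleeping waiter sleeps forever */

        /* Waiter in synchronize_rcu(), hypothetical condensation: */
        while (!all_readers_quiescent())
                wait_gp();      /* block on the grace-period futex */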
> 
> Good catch.  Why not use rcu_thread_offline/online in synchronize_rcu,
> instead of touching rcu_reader.ctr directly?  I had this in my QEMU
> branch but hadn't posted it yet, since it was meant as a cleanup only.
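
For reference, the helpers Paolo suggests do roughly the following,
paraphrased from the inline code his patch removes (see the hunks
below); this is a sketch, not the verbatim urcu implementation:

        static inline void thread_offline_sketch(void)
        {
                cmm_smp_mb();   /* order prior accesses before offlining */
                CMM_STORE_SHARED(rcu_reader.ctr, 0);    /* mark quiescent */
                cmm_smp_mb();   /* write rcu_reader.ctr before read futex */
                wake_up_gp();   /* wake any synchronize_rcu() waiter */
        }

        static inline void thread_online_sketch(void)
        {
                _CMM_STORE_SHARED(rcu_reader.ctr,
                                  CMM_LOAD_SHARED(rcu_gp_ctr));
                cmm_smp_mb();   /* ...before reading shared data again */
        }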

Yep. I think that in the initial QSBR versions, inlining this let us
remove one barrier, but if I recall correctly it was only a compiler
barrier. So I agree: it is better to make the code easier to maintain
than to save a barrier that is likely effect-less anyway (it is
immediately followed by a mutex lock and preceded by a mutex unlock).
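
Concretely, the patched sequence in synchronize_rcu() becomes the
following (paraphrased from the diff below, was_online case), which is
why the barrier saved by the old inlining was likely effect-less: it
sat right next to the mutex operations.

        rcu_thread_offline();           /* store + barriers + wakeup */
        mutex_lock(&rcu_gp_lock);       /* ordering point right after */
        /* ... wait for readers to pass a quiescent state ... */
        mutex_unlock(&rcu_gp_lock);     /* ordering point right before */
        rcu_thread_online();            /* restore rcu_reader.ctr */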

Merged,

Thanks!

Mathieu

> 
> Paolo
> 
> ----------------------- 8< ---------------------
> 
> From 7ad6897f696034ef0651c912e43931a2b0bbe631 Mon Sep 17 00:00:00 2001
> From: Paolo Bonzini <[email protected]>
> Date: Mon, 12 Sep 2011 09:24:18 +0200
> Subject: [PATCH] urcu-qsbr: use rcu_thread_offline/rcu_thread_online instead 
> of inlining them
> 
> ---
>  urcu-qsbr.c |   40 +++++++++++++++++-----------------------
>  1 files changed, 17 insertions(+), 23 deletions(-)
> 
> diff --git a/urcu-qsbr.c b/urcu-qsbr.c
> index 1dc9979..1adaa94 100644
> --- a/urcu-qsbr.c
> +++ b/urcu-qsbr.c
> @@ -208,21 +208,17 @@ void synchronize_rcu(void)
>       was_online = rcu_reader.ctr;
>  
>       /* All threads should read qparity before accessing data structure
> -      * where new ptr points to.
> -      */
> -     /* Write new ptr before changing the qparity */
> -     cmm_smp_mb();
> -
> -     /*
> +      * where new ptr points to.  In the "then" case, rcu_thread_offline
> +      * includes a memory barrier.
> +      *
>        * Mark the writer thread offline to make sure we don't wait for
>        * our own quiescent state. This allows using synchronize_rcu()
>        * in threads registered as readers.
>        */
> -     if (was_online) {
> -             CMM_STORE_SHARED(rcu_reader.ctr, 0);
> -             cmm_smp_mb();   /* write rcu_reader.ctr before read futex */
> -             wake_up_gp();
> -     }
> +     if (was_online)
> +             rcu_thread_offline();
> +     else
> +             cmm_smp_mb();
>  
>       mutex_lock(&rcu_gp_lock);
>  
> @@ -263,9 +259,9 @@ out:
>        * freed.
>        */
>       if (was_online)
> -             _CMM_STORE_SHARED(rcu_reader.ctr,
> -                               CMM_LOAD_SHARED(rcu_gp_ctr));
> -     cmm_smp_mb();
> +             rcu_thread_online();
> +     else
> +             cmm_smp_mb();
>  }
>  #else /* !(CAA_BITS_PER_LONG < 64) */
>  void synchronize_rcu(void)
> @@ -279,12 +275,10 @@ void synchronize_rcu(void)
>        * our own quiescent state. This allows using synchronize_rcu()
>        * in threads registered as readers.
>        */
> -     cmm_smp_mb();
> -     if (was_online) {
> -             CMM_STORE_SHARED(rcu_reader.ctr, 0);
> -             cmm_smp_mb();   /* write rcu_reader.ctr before read futex */
> -             wake_up_gp();
> -     }
> +     if (was_online)
> +             rcu_thread_offline();
> +     else
> +             cmm_smp_mb();
>  
>       mutex_lock(&rcu_gp_lock);
>       if (cds_list_empty(&registry))
> @@ -294,9 +288,9 @@ out:
>       mutex_unlock(&rcu_gp_lock);
>  
>       if (was_online)
> -             _CMM_STORE_SHARED(rcu_reader.ctr,
> -                               CMM_LOAD_SHARED(rcu_gp_ctr));
> -     cmm_smp_mb();
> +             rcu_thread_online();
> +     else
> +             cmm_smp_mb();
>  }
>  #endif  /* !(CAA_BITS_PER_LONG < 64) */
>  
> -- 
> 1.7.6
> 

-- 
Mathieu Desnoyers
Operating System Efficiency R&D Consultant
EfficiOS Inc.
http://www.efficios.com

