On Thu, Sep 26, 2013 at 06:14:26PM +0200, Oleg Nesterov wrote:
> On 09/26, Peter Zijlstra wrote:
> >
> > On Thu, Sep 26, 2013 at 05:53:21PM +0200, Oleg Nesterov wrote:
> > > On 09/26, Peter Zijlstra wrote:
> > > >  void cpu_hotplug_done(void)
> > > >  {
> > > > -       cpu_hotplug.active_writer = NULL;
> > > > -       mutex_unlock(&cpu_hotplug.lock);
> > > > +       /* Signal the writer is done, no fast path yet. */
> > > > +       __cpuhp_state = readers_slow;
> > > > +       wake_up_all(&cpuhp_readers);
> > > > +
> > > > +       /*
> > > > +        * The wait_event()/wake_up_all() prevents the race where the readers
> > > > +        * are delayed between fetching __cpuhp_state and blocking.
> > > > +        */
> > > > +
> > > > +       /* See percpu_up_write(); readers will no longer attempt to block. */
> > > > +       synchronize_sched();
> > >
> > > Shouldn't you move wake_up_all(&cpuhp_readers) down after
> > > synchronize_sched() (or add another one)? To ensure that a reader can't
> > > see state = BLOCK after wakeup().
> >
> > Well, if they are blocked, the wake_up_all() will do an actual
> > try_to_wake_up() which issues a MB as per smp_mb__before_spinlock().
> 
> Yes. Everything is fine with the already blocked readers.
> 
> I meant the new reader which can still see state = BLOCK after we
> do wakeup(); but I hadn't noticed that it does __wait_event(), which
> takes the lock unconditionally, so it must see the change after that.

Ah, because both __wake_up() and __wait_event()->prepare_to_wait() take
q->lock, the __wake_up() RELEASE pairs with the __wait_event() ACQUIRE,
creating the full barrier.
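
FWIW, a minimal userspace sketch of that pairing (not the kernel code;
the pthread mutex stands in for q->lock, the condvar broadcast for
wake_up_all(), and the state/enum names are made up for illustration):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

enum { readers_fast, readers_slow, readers_block };

static _Atomic int cpuhp_state = readers_block;
static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER; /* "q->lock" */
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;

/* Writer, modelled on cpu_hotplug_done(): publish the state, then wake. */
static void *writer(void *arg)
{
	(void)arg;
	/* Plain store in the kernel; relaxed here only to avoid a C11 data race. */
	atomic_store_explicit(&cpuhp_state, readers_slow, memory_order_relaxed);

	pthread_mutex_lock(&q_lock);     /* "__wake_up()" takes q->lock...    */
	pthread_cond_broadcast(&q_cond); /* ...wakes already-blocked readers  */
	pthread_mutex_unlock(&q_lock);   /* ...and RELEASEs q->lock           */
	return NULL;
}

/* Reader slow path, modelled on __wait_event()/prepare_to_wait(). */
static void *reader(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&q_lock);     /* ACQUIRE: pairs with the unlock above */
	while (atomic_load_explicit(&cpuhp_state,
				    memory_order_relaxed) == readers_block)
		pthread_cond_wait(&q_cond, &q_lock);
	pthread_mutex_unlock(&q_lock);

	puts("reader sees readers_slow and proceeds");
	return NULL;
}

int main(void)
{
	pthread_t r, w;

	pthread_create(&r, NULL, reader, NULL);
	pthread_create(&w, NULL, writer, NULL);
	pthread_join(r, NULL);
	pthread_join(w, NULL);
	return 0;
}

A reader that enters the wait path only after the writer's unlock must
take q_lock after that RELEASE, so it observes the new state and never
blocks on a stale readers_block.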
