On Mon, Aug 05, 2019 at 10:58:13PM +0800, Boqun Feng wrote:
> On Mon, Aug 05, 2019 at 10:43:18PM +0800, Boqun Feng wrote:
> > On Mon, Aug 05, 2019 at 04:02:41PM +0200, Peter Zijlstra wrote:
> > [...]
> > >  
> > >  static inline void percpu_up_read(struct percpu_rw_semaphore *sem)
> > >  {
> > > + rwsem_release(&sem->dep_map, 1, _RET_IP_);
> > > +
> > >   preempt_disable();
> > >   /*
> > >    * Same as in percpu_down_read().
> > >    */
> > > - if (likely(rcu_sync_is_idle(&sem->rss)))
> > > + if (likely(rcu_sync_is_idle(&sem->rss))) {
> > >           __this_cpu_dec(*sem->read_count);
> > > - else
> > > -         __percpu_up_read(sem); /* Unconditional memory barrier */
> > > - preempt_enable();
> > > +         preempt_enable();
> > > +         return;
> > > + }
> > >  
> > > - rwsem_release(&sem->rw_sem.dep_map, 1, _RET_IP_);
> > 
> > Missing a preempt_enable() here?
> > 
> 
> Ah.. you modified the semantics of __percpu_up_read() to imply a
> preempt_enable(), sorry for the noise...

Yes indeed; I suppose I should've noted that in the Changelog. The
reason is that waitqueues use spin_lock(), which turns into a sleepable
lock on RT and thus cannot be used with preemption disabled. We also
cannot (easily) switch to swait because we use both exclusive and
!exclusive waits.
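
For clarity, the reworked slow path then ends up looking roughly like
this (a sketch only; the waitqueue field name and the exact barrier
comments are my guesses from the discussion, not the actual patch):

void __percpu_up_read(struct percpu_rw_semaphore *sem)
{
	smp_mb(); /* B matches C */
	/*
	 * If the writer sees our decrement, it must also see our
	 * critical section.
	 */
	__this_cpu_dec(*sem->read_count);

	/*
	 * Re-enable preemption before touching the waitqueue;
	 * wait_queue_head uses spin_lock(), which becomes a
	 * sleepable lock on RT.
	 */
	preempt_enable();

	/* Prod the writer to re-evaluate the reader count. */
	wake_up(&sem->waiters);
}

That is, every caller does preempt_disable() first and relies on
__percpu_up_read() to re-enable preemption once the per-CPU decrement
is done.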
