On Thu, 2019-01-10 at 20:44 +0100, Peter Zijlstra wrote:
> On Thu, Jan 10, 2019 at 10:51:27AM -0800, Bart Van Assche wrote:
> > On Thu, 2019-01-10 at 16:24 +0100, Peter Zijlstra wrote:
> > >  /*
> > >   * A data structure for delayed freeing of data structures that may be
> > > - * accessed by RCU readers at the time these were freed. The size of the array
> > > - * is a compromise between minimizing the amount of memory used by this array
> > > - * and minimizing the number of wait_event() calls by get_pending_free_lock().
> > > + * accessed by RCU readers at the time these were freed.
> > >   */
> > >  static struct pending_free {
> > > - struct list_head zapped_classes;
> > >   struct rcu_head  rcu_head;
> > > + int              index;
> > >   int              pending;
> > > -} pending_free[2];
> > > -static DECLARE_WAIT_QUEUE_HEAD(rcu_cb);
> > > + struct list_head zapped[2];
> > > +} pending_free;
> > 
> > Hi Peter,
> > 
> > If the zapped[] array only has two elements there is no guarantee that an
> > element will be free when zap_class() is called. I think we need at least
> > num_online_cpus() elements to guarantee that at least one element is free
> > when zap_class() is called. So removing the wait loop from
> > get_pending_free_lock() seems wrong to me. Have you tried to run a workload
> > that keeps all CPUs busy and that triggers get_pending_free_lock()
> > frequently?
> 
> I have not run it (yet), but I do not quite follow your argument. There is
> only a single rcu_head, yes? Hence only a single list can be pending
> at any one time, and the other list is free to be appended to during
> this time -- all is serialized by the graph lock after all.
> 
> When the rcu callback happens, we flush the list we started the QS for,
> which then becomes empty; if the open list contains entries, we
> flip the two and requeue the rcu_head for another QS.
> 
> Therefore we only ever need two lists: one closed, with entries waiting for
> the callback, and one open, to which we can append all newly freed entries.

Hi Peter,

Now that I have had a closer look at your patch, I think its approach is
fine. Sorry for the confusion.
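
For the archive, here is a minimal sketch of the two-list scheme as I now
understand it. The struct matches your patch; schedule_free(),
free_zapped_classes() and the free_list() helper are my own reconstruction
and naming, not the actual code, and graph lock error handling is elided:

#include <linux/list.h>
#include <linux/rcupdate.h>

static void free_zapped_classes(struct rcu_head *head);

static struct pending_free {
	struct rcu_head  rcu_head;
	int              index;   /* which zapped[] list is open */
	int              pending; /* RCU callback in flight? */
	struct list_head zapped[2];
} pending_free;

/*
 * Called with the graph lock held, e.g. from zap_class(): append to the
 * open list and start a grace period if none is in flight.
 */
static void schedule_free(struct list_head *entry)
{
	list_add_tail(entry, &pending_free.zapped[pending_free.index]);
	if (!pending_free.pending) {
		pending_free.pending = 1;
		/* Close the current list and open the empty one. */
		pending_free.index ^= 1;
		call_rcu(&pending_free.rcu_head, free_zapped_classes);
	}
}

/*
 * RCU callback: free the closed list; if the open list filled up in the
 * meantime, flip the two and wait for another QS. free_list() stands in
 * for whatever actually frees the zapped entries.
 */
static void free_zapped_classes(struct rcu_head *head)
{
	graph_lock();
	free_list(&pending_free.zapped[pending_free.index ^ 1]);
	if (!list_empty(&pending_free.zapped[pending_free.index])) {
		pending_free.index ^= 1;
		call_rcu(&pending_free.rcu_head, free_zapped_classes);
	} else {
		pending_free.pending = 0;
	}
	graph_unlock();
}

In other words, a new QS is only started when none is pending, so at most
one list is ever closed at a time and two lists indeed suffice, independent
of the number of CPUs.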

Bart.
