On Mon, 12 Feb 2018, Yang Shi wrote:
> On 2/12/18 8:25 AM, Thomas Gleixner wrote:
> > On Tue, 6 Feb 2018, Yang Shi wrote:
> > > + /*
> > > +  * Reuse objs from the global free list; they will be reinitialized
> > > +  * when allocated
> > > +  */
> > > + while (obj_nr_tofree > 0 && (obj_pool_free < obj_pool_min_free)) {
> > > +         raw_spin_lock_irqsave(&pool_lock, flags);
> > > +         obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
> > This is racy vs. the worker thread. Assume obj_nr_tofree = 1:
> > 
> > CPU0                                        CPU1
> > worker
> >     lock(&pool_lock);                       while (obj_nr_tofree > 0 && ...) {
> >       obj = hlist_entry(obj_to_free);         lock(&pool_lock);
> >       hlist_del(obj);
> >       obj_nr_tofree--;
> >       ...
> >     unlock(&pool_lock);
> >                                               obj = hlist_entry(obj_to_free);
> >                                               hlist_del(obj); <------- NULL pointer dereference
> > 
> > Not what you want, right? The counter or the list head needs to be rechecked
> > after the lock is acquired.
> 
> Yes, you are right. Will fix the race in a newer version.

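For the archives, a minimal sketch of the recheck pattern discussed above. It
reuses the identifiers from the quoted hunk (pool_lock, obj_to_free,
obj_nr_tofree, obj_pool, obj_pool_free, obj_pool_min_free) and assumes the
surrounding function already provides the local variables obj and flags; it
is an illustration of the fix, not necessarily the exact code that ended up
in the tree:

	/*
	 * Refill obj_pool from the global obj_to_free list. The lockless
	 * check of obj_nr_tofree in the loop condition is only a hint.
	 */
	while (obj_nr_tofree && (obj_pool_free < obj_pool_min_free)) {
		raw_spin_lock_irqsave(&pool_lock, flags);
		/*
		 * Recheck with the lock held, as the worker thread might
		 * have won the race and emptied the free list already.
		 */
		if (obj_nr_tofree) {
			obj = hlist_entry(obj_to_free.first, typeof(*obj), node);
			hlist_del(&obj->node);
			obj_nr_tofree--;
			hlist_add_head(&obj->node, &obj_pool);
			obj_pool_free++;
		}
		raw_spin_unlock_irqrestore(&pool_lock, flags);
	}

The point is simply that hlist_entry()/hlist_del() only run if the counter is
still non-zero after pool_lock has been taken, which closes the window shown
in the diagram above.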
I fixed up all the minor issues with this series and applied it to:

   git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git core/debugobjects

Please double check the result.

Thanks,

        tglx
