On Fri, 20 Jun 2025 10:15:42 +0200
Peter Zijlstra <pet...@infradead.org> wrote:

> On Tue, Jun 10, 2025 at 08:54:29PM -0400, Steven Rostedt wrote:
> 
> 
> >  void unwind_deferred_cancel(struct unwind_work *work)
> >  {
> > +   struct task_struct *g, *t;
> > +
> >     if (!work)
> >             return;
> >  
> >     guard(mutex)(&callback_mutex);
> >     list_del(&work->list);
> > +
> > +   clear_bit(work->bit, &unwind_mask);  
> 
> atomic bitop

Yeah, it just seemed cleaner than: unwind_mask &= ~BIT(work->bit);

The atomic op isn't needed, as every update of unwind_mask is done
while holding callback_mutex.
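
To make that concrete, here's a minimal sketch (not the actual patch;
unwind_release_bit() is a made-up helper name) of why the plain
read-modify-write is safe: every writer of the global unwind_mask
holds callback_mutex, so there is no concurrent update to race with:

	static unsigned long unwind_mask;
	static DEFINE_MUTEX(callback_mutex);

	/* Hypothetical helper mirroring the clear in unwind_deferred_cancel() */
	static void unwind_release_bit(struct unwind_work *work)
	{
		guard(mutex)(&callback_mutex);
		/* Plain RMW; callback_mutex serializes all writers of unwind_mask */
		unwind_mask &= ~BIT(work->bit);
	}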

> 
> > +
> > +   guard(rcu)();
> > +   /* Clear this bit from all threads */
> > +   for_each_process_thread(g, t) {
> > +           clear_bit(work->bit, &t->unwind_info.unwind_mask);
> > +   }
> >  }
> >  
> >  int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func)
> > @@ -256,6 +278,14 @@ int unwind_deferred_init(struct unwind_work *work, unwind_callback_t func)
> >     memset(work, 0, sizeof(*work));
> >  
> >     guard(mutex)(&callback_mutex);
> > +
> > +   /* See if there's a bit in the mask available */
> > +   if (unwind_mask == ~0UL)
> > +           return -EBUSY;
> > +
> > +   work->bit = ffz(unwind_mask);
> > +   unwind_mask |= BIT(work->bit);  
> 
> regular or
> 
> > +
> >     list_add(&work->list, &callbacks);
> >     work->func = func;
> >     return 0;
> > @@ -267,6 +297,7 @@ void unwind_task_init(struct task_struct *task)
> >  
> >     memset(info, 0, sizeof(*info));
> >     init_task_work(&info->work, unwind_deferred_task_work);
> > +   info->unwind_mask = 0;
> >  }  
> 
> Which is somewhat inconsistent;
> 
>   __clear_bit()/__set_bit()

Hmm, are the above non-atomic?

> 
> or:
> 
>   unwind_mask &= ~BIT() / unwind_mask |= BIT()

Although, because the update is always guarded by callback_mutex, the
plain C operators may be the better approach, as they make it obvious
that no atomics are needed.
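
For illustration, the consistent non-atomic pairing would look
something like this (a sketch using the names from the patch, not the
final code):

	/* unwind_deferred_init(), under callback_mutex: allocate a free bit */
	work->bit = ffz(unwind_mask);
	unwind_mask |= BIT(work->bit);

	/* unwind_deferred_cancel(), under callback_mutex: release that bit */
	unwind_mask &= ~BIT(work->bit);

The per-task clear_bit() in the cancel loop presumably stays atomic,
since a task can update its own unwind_info.unwind_mask without taking
callback_mutex.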

-- Steve

