On Thu, Apr 24, 2014 at 10:48:32AM -0400, Tejun Heo wrote:
> On Thu, Apr 24, 2014 at 04:37:34PM +0200, Frederic Weisbecker wrote:
> > +static int apply_workqueue_attrs_locked(struct workqueue_struct *wq,
> > +                                   const struct workqueue_attrs *attrs)
> >  {
> >     struct workqueue_attrs *new_attrs, *tmp_attrs;
> >     struct pool_workqueue **pwq_tbl, *dfl_pwq;
> > @@ -3976,15 +3960,6 @@ int apply_workqueue_attrs(struct workqueue_struct *wq,
> >     copy_workqueue_attrs(tmp_attrs, new_attrs);
> >  
> >     /*
> > -    * CPUs should stay stable across pwq creations and installations.
> > -    * Pin CPUs, determine the target cpumask for each node and create
> > -    * pwqs accordingly.
> > -    */
> > -   get_online_cpus();
> > -
> > -   mutex_lock(&wq_pool_mutex);
> 
> lockdep_assert_held()

Not sure... only a small part of the function actually needs the lock held,
namely the parts doing the pwq allocations, and those already have the
lockdep_assert_held().