On Thu, Aug 15, 2019 at 10:18:42AM -0400, Joel Fernandes (Google) wrote:
> list_for_each_entry_rcu now has support to check for RCU reader sections
> as well as lock. Just use the support in it, instead of explicitly
> checking in the caller.

...

>  #define assert_rcu_or_wq_mutex_or_pool_mutex(wq)                     \
>       RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&                       \
>                        !lockdep_is_held(&wq->mutex) &&                \

Can't you also get rid of this macro?

It's used in one place:

static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
                                                  int node)
{
        assert_rcu_or_wq_mutex_or_pool_mutex(wq);

        /*
         * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
         * delayed item is pending.  The plan is to keep CPU -> NODE
         * mapping valid and stable across CPU on/offlines.  Once that
         * happens, this workaround can be removed.
         */
        if (unlikely(node == NUMA_NO_NODE))
                return wq->dfl_pwq;

        return rcu_dereference_raw(wq->numa_pwq_tbl[node]);
}

Shouldn't we delete that assert and use something like:

+       return rcu_dereference_check(wq->numa_pwq_tbl[node],
+                       lockdep_is_held(&wq->mutex) ||
+                       lockdep_is_held(&wq_pool_mutex));
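
i.e. (untested sketch, just to illustrate) the function would end up as:

```c
static struct pool_workqueue *unbound_pwq_by_node(struct workqueue_struct *wq,
						  int node)
{
	/*
	 * XXX: @node can be NUMA_NO_NODE if CPU goes offline while a
	 * delayed item is pending.  The plan is to keep CPU -> NODE
	 * mapping valid and stable across CPU on/offlines.  Once that
	 * happens, this workaround can be removed.
	 */
	if (unlikely(node == NUMA_NO_NODE))
		return wq->dfl_pwq;

	/*
	 * rcu_dereference_check() already ORs rcu_read_lock_held() into
	 * the condition, so the RCU half of the deleted assert comes for
	 * free; only the two mutexes need to be spelled out.
	 */
	return rcu_dereference_check(wq->numa_pwq_tbl[node],
			lockdep_is_held(&wq->mutex) ||
			lockdep_is_held(&wq_pool_mutex));
}
```

The NUMA_NO_NODE early return would no longer go through the lockdep check at all, but that path doesn't dereference an RCU-protected pointer anyway.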
