Re: [PATCH tip/core/rcu 25/41] workqueue: Replace call_rcu_sched() with call_rcu()
On Tue, Nov 13, 2018 at 07:48:03AM -0800, Tejun Heo wrote:
> On Sun, Nov 11, 2018 at 11:43:54AM -0800, Paul E. McKenney wrote:
> > Now that call_rcu()'s callback is not invoked until after all
> > preempt-disable regions of code have completed (in addition to explicitly
> > marked RCU read-side critical sections), call_rcu() can be used in place
> > of call_rcu_sched().  This commit therefore makes that change.
> >
> > Signed-off-by: Paul E. McKenney
> > Cc: Tejun Heo
> > Cc: Lai Jiangshan
>
> Acked-by: Tejun Heo

Applied all four, thank you!

							Thanx, Paul
Re: [PATCH tip/core/rcu 25/41] workqueue: Replace call_rcu_sched() with call_rcu()
On Sun, Nov 11, 2018 at 11:43:54AM -0800, Paul E. McKenney wrote:
> Now that call_rcu()'s callback is not invoked until after all
> preempt-disable regions of code have completed (in addition to explicitly
> marked RCU read-side critical sections), call_rcu() can be used in place
> of call_rcu_sched().  This commit therefore makes that change.
>
> Signed-off-by: Paul E. McKenney
> Cc: Tejun Heo
> Cc: Lai Jiangshan

Acked-by: Tejun Heo

Thanks.

-- 
tejun
[PATCH tip/core/rcu 25/41] workqueue: Replace call_rcu_sched() with call_rcu()
Now that call_rcu()'s callback is not invoked until after all
preempt-disable regions of code have completed (in addition to explicitly
marked RCU read-side critical sections), call_rcu() can be used in place
of call_rcu_sched().  This commit therefore makes that change.

Signed-off-by: Paul E. McKenney
Cc: Tejun Heo
Cc: Lai Jiangshan
---
 kernel/workqueue.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 0280deac392e..392be4b252f6 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -3396,7 +3396,7 @@ static void put_unbound_pool(struct worker_pool *pool)
 	del_timer_sync(&pool->mayday_timer);
 
 	/* sched-RCU protected to allow dereferences from get_work_pool() */
-	call_rcu_sched(&pool->rcu, rcu_free_pool);
+	call_rcu(&pool->rcu, rcu_free_pool);
 }
 
 /**
@@ -3503,14 +3503,14 @@ static void pwq_unbound_release_workfn(struct work_struct *work)
 	put_unbound_pool(pool);
 	mutex_unlock(&wq_pool_mutex);
 
-	call_rcu_sched(&pwq->rcu, rcu_free_pwq);
+	call_rcu(&pwq->rcu, rcu_free_pwq);
 
 	/*
 	 * If we're the last pwq going away, @wq is already dead and no one
 	 * is gonna access it anymore.  Schedule RCU free.
 	 */
 	if (is_last)
-		call_rcu_sched(&wq->rcu, rcu_free_wq);
+		call_rcu(&wq->rcu, rcu_free_wq);
 }
 
 /**
@@ -4195,7 +4195,7 @@ void destroy_workqueue(struct workqueue_struct *wq)
 		 * The base ref is never dropped on per-cpu pwqs.  Directly
 		 * schedule RCU free.
 		 */
-		call_rcu_sched(&wq->rcu, rcu_free_wq);
+		call_rcu(&wq->rcu, rcu_free_wq);
 	} else {
 		/*
 		 * We're the sole accessor of @wq at this point.  Directly
-- 
2.17.1