The patch below does not apply to the 3.14-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id, to <[email protected]>.

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

From 789cbbeca4eb7141cbd748ee93772471101b507b Mon Sep 17 00:00:00 2001
From: Joe Lawrence <[email protected]>
Date: Sun, 5 Oct 2014 13:24:21 -0400
Subject: [PATCH] workqueue: Add quiescent state between work items

Similar to the stop_machine deadlock scenario on !PREEMPT kernels
addressed in b22ce2785d97 "workqueue: cond_resched() after processing
each work item", kworker threads requeueing back-to-back with zero jiffy
delay can stall RCU. The cond_resched() call introduced in that fix
yields only if there are other higher-priority tasks to run, so force a
quiescent RCU state between work items.

Signed-off-by: Joe Lawrence <[email protected]>
Link: https://lkml.kernel.org/r/[email protected]
Link: https://lkml.kernel.org/r/[email protected]
Fixes: b22ce2785d97 ("workqueue: cond_resched() after processing each work item")
Cc: <[email protected]>
Acked-by: Tejun Heo <[email protected]>
Signed-off-by: Paul E. McKenney <[email protected]>

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5dbe22aa3efd..345bec95e708 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -2043,8 +2043,10 @@ __acquires(&pool->lock)
         * kernels, where a requeueing work item waiting for something to
         * happen could deadlock with stop_machine as such work item could
         * indefinitely requeue itself while all other CPUs are trapped in
-        * stop_machine.
+        * stop_machine. At the same time, report a quiescent RCU state so
+        * the same condition doesn't freeze RCU.
         */
+       rcu_note_voluntary_context_switch(current);
        cond_resched();
 
        spin_lock_irq(&pool->lock);
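
For context, the stall described in the commit message comes from a work
item that requeues itself with zero delay, so the kworker runs it
back-to-back and, on a !PREEMPT kernel with no higher-priority tasks,
never passes through an RCU quiescent state. A hypothetical sketch of
that pattern (the function and work item names are illustrative, not
from the patch):

	#include <linux/workqueue.h>

	/*
	 * Illustrative only: each execution immediately requeues the
	 * work item, so the kworker loops on it indefinitely. Without
	 * the rcu_note_voluntary_context_switch() added above, the
	 * cond_resched() between items reports no quiescent state when
	 * nothing higher priority is runnable, and RCU can stall.
	 */
	static struct work_struct poll_work;

	static void poll_fn(struct work_struct *work)
	{
		/* ... poll some device status ... */

		/* Requeue with zero delay; runs back-to-back. */
		queue_work(system_wq, work);
	}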
