3.10-stable review patch. If anyone has any objections, please let me know.

--

From: Stephen Boyd sb...@codeaurora.org

commit 40fea92ffb5fa0ef26d10ae0fe5688bc8e61c791 upstream.

pm_qos_update_request_timeout() updates a qos and then schedules
a delayed work item to bring the qos back down to the default
after the timeout. When the work item runs, pm_qos_work_fn() will
call pm_qos_update_request() and deadlock because it tries to
cancel itself via cancel_delayed_work_sync().
On Tuesday, August 13, 2013 06:13:25 PM Tejun Heo wrote:
> Hello,
>
> On Tue, Aug 13, 2013 at 02:12:40PM -0700, Stephen Boyd wrote:
> > @@ -308,7 +319,7 @@ static void pm_qos_work_fn(struct work_struct *work)
> > 						  struct pm_qos_request,
> > 						  work);
> >
> > -
On 08/13, Rafael J. Wysocki wrote:
> On Tuesday, August 13, 2013 01:01:46 PM Tejun Heo wrote:
> > Hello,
> >
> > On Tue, Aug 13, 2013 at 09:46:26AM -0700, Stephen Boyd wrote:
> > > >> +	if (PM_QOS_DEFAULT_VALUE != req->node.prio)
> > > >> +		pm_qos_update_target(
> > > >> +			pm_qos_array[req->pm_qos_class]->constraints,
> > > >> +			&req->node, PM_QOS_UPDATE_REQ,
> > > >> +
On 08/13/13 09:43, Tejun Heo wrote:
> Hello, Stephen.
>
> On Thu, Aug 08, 2013 at 01:13:57PM -0700, Stephen Boyd wrote:
> > pm_qos_update_request_timeout() updates a qos and then schedules
> > a delayed work item to bring the qos back down to the default
> > after the timeout. When the work item runs, pm_qos_work_fn() will
> > call
On 08/08/13 13:13, Stephen Boyd wrote:
> pm_qos_update_request_timeout() updates a qos and then schedules
> a delayed work item to bring the qos back down to the default
> after the timeout. When the work item runs, pm_qos_work_fn() will
> call pm_qos_update_request() and deadlock because it tries to
From: Michael Büsch m...@bues.ch
---
This is a commit scheduled for the next v2.6.34 longterm release.
http://git.kernel.org/?p=linux/kernel/git/paulg/longterm-queue-2.6.34.git
If you see a problem with using this for longterm, please comment.
On Monday, 11 December 2006 07:52, Dipankar Sarma wrote:
> On Sun, Dec 10, 2006 at 03:18:38PM +0100, Rafael J. Wysocki wrote:
> > On Sunday, 10 December 2006 13:16, Andrew Morton wrote:
> > > On Sun, 10 Dec 2006 12:49:43 +0100
> >
> > Hm, currently we're using the CPU hotplug to disable the nonboot CPUs
On Sun, Dec 10, 2006 at 03:18:38PM +0100, Rafael J. Wysocki wrote:
> On Sunday, 10 December 2006 13:16, Andrew Morton wrote:
> > On Sun, 10 Dec 2006 12:49:43 +0100
>
> Hm, currently we're using the CPU hotplug to disable the nonboot CPUs before
> the freezer is called. ;-)
>
> However, we're now trying
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> > > > > > {
> > > > > > 	int cpu = raw_smp_processor_id();
> > > > > > 	/*
> > > > > > 	 * Interrupts/softirqs are hotplug-safe:
> > > > > > 	 */
> > > > > > 	if (in_interrupt())
> > > > > > 		return;
> On Mon, 11 Dec 2006 11:15:45 +0530 Srivatsa Vaddagiri <[EMAIL PROTECTED]>
> wrote:
> On Sun, Dec 10, 2006 at 04:16:00AM -0800, Andrew Morton wrote:
> > One quite different way of addressing all of this is to stop using
> > stop_machine_run() for hotplug synchronisation and switch to the swsusp
On Sun, Dec 10, 2006 at 04:16:00AM -0800, Andrew Morton wrote:
> One quite different way of addressing all of this is to stop using
> stop_machine_run() for hotplug synchronisation and switch to the swsusp
> freezer infrastructure: all kernel threads and user processes need to stop
> and park
On Sun, Dec 10, 2006 at 09:26:16AM +0100, Ingo Molnar wrote:
> something like the pseudocode further below - when applied to a data
> structure it has semantics and scalability close to that of
> preempt_disable(), but it is still preemptible and the lock is specific.

Ingo,

The pseudo-code
On Sat, 9 Dec 2006 11:26:52 +0100
Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > > > +	if (cpu != -1)
> > > > +		mutex_lock(&workqueue_mutex);
> > >
> > > events/4 thread itself wanting the same mutex above?
> >
> > Could
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> > > +	if (cpu != -1)
> > > +		mutex_lock(&workqueue_mutex);
> >
> > events/4 thread itself wanting the same mutex above?
>
> Could do, not sure. I'm planning on converting all the locking around
> here to preempt_disable() though.
On Thu, Dec 07, 2006 at 08:54:07PM -0800, Andrew Morton wrote:
> Could do, not sure.

AFAICS it will deadlock for sure.

> I'm planning on converting all the locking around here
> to preempt_disable() though.

Will look forward to that patch. It's hard to dance around w/o a
lock_cpu_hotplug() ..:)
On Fri, 8 Dec 2006 08:23:01 +0530
Srivatsa Vaddagiri <[EMAIL PROTECTED]> wrote:
> On Thu, Dec 07, 2006 at 11:37:00AM -0800, Andrew Morton wrote:
> > -static void flush_cpu_workqueue(struct cpu_workqueue_struct *cwq)
> > +/*
> > + * If cpu == -1 it's a single-threaded workqueue and the caller does not hold
> > + * workqueue_mutex
> > + */
> > +static void flush_cpu_workqueue(struct
On Thu, 7 Dec 2006 10:51:48 -0800
Andrew Morton <[EMAIL PROTECTED]> wrote:
> + if (!cpu_online(cpu)) /* oops, CPU got unplugged */
> + goto bail;
hm, actually we can pull the same trick with flush_scheduled_work().
Should fix quite a few things...
On Wed, 6 Dec 2006 17:26:14 -0700
Bjorn Helgaas <[EMAIL PROTECTED]> wrote:
> I'm seeing a workqueue-related deadlock. This is on an ia64
> box running SLES10, but it looks like the same problem should
> be possible in current upstream on any architecture.
>
> Here are the two tasks involved:
On Thu, Dec 07, 2006 at 11:47:01AM +0530, Srivatsa Vaddagiri wrote:
> - Make it rw-sem

I think rw-sems also were shown to hit deadlocks (a recursive read-lock
attempt deadlocks when a writer comes between the two read attempts by the
same thread). So the suggestion below only seems to make sense
On Wed, Dec 06, 2006 at 05:26:14PM -0700, Bjorn Helgaas wrote:
> loadkeys is holding the cpu_hotplug lock (acquired in flush_workqueue())
> and waiting in flush_cpu_workqueue() until the cpu_workqueue drains.
>
> But events/4 is responsible for draining it, and it is blocked waiting
> to acquire
I'm seeing a workqueue-related deadlock. This is on an ia64
box running SLES10, but it looks like the same problem should
be possible in current upstream on any architecture.

Here are the two tasks involved:

events/4:
schedule
__down
__lock_cpu_hotplug
lock_cpu_hotplug