Re: [PATCH] cpufreq: Fix timer/workqueue corruption due to double queueing

2013-08-27 Thread Viresh Kumar
On 28 August 2013 00:17, Stephen Boyd <sb...@codeaurora.org> wrote:
> Sure that works just as well. Here's a patch.
>
>  drivers/cpufreq/cpufreq_governor.c | 3 +++
>  1 file changed, 3 insertions(+)
>
> diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
> index 7b839a8..b9b20fd 100644
> --- a/drivers/cpufreq/cpufreq_governor.c
> +++ b/drivers/cpufreq/cpufreq_governor.c
> @@ -133,6 +133,9 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
>  {
>         int i;
>
> +       if (!policy->governor_enabled)
> +               return;
> +
>         if (!all_cpus) {
>                 __gov_queue_work(smp_processor_id(), dbs_data, delay);
>         } else {

Acked-by: Viresh Kumar <viresh.ku...@linaro.org>


[PATCH] cpufreq: Fix timer/workqueue corruption due to double queueing

2013-08-27 Thread Stephen Boyd
When a CPU is hot-removed we'll cancel all the delayed work items
via gov_cancel_work(). Normally this will just cancel a delayed
timer on each CPU that the policy is managing and the work won't
run, but if the work is already running, the workqueue code will
wait for the work to finish before continuing, to prevent the
work items from re-queuing themselves like they normally do. This
scheme will work most of the time, except for the case where the
work function determines that it should adjust the delay for all
other CPUs that the policy is managing. If this scenario occurs,
the canceling CPU will cancel its own work but queue up the other
CPUs' works to run. For example:

 CPU0                                      CPU1

 cpu_down()
  ...
  __cpufreq_remove_dev()
   cpufreq_governor_dbs()
    case CPUFREQ_GOV_STOP:
     gov_cancel_work(dbs_data, policy);
      cpu0 work is canceled
       timer is canceled
       cpu1 work is canceled              work runs
       waits for cpu1                     od_dbs_timer()
                                           gov_queue_work(*, *, true);
                                            cpu0 work queued
                                            cpu1 work queued
                                            cpu2 work queued
                                            ...
       cpu1 work is canceled
       cpu2 work is canceled
       ...

At the end of the GOV_STOP case cpu0 still has a work item queued
to run even though the code expects all of the work items to be
canceled. __cpufreq_remove_dev() will then proceed to
re-initialize all the other CPUs' works except for the CPU that is
going down. The CPUFREQ_GOV_START case in cpufreq_governor_dbs()
will trample over the queued work and debugobjects will spit out
a warning:

WARNING: at lib/debugobjects.c:260 debug_print_object+0x94/0xbc()
ODEBUG: init active (active state 0) object type: timer_list hint: delayed_work_timer_fn+0x0/0x10
Modules linked in:
CPU: 0 PID: 1491 Comm: sh Tainted: G        W    3.10.0 #19
[<c010c178>] (unwind_backtrace+0x0/0x11c) from [<c0109dec>] (show_stack+0x10/0x14)
[<c0109dec>] (show_stack+0x10/0x14) from [<c01904cc>] (warn_slowpath_common+0x4c/0x6c)
[<c01904cc>] (warn_slowpath_common+0x4c/0x6c) from [<c019056c>] (warn_slowpath_fmt+0x2c/0x3c)
[<c019056c>] (warn_slowpath_fmt+0x2c/0x3c) from [<c0388a7c>] (debug_print_object+0x94/0xbc)
[<c0388a7c>] (debug_print_object+0x94/0xbc) from [<c0388e34>] (__debug_object_init+0x2d0/0x340)
[<c0388e34>] (__debug_object_init+0x2d0/0x340) from [<c019e3b0>] (init_timer_key+0x14/0xb0)
[<c019e3b0>] (init_timer_key+0x14/0xb0) from [<c0635f78>] (cpufreq_governor_dbs+0x3e8/0x5f8)
[<c0635f78>] (cpufreq_governor_dbs+0x3e8/0x5f8) from [<c06325a0>] (__cpufreq_governor+0xdc/0x1a4)
[<c06325a0>] (__cpufreq_governor+0xdc/0x1a4) from [<c0633704>] (__cpufreq_remove_dev.isra.10+0x3b4/0x434)
[<c0633704>] (__cpufreq_remove_dev.isra.10+0x3b4/0x434) from [<c08989f4>] (cpufreq_cpu_callback+0x60/0x80)
[<c08989f4>] (cpufreq_cpu_callback+0x60/0x80) from [<c08a43c0>] (notifier_call_chain+0x38/0x68)
[<c08a43c0>] (notifier_call_chain+0x38/0x68) from [<c01938e0>] (__cpu_notify+0x28/0x40)
[<c01938e0>] (__cpu_notify+0x28/0x40) from [<c0892ad4>] (_cpu_down+0x7c/0x2c0)
[<c0892ad4>] (_cpu_down+0x7c/0x2c0) from [<c0892d3c>] (cpu_down+0x24/0x40)
[<c0892d3c>] (cpu_down+0x24/0x40) from [<c0893ea8>] (store_online+0x2c/0x74)
[<c0893ea8>] (store_online+0x2c/0x74) from [<c04519d8>] (dev_attr_store+0x18/0x24)
[<c04519d8>] (dev_attr_store+0x18/0x24) from [<c02a69d4>] (sysfs_write_file+0x100/0x148)
[<c02a69d4>] (sysfs_write_file+0x100/0x148) from [<c0255c18>] (vfs_write+0xcc/0x174)
[<c0255c18>] (vfs_write+0xcc/0x174) from [<c0255f70>] (SyS_write+0x38/0x64)
[<c0255f70>] (SyS_write+0x38/0x64) from [<c0106120>] (ret_fast_syscall+0x0/0x30)

Signed-off-by: Stephen Boyd <sb...@codeaurora.org>
---

On 08/27, Viresh Kumar wrote:
> On 27 August 2013 04:15, Stephen Boyd <sb...@codeaurora.org> wrote:
> > +++ b/drivers/cpufreq/cpufreq_governor.c
> > @@ -133,7 +133,7 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
> >  {
> >         int i;
> >
> > -       if (!all_cpus) {
> > +       if (!all_cpus || !policy->governor_enabled) {
> >                 __gov_queue_work(smp_processor_id(), dbs_data, delay);
> >         } else {
> >                 for_each_cpu(i, policy->cpus)
> 
> Shouldn't we simply do this instead at the top of this function?
> 
> > +       if (!policy->governor_enabled)
> > +               return;

Sure that works just as well. Here's a patch.

 drivers/cpufreq/cpufreq_governor.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 7b839a8..b9b20fd 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -133,6 +133,9 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
 {
        int i;
 
+       if (!policy->governor_enabled)
+               return;
+
        if (!all_cpus) {
                __gov_queue_work(smp_processor_id(), dbs_data, delay);
        } else {
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
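
For readers less familiar with the governor code, the following is a minimal userspace model of the race described in the commit message above. It is a hypothetical sketch, not kernel code: the struct, the NR_CPUS value, and the single-threaded re-enactment of the CPU0/CPU1 interleaving are invented for illustration, and the model assumes governor_enabled has already been cleared by the GOV_STOP path by the time gov_cancel_work() runs, which is what the added check relies on.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4                       /* arbitrary for the model */

struct model_policy {
        bool governor_enabled;
        bool work_queued[NR_CPUS];      /* stands in for the per-CPU delayed work */
};

/* stands in for __gov_queue_work(): arm the delayed work on one CPU */
static void queue_one(struct model_policy *p, int cpu)
{
        p->work_queued[cpu] = true;
}

static void gov_queue_work(struct model_policy *p, bool all_cpus, int this_cpu)
{
        if (!p->governor_enabled)       /* the fix: never queue for a stopped policy */
                return;

        if (!all_cpus) {
                queue_one(p, this_cpu);
        } else {
                for (int cpu = 0; cpu < NR_CPUS; cpu++)
                        queue_one(p, cpu);
        }
}

/* Single-threaded re-enactment of the interleaving in the diagram above. */
static void gov_cancel_work(struct model_policy *p)
{
        p->work_queued[0] = false;              /* cpu0 work is canceled */

        /* cpu1's work is still running and re-queues work for every CPU,
         * i.e. od_dbs_timer() -> gov_queue_work(*, *, true) in the diagram */
        gov_queue_work(p, true, 1);

        for (int cpu = 1; cpu < NR_CPUS; cpu++) /* cpu1, cpu2, ... are canceled */
                p->work_queued[cpu] = false;
}

int main(void)
{
        /* the model assumes GOV_STOP has already cleared governor_enabled */
        struct model_policy p = { .governor_enabled = false };

        gov_cancel_work(&p);

        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu%d work still queued after stop: %s\n",
                       cpu, p.work_queued[cpu] ? "yes (the bug)" : "no");
        return 0;
}

With the early return in place nothing is re-queued. If the check is dropped (or governor_enabled were still true), cpu0's work remains queued after gov_cancel_work() returns, which is the stale work that the later GOV_START tramples over.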


Re: [PATCH] cpufreq: Fix timer/workqueue corruption due to double queueing

2013-08-27 Thread Viresh Kumar
On 27 August 2013 04:15, Stephen Boyd <sb...@codeaurora.org> wrote:
> +++ b/drivers/cpufreq/cpufreq_governor.c
> @@ -133,7 +133,7 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
>  {
>         int i;
>
> -       if (!all_cpus) {
> +       if (!all_cpus || !policy->governor_enabled) {
>                 __gov_queue_work(smp_processor_id(), dbs_data, delay);
>         } else {
>                 for_each_cpu(i, policy->cpus)

Shouldn't we simply do this instead at the top of this function?

> +       if (!policy->governor_enabled)
> +               return;


[PATCH] cpufreq: Fix timer/workqueue corruption due to double queueing

2013-08-26 Thread Stephen Boyd

When a CPU is hot-removed we'll cancel all the delayed work items
via gov_cancel_work(). Normally this will just cancel a delayed
timer on each CPU that the policy is managing and the work won't
run, but if the work is already running, the workqueue code will
wait for the work to finish before continuing, to prevent the
work items from re-queuing themselves like they normally do. This
scheme will work most of the time, except for the case where the
work function determines that it should adjust the delay for all
other CPUs that the policy is managing. If this scenario occurs,
the canceling CPU will cancel its own work but queue up the other
CPUs' works to run. For example:

 CPU0                                      CPU1

 cpu_down()
  ...
  __cpufreq_remove_dev()
   cpufreq_governor_dbs()
    case CPUFREQ_GOV_STOP:
     gov_cancel_work(dbs_data, policy);
      cpu0 work is canceled
       timer is canceled
       cpu1 work is canceled              work runs
       waits for cpu1                     od_dbs_timer()
                                           gov_queue_work(*, *, true);
                                            cpu0 work queued
                                            cpu1 work queued
                                            cpu2 work queued
                                            ...
       cpu1 work is canceled
       cpu2 work is canceled
       ...

At the end of the GOV_STOP case cpu0 still has a work item queued
to run even though the code expects all of the work items to be
canceled. __cpufreq_remove_dev() will then proceed to
re-initialize all the other CPUs' works except for the CPU that is
going down. The CPUFREQ_GOV_START case in cpufreq_governor_dbs()
will trample over the queued work and debugobjects will spit out
a warning:

WARNING: at lib/debugobjects.c:260 debug_print_object+0x94/0xbc()
ODEBUG: init active (active state 0) object type: timer_list hint: delayed_work_timer_fn+0x0/0x10
Modules linked in:
CPU: 0 PID: 1491 Comm: sh Tainted: G        W    3.10.0 #19
[<c010c178>] (unwind_backtrace+0x0/0x11c) from [<c0109dec>] (show_stack+0x10/0x14)
[<c0109dec>] (show_stack+0x10/0x14) from [<c01904cc>] (warn_slowpath_common+0x4c/0x6c)
[<c01904cc>] (warn_slowpath_common+0x4c/0x6c) from [<c019056c>] (warn_slowpath_fmt+0x2c/0x3c)
[<c019056c>] (warn_slowpath_fmt+0x2c/0x3c) from [<c0388a7c>] (debug_print_object+0x94/0xbc)
[<c0388a7c>] (debug_print_object+0x94/0xbc) from [<c0388e34>] (__debug_object_init+0x2d0/0x340)
[<c0388e34>] (__debug_object_init+0x2d0/0x340) from [<c019e3b0>] (init_timer_key+0x14/0xb0)
[<c019e3b0>] (init_timer_key+0x14/0xb0) from [<c0635f78>] (cpufreq_governor_dbs+0x3e8/0x5f8)
[<c0635f78>] (cpufreq_governor_dbs+0x3e8/0x5f8) from [<c06325a0>] (__cpufreq_governor+0xdc/0x1a4)
[<c06325a0>] (__cpufreq_governor+0xdc/0x1a4) from [<c0633704>] (__cpufreq_remove_dev.isra.10+0x3b4/0x434)
[<c0633704>] (__cpufreq_remove_dev.isra.10+0x3b4/0x434) from [<c08989f4>] (cpufreq_cpu_callback+0x60/0x80)
[<c08989f4>] (cpufreq_cpu_callback+0x60/0x80) from [<c08a43c0>] (notifier_call_chain+0x38/0x68)
[<c08a43c0>] (notifier_call_chain+0x38/0x68) from [<c01938e0>] (__cpu_notify+0x28/0x40)
[<c01938e0>] (__cpu_notify+0x28/0x40) from [<c0892ad4>] (_cpu_down+0x7c/0x2c0)
[<c0892ad4>] (_cpu_down+0x7c/0x2c0) from [<c0892d3c>] (cpu_down+0x24/0x40)
[<c0892d3c>] (cpu_down+0x24/0x40) from [<c0893ea8>] (store_online+0x2c/0x74)
[<c0893ea8>] (store_online+0x2c/0x74) from [<c04519d8>] (dev_attr_store+0x18/0x24)
[<c04519d8>] (dev_attr_store+0x18/0x24) from [<c02a69d4>] (sysfs_write_file+0x100/0x148)
[<c02a69d4>] (sysfs_write_file+0x100/0x148) from [<c0255c18>] (vfs_write+0xcc/0x174)
[<c0255c18>] (vfs_write+0xcc/0x174) from [<c0255f70>] (SyS_write+0x38/0x64)
[<c0255f70>] (SyS_write+0x38/0x64) from [<c0106120>] (ret_fast_syscall+0x0/0x30)

The simplest fix is to check and see if the governor is being
stopped and ignore the all_cpus flag so that only the work that's
being canceled has the chance to re-queue itself.

Signed-off-by: Stephen Boyd <sb...@codeaurora.org>
---

This should probably go to stable. I think this all started happening 
in commit 031299b3be30f3ec (cpufreq: governors: Avoid unnecessary per cpu
timer interrupts, 2013-02-27).

 drivers/cpufreq/cpufreq_governor.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/cpufreq/cpufreq_governor.c b/drivers/cpufreq/cpufreq_governor.c
index 7b839a8..0375a3c 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -133,7 +133,7 @@ void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
 {
        int i;
 
-       if (!all_cpus) {
+       if (!all_cpus || !policy->governor_enabled) {
                __gov_queue_work(smp_processor_id(), dbs_data, delay);
        } else {
                for_each_cpu(i, policy->cpus)
-- 
The Qualcomm Innovation Center, Inc. is a member of the Code Aurora Forum,
hosted by The Linux Foundation
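
For comparison with the follow-up patch above, which returns early from gov_queue_work() instead, here is a small hypothetical model (invented types and values, not the kernel source) of how the two guards behave for a stopped policy. With "!all_cpus || !policy->governor_enabled", a racing od_dbs_timer() still queues work, but only on the CPU it is running on, and gov_cancel_work() has not yet finished canceling that CPU's work, so it gets cleaned up; the early-return variant queues nothing at all.

#include <stdbool.h>
#include <stdio.h>

#define NR_CPUS 4                       /* arbitrary for the model */

struct model_policy {
        bool governor_enabled;
        bool work_queued[NR_CPUS];
};

/* v1 guard: a stopped policy downgrades all_cpus to "this CPU only" */
static void gov_queue_work_v1(struct model_policy *p, bool all_cpus, int this_cpu)
{
        if (!all_cpus || !p->governor_enabled) {
                p->work_queued[this_cpu] = true;
        } else {
                for (int cpu = 0; cpu < NR_CPUS; cpu++)
                        p->work_queued[cpu] = true;
        }
}

/* v2 guard (the follow-up patch): a stopped policy queues nothing */
static void gov_queue_work_v2(struct model_policy *p, bool all_cpus, int this_cpu)
{
        if (!p->governor_enabled)
                return;

        if (!all_cpus) {
                p->work_queued[this_cpu] = true;
        } else {
                for (int cpu = 0; cpu < NR_CPUS; cpu++)
                        p->work_queued[cpu] = true;
        }
}

int main(void)
{
        /* od_dbs_timer() racing on cpu1 against GOV_STOP: all_cpus == true
         * and governor_enabled has already been cleared */
        struct model_policy v1 = { .governor_enabled = false };
        struct model_policy v2 = { .governor_enabled = false };

        gov_queue_work_v1(&v1, true, 1);
        gov_queue_work_v2(&v2, true, 1);

        for (int cpu = 0; cpu < NR_CPUS; cpu++)
                printf("cpu%d queued: v1=%d v2=%d\n",
                       cpu, v1.work_queued[cpu], v2.work_queued[cpu]);

        /* v1 leaves only cpu1 queued, and that is the work gov_cancel_work()
         * cancels next; v2 leaves nothing queued at all */
        return 0;
}

Both variants close the race shown in the diagram; the thread above settled on the early return as the simpler check.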
