Re: [PATCH V2 3/9] cpufreq: ondemand: only queue canceled works from update_sampling_rate()

2015-09-09 Thread Rafael J. Wysocki
On Wednesday, September 09, 2015 08:00:20 AM Viresh Kumar wrote:
> On 09-09-15, 03:06, Rafael J. Wysocki wrote:
> > I've just realized that if you combined this patch with the [6/9],
> > you wouldn't need to make any changes to gov_queue_work() at all,
> > because that patch removes the case in point entirely.
> 
> Yeah, but then these are really two separate issues at hand and so I
> solved them separately. Lemme know how you want to see that and I can
> change :)

I don't want to make artificial changes.

If you can address two problems in one go with one relatively simple patch,
why don't you do that?

Thanks,
Rafael


Re: [PATCH V2 3/9] cpufreq: ondemand: only queue canceled works from update_sampling_rate()

2015-09-08 Thread Viresh Kumar
On 09-09-15, 03:06, Rafael J. Wysocki wrote:
> I've just realized that if you combined this patch with the [6/9],
> you wouldn't need to make any changes to gov_queue_work() at all,
> because that patch removes the case in point entirely.

Yeah, but then these are really two separate issues at hand and so I
solved them separately. Lemme know how you want to see that and I can
change :)

-- 
viresh


Re: [PATCH V2 3/9] cpufreq: ondemand: only queue canceled works from update_sampling_rate()

2015-09-08 Thread Rafael J. Wysocki
On Tuesday, September 08, 2015 07:28:31 AM Viresh Kumar wrote:
> On 08-09-15, 03:11, Rafael J. Wysocki wrote:
> > There really are two cases, either you pass a CPU or gov_queue_work() has to
> > walk policy->cpus.
> 
> Right (At least for now, we are doing just that.)
> 
> > Doing it the way you did hides that IMO.
> 
> Maybe. But I see it otherwise. Adding special meaning to a variable
> (like int cpu == -1 being the special case to specify policy->cpus)
> hides things more, as we need to look at how it is finally decoded in
> the routine gov_queue_work().

Oh well.

I've just realized that if you combined this patch with the [6/9],
you wouldn't need to make any changes to gov_queue_work() at all,
because that patch removes the case in point entirely.

Thanks,
Rafael


Re: [PATCH V2 3/9] cpufreq: ondemand: only queue canceled works from update_sampling_rate()

2015-09-07 Thread Viresh Kumar
On 08-09-15, 03:11, Rafael J. Wysocki wrote:
> There really are two cases: either you pass a CPU or gov_queue_work() has to
> walk policy->cpus.

Right (At least for now, we are doing just that.)

> Doing it the way you did hides that IMO.

Maybe. But I see it otherwise. Adding special meaning to a variable
(like int cpu == -1 being the special case to specify policy->cpus)
hides things more, as we need to look at how it is finally decoded in
the routine gov_queue_work().

But if we send a mask instead, it is very clear from the caller's site
what we are trying to do.
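
For example, the two styles at the call sites would look roughly like this
(an illustrative sketch only, not code taken from this series):

	/* mask version: what gets queued is explicit at the call site */
	gov_queue_work(dbs_data, policy, delay, policy->cpus);
	gov_queue_work(dbs_data, policy, delay,
		       cpumask_of(raw_smp_processor_id()));

	/* special-value version: the reader has to know what -1 means */
	gov_queue_work(dbs_data, policy, delay, -1);
	gov_queue_work(dbs_data, policy, delay, raw_smp_processor_id());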

> I'd simply pass an int and use a special value to indicate that policy->cpus
> is to be walked.

Like cpu == -1 thing? Or something else?

> > -   if (!all_cpus) {
> > -   /*
> > -* Use raw_smp_processor_id() to avoid preemptible warnings.
> > -* We know that this is only called with all_cpus == false from
> > -* works that have been queued with *_work_on() functions and
> > -* those works are canceled during CPU_DOWN_PREPARE so they
> > -* can't possibly run on any other CPU.
> > -*/
> 
> This was a useful comment and it should be moved along with the logic it
> was supposed to explain and not just dropped.

Sigh

> > -   __gov_queue_work(raw_smp_processor_id(), dbs_data, delay);
> > -   } else {
> > -   for_each_cpu(i, policy->cpus)
> > -   __gov_queue_work(i, dbs_data, delay);
> > -   }
> > +   for_each_cpu(i, cpus)
> > +   __gov_queue_work(i, dbs_data, delay);
> >  
> >  out_unlock:
> > mutex_unlock(&cpufreq_governor_lock);
> > @@ -232,7 +221,8 @@ static void dbs_timer(struct work_struct *work)
> > struct cpufreq_policy *policy = shared->policy;
> > struct dbs_data *dbs_data = policy->governor_data;
> > unsigned int sampling_rate, delay;
> > -   bool modify_all = true;
> > +   const struct cpumask *cpus;
> 
> I don't think this local variable is necessary.
> 
> > +   bool load_eval;
> >  
> > mutex_lock(&shared->timer_mutex);
> >  
> > @@ -246,11 +236,11 @@ static void dbs_timer(struct work_struct *work)
> > sampling_rate = od_tuners->sampling_rate;
> > }
> >  
> > -   if (!need_load_eval(cdbs->shared, sampling_rate))
> > -   modify_all = false;
> > +   load_eval = need_load_eval(cdbs->shared, sampling_rate);
> > +   cpus = load_eval ? policy->cpus : cpumask_of(raw_smp_processor_id());
> >  
> > -   delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, modify_all);
> > -   gov_queue_work(dbs_data, policy, delay, modify_all);
> > +   delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, load_eval);
> > +   gov_queue_work(dbs_data, policy, delay, cpus);

Avoiding that local variable would have made this a little longer, but
I can surely drop it :)

gov_queue_work(dbs_data, policy, delay,
	       load_eval ? policy->cpus :
	       cpumask_of(raw_smp_processor_id()));

-- 
viresh


Re: [PATCH V2 3/9] cpufreq: ondemand: only queue canceled works from update_sampling_rate()

2015-09-07 Thread Rafael J. Wysocki
On Monday, July 27, 2015 05:58:08 PM Viresh Kumar wrote:
> The sampling rate is updated with a call to update_sampling_rate(), and
> we process CPUs one by one there. While the work is canceled on a per-CPU
> basis, it is getting queued (by mistake) for all policy->cpus.
> 
> This results in wasted CPU cycles, queuing works that are already
> queued and were never canceled.
> 
> This patch changes the behavior to queue work only on the CPU for
> which the work was canceled earlier.
> 
> To do that, replace the 'modify_all' parameter of gov_queue_work() with
> a mask of CPUs.

There really are two cases: either you pass a CPU or gov_queue_work() has to
walk policy->cpus.  Doing it the way you did hides that IMO.

I'd simply pass an int and use a special value to indicate that policy->cpus
is to be walked.
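
Something along these lines, say (a rough sketch of the idea only, based on
the existing gov_queue_work(), not a tested patch):

void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
		    unsigned int delay, int cpu)
{
	int i;

	mutex_lock(&cpufreq_governor_lock);
	if (!policy->governor_enabled)
		goto out_unlock;

	if (cpu < 0) {
		/* Special value: walk all CPUs of the policy. */
		for_each_cpu(i, policy->cpus)
			__gov_queue_work(i, dbs_data, delay);
	} else {
		/* Queue only on the CPU the caller passed in. */
		__gov_queue_work(cpu, dbs_data, delay);
	}

out_unlock:
	mutex_unlock(&cpufreq_governor_lock);
}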

> Also, the last parameter of ->gov_dbs_timer() was named 'modify_all'
> earlier, but its purpose was to decide whether load has to be evaluated
> again. Let's rename it to load_eval.
> 
> Fixes: 031299b3be30 ("cpufreq: governors: Avoid unnecessary per cpu timer 
> interrupts")
> Signed-off-by: Viresh Kumar 
> ---
>  drivers/cpufreq/cpufreq_conservative.c |  4 ++--
>  drivers/cpufreq/cpufreq_governor.c | 30 ++
>  drivers/cpufreq/cpufreq_governor.h |  4 ++--
>  drivers/cpufreq/cpufreq_ondemand.c |  7 ---
>  4 files changed, 18 insertions(+), 27 deletions(-)
> 
> diff --git a/drivers/cpufreq/cpufreq_conservative.c 
> b/drivers/cpufreq/cpufreq_conservative.c
> index 18bfbc313e48..1aa3bd46cea3 100644
> --- a/drivers/cpufreq/cpufreq_conservative.c
> +++ b/drivers/cpufreq/cpufreq_conservative.c
> @@ -116,11 +116,11 @@ static void cs_check_cpu(int cpu, unsigned int load)
>  }
>  
>  static unsigned int cs_dbs_timer(struct cpu_dbs_info *cdbs,
> -  struct dbs_data *dbs_data, bool modify_all)
> +  struct dbs_data *dbs_data, bool load_eval)
>  {
>   struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
>  
> - if (modify_all)
> + if (load_eval)
>   dbs_check_cpu(dbs_data, cdbs->shared->policy->cpu);
>  
>   return delay_for_sampling_rate(cs_tuners->sampling_rate);
> diff --git a/drivers/cpufreq/cpufreq_governor.c 
> b/drivers/cpufreq/cpufreq_governor.c
> index 750626d8fb03..a890450711bb 100644
> --- a/drivers/cpufreq/cpufreq_governor.c
> +++ b/drivers/cpufreq/cpufreq_governor.c
> @@ -167,7 +167,7 @@ static inline void __gov_queue_work(int cpu, struct 
> dbs_data *dbs_data,
>  }
>  
>  void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
> - unsigned int delay, bool all_cpus)
> + unsigned int delay, const struct cpumask *cpus)
>  {
>   int i;
>  
> @@ -175,19 +175,8 @@ void gov_queue_work(struct dbs_data *dbs_data, struct 
> cpufreq_policy *policy,
>   if (!policy->governor_enabled)
>   goto out_unlock;
>  
> - if (!all_cpus) {
> - /*
> -  * Use raw_smp_processor_id() to avoid preemptible warnings.
> -  * We know that this is only called with all_cpus == false from
> -  * works that have been queued with *_work_on() functions and
> -  * those works are canceled during CPU_DOWN_PREPARE so they
> -  * can't possibly run on any other CPU.
> -  */

This was a useful comment and it should be moved along with the logic it was
supposed to explain and not just dropped.

> - __gov_queue_work(raw_smp_processor_id(), dbs_data, delay);
> - } else {
> - for_each_cpu(i, policy->cpus)
> - __gov_queue_work(i, dbs_data, delay);
> - }
> + for_each_cpu(i, cpus)
> + __gov_queue_work(i, dbs_data, delay);
>  
>  out_unlock:
>   mutex_unlock(&cpufreq_governor_lock);
> @@ -232,7 +221,8 @@ static void dbs_timer(struct work_struct *work)
>   struct cpufreq_policy *policy = shared->policy;
>   struct dbs_data *dbs_data = policy->governor_data;
>   unsigned int sampling_rate, delay;
> - bool modify_all = true;
> + const struct cpumask *cpus;

I don't think this local variable is necessary.

> + bool load_eval;
>  
>   mutex_lock(&shared->timer_mutex);
>  
> @@ -246,11 +236,11 @@ static void dbs_timer(struct work_struct *work)
>   sampling_rate = od_tuners->sampling_rate;
>   }
>  
> - if (!need_load_eval(cdbs->shared, sampling_rate))
> - modify_all = false;
> + load_eval = need_load_eval(cdbs->shared, sampling_rate);
> + cpus = load_eval ? policy->cpus : cpumask_of(raw_smp_processor_id());
>  
> - delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, modify_all);
> - gov_queue_work(dbs_data, policy, delay, modify_all);
> + delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, load_eval);
> + gov_queue_work(dbs_data, policy, delay, cpus);
>  
>   mutex_unlock(&shared->timer_mutex);
>  }
> @@ -474,7 +464,7 @@ static int 


[PATCH V2 3/9] cpufreq: ondemand: only queue canceled works from update_sampling_rate()

2015-07-27 Thread Viresh Kumar
The sampling rate is updated with a call to update_sampling_rate(), and
we process CPUs one by one there. While the work is canceled on a per-CPU
basis, it is getting queued (by mistake) for all policy->cpus.

This results in wasted CPU cycles, queuing works that are already
queued and were never canceled.

This patch changes the behavior to queue work only on the CPU for
which the work was canceled earlier.

To do that, replace the 'modify_all' parameter of gov_queue_work() with
a mask of CPUs. Also, the last parameter of ->gov_dbs_timer() was named
'modify_all' earlier, but its purpose was to decide whether load has to
be evaluated again. Let's rename it to load_eval.

Fixes: 031299b3be30 ("cpufreq: governors: Avoid unnecessary per cpu timer 
interrupts")
Signed-off-by: Viresh Kumar 
---
 drivers/cpufreq/cpufreq_conservative.c |  4 ++--
 drivers/cpufreq/cpufreq_governor.c | 30 ++
 drivers/cpufreq/cpufreq_governor.h |  4 ++--
 drivers/cpufreq/cpufreq_ondemand.c |  7 ---
 4 files changed, 18 insertions(+), 27 deletions(-)

diff --git a/drivers/cpufreq/cpufreq_conservative.c 
b/drivers/cpufreq/cpufreq_conservative.c
index 18bfbc313e48..1aa3bd46cea3 100644
--- a/drivers/cpufreq/cpufreq_conservative.c
+++ b/drivers/cpufreq/cpufreq_conservative.c
@@ -116,11 +116,11 @@ static void cs_check_cpu(int cpu, unsigned int load)
 }
 
 static unsigned int cs_dbs_timer(struct cpu_dbs_info *cdbs,
-struct dbs_data *dbs_data, bool modify_all)
+struct dbs_data *dbs_data, bool load_eval)
 {
struct cs_dbs_tuners *cs_tuners = dbs_data->tuners;
 
-   if (modify_all)
+   if (load_eval)
dbs_check_cpu(dbs_data, cdbs->shared->policy->cpu);
 
return delay_for_sampling_rate(cs_tuners->sampling_rate);
diff --git a/drivers/cpufreq/cpufreq_governor.c 
b/drivers/cpufreq/cpufreq_governor.c
index 750626d8fb03..a890450711bb 100644
--- a/drivers/cpufreq/cpufreq_governor.c
+++ b/drivers/cpufreq/cpufreq_governor.c
@@ -167,7 +167,7 @@ static inline void __gov_queue_work(int cpu, struct 
dbs_data *dbs_data,
 }
 
 void gov_queue_work(struct dbs_data *dbs_data, struct cpufreq_policy *policy,
-   unsigned int delay, bool all_cpus)
+   unsigned int delay, const struct cpumask *cpus)
 {
int i;
 
@@ -175,19 +175,8 @@ void gov_queue_work(struct dbs_data *dbs_data, struct 
cpufreq_policy *policy,
if (!policy->governor_enabled)
goto out_unlock;
 
-   if (!all_cpus) {
-   /*
-* Use raw_smp_processor_id() to avoid preemptible warnings.
-* We know that this is only called with all_cpus == false from
-* works that have been queued with *_work_on() functions and
-* those works are canceled during CPU_DOWN_PREPARE so they
-* can't possibly run on any other CPU.
-*/
-   __gov_queue_work(raw_smp_processor_id(), dbs_data, delay);
-   } else {
-   for_each_cpu(i, policy->cpus)
-   __gov_queue_work(i, dbs_data, delay);
-   }
+   for_each_cpu(i, cpus)
+   __gov_queue_work(i, dbs_data, delay);
 
 out_unlock:
mutex_unlock(&cpufreq_governor_lock);
@@ -232,7 +221,8 @@ static void dbs_timer(struct work_struct *work)
struct cpufreq_policy *policy = shared->policy;
struct dbs_data *dbs_data = policy->governor_data;
unsigned int sampling_rate, delay;
-   bool modify_all = true;
+   const struct cpumask *cpus;
+   bool load_eval;
 
mutex_lock(&shared->timer_mutex);
 
@@ -246,11 +236,11 @@ static void dbs_timer(struct work_struct *work)
sampling_rate = od_tuners->sampling_rate;
}
 
-   if (!need_load_eval(cdbs->shared, sampling_rate))
-   modify_all = false;
+   load_eval = need_load_eval(cdbs->shared, sampling_rate);
+   cpus = load_eval ? policy->cpus : cpumask_of(raw_smp_processor_id());
 
-   delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, modify_all);
-   gov_queue_work(dbs_data, policy, delay, modify_all);
+   delay = dbs_data->cdata->gov_dbs_timer(cdbs, dbs_data, load_eval);
+   gov_queue_work(dbs_data, policy, delay, cpus);
 
mutex_unlock(&shared->timer_mutex);
 }
@@ -474,7 +464,7 @@ static int cpufreq_governor_start(struct cpufreq_policy 
*policy,
}
 
gov_queue_work(dbs_data, policy, delay_for_sampling_rate(sampling_rate),
-  true);
+  policy->cpus);
return 0;
 }
 
diff --git a/drivers/cpufreq/cpufreq_governor.h 
b/drivers/cpufreq/cpufreq_governor.h
index 5621bb03e874..52665a0624b2 100644
--- a/drivers/cpufreq/cpufreq_governor.h
+++ b/drivers/cpufreq/cpufreq_governor.h
@@ -211,7 +211,7 @@ struct common_dbs_data {
void *(*get_cpu_dbs_info_s)(int cpu);
unsigned int (*gov_dbs_timer)(struct cpu_dbs_info *cdbs,
 
