* Steve Muckle wrote:
> From: Michael Turquette
>
> Scheduler-driven CPU frequency selection hopes to exploit both
> per-task and global information in the scheduler to improve frequency
> selection policy, achieving lower power consumption, improved
> responsiveness/performance, and less
On Tue, Mar 01, 2016 at 11:49:10PM -0800, Michael Turquette wrote:
>
> In my over-simplified view of the scheduler, it would be great if we
> could have a backdoor mechanism to place the frequency transition
> kthread onto a runqueue from within the schedule() context and dispense
> with the
On 02/03/16 19:50, Steve Muckle wrote:
> On 03/02/2016 06:49 PM, Rafael J. Wysocki wrote:
> > I'm not actually sure if RT is the right answer here. DL may be a
> > better choice. After all, we want the thing to happen shortly, but
> > not necessarily at full speed.
> >
> > So something like a
On 03/02/2016 06:49 PM, Rafael J. Wysocki wrote:
> I'm not actually sure if RT is the right answer here. DL may be a
> better choice. After all, we want the thing to happen shortly, but
> not necessarily at full speed.
>
> So something like a DL workqueue would be quite useful here it seems.
On Wed, Mar 2, 2016 at 8:49 AM, Michael Turquette
wrote:
>
[cut]
> I do not have any data to back up a case for stalls caused by RT/DL
> starvation, but conceptually I would say that latency is fundamentally
> more important in a scheduler-driven cpu frequency selection scenario,
> versus the
Hi,
I'm still catching up on the plurality of scheduler/cpufreq threads but
I thought I would chime in with some historical reasons for why
cpufreq_sched.c looks the way it does today.
Quoting Steve Muckle (2016-02-25 16:34:23)
> On 02/24/2016 07:55 PM, Rafael J. Wysocki wrote:
> > On Monday,
On Tue, Mar 1, 2016 at 3:31 PM, Peter Zijlstra wrote:
> On Sun, Feb 28, 2016 at 03:26:21AM +0100, Rafael J. Wysocki wrote:
>
>> > > That said I'm unconvinced about the approach still.
>> > >
>> > > Having more RT threads in a system that already is under RT pressure
>> > > seems like a
On Tue, Mar 1, 2016 at 1:57 PM, Peter Zijlstra wrote:
> On Sat, Feb 27, 2016 at 01:08:02AM +0100, Rafael J. Wysocki wrote:
>> @@ -95,18 +98,20 @@ EXPORT_SYMBOL_GPL(cpufreq_set_update_uti
>> *
>> * This function is called by the scheduler on every invocation of
>> * update_load_avg() on the
On Sun, Feb 28, 2016 at 03:26:21AM +0100, Rafael J. Wysocki wrote:
> > > That said I'm unconvinced about the approach still.
> > >
> > > Having more RT threads in a system that already is under RT pressure
> > > seems like a recipe for trouble. Moreover, it's likely that those new RT
On Fri, Feb 26, 2016 at 08:17:46PM -0800, Steve Muckle wrote:
> > But then it would only make a difference if cpufreq_update_util() was not
> > used at all (ie. no callbacks installed for any policies by anyone). The
> > only reason why it may matter is that the total number of systems using
> >
On Thu, Feb 25, 2016 at 04:34:23PM -0800, Steve Muckle wrote:
> >> + /*
> >> + * Ensure that all CPUs currently part of this policy are out
> >> + * of the hot path so that if this policy exits we can free gd.
> >> + */
> >> + preempt_disable();
> >> + smp_call_function_many(policy->cpus,
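The hunk quoted above is the part of cpufreq_sched_stop() being reviewed: a no-op cross-call flushes every CPU in the policy out of the update hot path before gd can be freed. A fuller kernel-context sketch of that pattern (not standalone-buildable code; the per-cpu `enabled` flag and `gd->task` field are simplified placeholders for this sketch):

```c
/* Kernel-context sketch of the IPI-based teardown under review. */
static DEFINE_PER_CPU(bool, enabled);

static void dummy(void *info) {}

static int cpufreq_sched_stop(struct cpufreq_policy *policy)
{
	struct gov_data *gd = policy->governor_data;
	int cpu;

	/* Stop new entries into the hot path first. */
	for_each_cpu(cpu, policy->cpus)
		per_cpu(enabled, cpu) = false;

	/*
	 * Ensure that all CPUs currently part of this policy are out
	 * of the hot path so that if this policy exits we can free gd.
	 * The dummy IPI completes only after each CPU has left any
	 * section it was in when the call was posted.
	 */
	preempt_disable();
	smp_call_function_many(policy->cpus, dummy, NULL, true);
	preempt_enable();

	kthread_stop(gd->task);
	policy->governor_data = NULL;
	kfree(gd);
	return 0;
}
```

This is the approach Peter later objects to as "spraying IPIs"; the RCU alternative discussed elsewhere in the thread achieves the same quiescence without interrupting the other CPUs.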
On Sat, Feb 27, 2016 at 01:08:02AM +0100, Rafael J. Wysocki wrote:
> @@ -95,18 +98,20 @@ EXPORT_SYMBOL_GPL(cpufreq_set_update_uti
> *
> * This function is called by the scheduler on every invocation of
> * update_load_avg() on the CPU whose utilization is being updated.
> + *
> + * It can
On Friday, February 26, 2016 08:17:46 PM Steve Muckle wrote:
> On 02/26/2016 06:39 PM, Rafael J. Wysocki wrote:
> >>> One thing I personally like in the RCU-based approach is its
> >>> universality. The callbacks may be installed by different entities in a uniform way:
> >>> intel_pstate
On 02/26/2016 06:39 PM, Rafael J. Wysocki wrote:
>>> One thing I personally like in the RCU-based approach is its universality.
>>> The callbacks may be installed by different entities in a uniform way:
>>> intel_pstate
>>> can do that, the old governors can do that, my experimental
On Thursday, February 25, 2016 04:34:23 PM Steve Muckle wrote:
> On 02/24/2016 07:55 PM, Rafael J. Wysocki wrote:
> > Hi,
[cut]
> > One thing I personally like in the RCU-based approach is its universality.
> > The callbacks may be installed by different entities in a uniform way:
> >
On Friday, February 26, 2016 10:18:43 AM Peter Zijlstra wrote:
> On Thu, Feb 25, 2016 at 10:08:48PM +0100, Rafael J. Wysocki wrote:
> > On Thursday, February 25, 2016 10:28:37 AM Peter Zijlstra wrote:
> > > It's vile though; one should not spray IPIs if one can avoid it. Such
> > > things are much
On Thu, Feb 25, 2016 at 10:08:48PM +0100, Rafael J. Wysocki wrote:
> On Thursday, February 25, 2016 10:28:37 AM Peter Zijlstra wrote:
> > It's vile though; one should not spray IPIs if one can avoid it. Such
> > things are much better done with RCU. Sure sync_sched() takes a little
> > longer, but
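Peter's suggestion is that teardown should publish/unpublish the callback pointer and wait with synchronize_sched() instead of broadcasting IPIs. A kernel-context sketch of that shape (not standalone-buildable; the per-cpu name `cpufreq_update_util_data` follows Rafael's series, the rest is simplified for illustration):

```c
/* Kernel-context sketch of the RCU-based teardown being suggested. */
static int cpufreq_sched_stop(struct cpufreq_policy *policy)
{
	struct gov_data *gd = policy->governor_data;
	int cpu;

	/* Unpublish the hook; readers run with preemption disabled. */
	for_each_cpu(cpu, policy->cpus)
		rcu_assign_pointer(per_cpu(cpufreq_update_util_data, cpu),
				   NULL);

	/*
	 * synchronize_sched() takes longer than an IPI broadcast but costs
	 * the other CPUs nothing: once it returns, no CPU can still be
	 * running an old callback, so gd may be freed safely.
	 */
	synchronize_sched();

	policy->governor_data = NULL;
	kfree(gd);
	return 0;
}
```

The trade-off is exactly the one named in the quote: a slower, blocking grace period on the teardown path in exchange for zero disturbance of the hot path.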
On 02/24/2016 07:55 PM, Rafael J. Wysocki wrote:
> Hi,
>
> I promised a review and here it goes.
Thanks Rafael for your detailed review.
>
> Let me focus on this one as the rest seems to depend on it.
>
> On Monday, February 22, 2016 05:22:43 PM Steve Muckle wrote:
>> From: Michael Turquette
On Thursday, February 25, 2016 10:28:37 AM Peter Zijlstra wrote:
> On Thu, Feb 25, 2016 at 04:55:57AM +0100, Rafael J. Wysocki wrote:
> > > +static void dummy(void *info) {}
> > > +
> > > +static int cpufreq_sched_stop(struct cpufreq_policy *policy)
> > > +{
> > > + struct gov_data *gd =
On Thursday, February 25, 2016 10:21:50 AM Peter Zijlstra wrote:
> On Thu, Feb 25, 2016 at 04:55:57AM +0100, Rafael J. Wysocki wrote:
> > Well, I'm not familiar with static keys and how they work, so you'll need to
> > explain this part to me.
>
> See include/linux/jump_label.h, it has lots of
On Thursday, February 25, 2016 04:55:57 AM Rafael J. Wysocki wrote:
> Hi,
>
[cut]
> > + while (true) {
> > + set_current_state(TASK_INTERRUPTIBLE);
> > + if (kthread_should_stop()) {
> > + set_current_state(TASK_RUNNING);
> > + break;
>
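The quoted loop is cut off by the archive; the canonical shape of such a governor kthread is worth seeing whole. A kernel-context sketch (not the actual patch; `gd` and `do_transition()` are placeholders for whatever performs the frequency change):

```c
/* Kernel-context sketch of the kthread loop quoted above: sleep until the
 * scheduler hot path wakes us, exit cleanly when kthread_stop() is called. */
static int cpufreq_sched_thread(void *data)
{
	struct gov_data *gd = data;

	while (true) {
		set_current_state(TASK_INTERRUPTIBLE);
		if (kthread_should_stop()) {
			set_current_state(TASK_RUNNING);
			break;
		}
		/* Sleep; wake_up_process() from the hot path resumes us. */
		schedule();
		set_current_state(TASK_RUNNING);
		do_transition(gd);   /* placeholder: issue the freq change */
	}
	return 0;
}
```

Setting TASK_INTERRUPTIBLE before checking kthread_should_stop() closes the race where a stop request arrives between the check and the schedule(); the scheduling class this thread runs under (RT vs. DL) is the open question in the rest of the thread.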
On Thu, Feb 25, 2016 at 04:55:57AM +0100, Rafael J. Wysocki wrote:
> > +static void dummy(void *info) {}
> > +
> > +static int cpufreq_sched_stop(struct cpufreq_policy *policy)
> > +{
> > + struct gov_data *gd = policy->governor_data;
> > + int cpu;
> > +
> > + /*
> > + * The schedfreq
On Thu, Feb 25, 2016 at 04:55:57AM +0100, Rafael J. Wysocki wrote:
> Well, I'm not familiar with static keys and how they work, so you'll need to
> explain this part to me.
See include/linux/jump_label.h, it has lots of text on them. There is
also Documentation/static-keys.txt
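For readers following the pointer, a minimal example of the jump_label API in question (kernel-context sketch; the key name and `update_cpu_frequency()` hook are invented for illustration):

```c
/* Sketch of a static key per include/linux/jump_label.h and
 * Documentation/static-keys.txt: the disabled branch compiles to a no-op
 * and is patched at runtime, so it costs nothing in the scheduler hot path. */
static DEFINE_STATIC_KEY_FALSE(sched_freq_key);

void sched_freq_enable(void)
{
	static_branch_enable(&sched_freq_key);   /* patches the branch in */
}

void sched_freq_tick(void)
{
	if (static_branch_unlikely(&sched_freq_key)) {
		/* governor hook runs only while the key is enabled */
		update_cpu_frequency();
	}
}
```

This is why static keys keep coming up in the thread: they let the governor's per-tick hook vanish entirely when no schedfreq policy is active.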
Hi,
I promised a review and here it goes.
Let me focus on this one as the rest seems to depend on it.
On Monday, February 22, 2016 05:22:43 PM Steve Muckle wrote:
> From: Michael Turquette
>
> Scheduler-driven CPU frequency selection hopes to exploit both
> per-task and global information in
From: Michael Turquette
Scheduler-driven CPU frequency selection hopes to exploit both
per-task and global information in the scheduler to improve frequency
selection policy, achieving lower power consumption, improved
responsiveness/performance, and less reliance on heuristics and
tunables. For