David Vengerov wrote:
> Thanks, Bart. It seems, then, that there are two types of policies that can be 
> deployed in a system. The first type of policy decreases CPU clock 
> frequency if the CPU utilization drops below 100% and increases the 
> frequency as the CPU utilization rises. The interesting question with 
> this policy is what frequency f should be used (as a fraction of the 
> maximum) when a certain CPU utilization is observed. 

Actually, for x86 this is already defined for you. CPUs cannot necessarily 
be changed to an arbitrary frequency. Usually, there are a limited 
number of frequencies that are supported and those frequencies are 
exported to the OS via the ACPI _PSS objects.
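
For illustration, here's a rough C sketch of the kind of discrete table
_PSS gives you, and how a governor might snap a demand estimate to the
nearest supported state. The state values and the pick_pstate() helper
are invented for the example; real entries come from firmware and also
carry latency and control/status fields:

#include <stddef.h>

/*
 * Toy stand-in for _PSS data: each P-state pairs a frequency with a
 * typical power draw. The numbers below are made up.
 */
typedef struct {
    unsigned int freq_mhz;
    unsigned int power_mw;
} pstate_t;

static const pstate_t pstates[] = {    /* fastest first */
    { 2800, 90000 },
    { 2400, 70000 },
    { 1800, 50000 },
    { 1000, 25000 },
};
#define NPSTATES (sizeof (pstates) / sizeof (pstates[0]))

/*
 * Pick the slowest exported state that still covers the demand
 * observed at the current frequency; we can't dial in an arbitrary
 * f, only one of the exported states.
 */
static const pstate_t *
pick_pstate(double util, unsigned int cur_freq_mhz)
{
    double demand = util * cur_freq_mhz;    /* MHz worth of work */
    size_t i;

    for (i = NPSTATES; i-- > 0; )           /* scan slowest first */
        if (pstates[i].freq_mhz >= demand)
            return (&pstates[i]);
    return (&pstates[0]);                   /* saturated: full speed */
}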

> As Bart pointed 
> out, different workloads will respond differently to decreases in CPU 
> frequency (the CPU utilization may rise proportionately or it may not 
> rise at all). The best way to approach this problem, I think, is to ask 
> the user to specify (or choose from several options) a utility curve 
> describing the accepted performance degradation vs. a decrease in the 
> consumed power.

Yes, I believe the Hardware Abstraction Layer (HAL) specification takes 
this approach:

http://people.freedesktop.org/~david/hal-spec/hal-spec.html#interface-cpufreq

See the [GS]etCPUFreqPerformance method.
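
For anyone who wants to poke at that interface from user land, a
minimal libdbus sketch might look like the following. The method name
is from the spec above; the object path and the 1..100 argument range
are my reading of it, so treat those as assumptions:

#include <stdio.h>
#include <dbus/dbus.h>

int
main(void)
{
    DBusError err;
    DBusConnection *conn;
    DBusMessage *msg, *reply;
    dbus_int32_t perf = 50;    /* performance-vs-power knob, 1..100 */

    dbus_error_init(&err);
    conn = dbus_bus_get(DBUS_BUS_SYSTEM, &err);
    if (conn == NULL) {
        (void) fprintf(stderr, "bus: %s\n", err.message);
        return (1);
    }

    /* Object path below is an assumption based on the spec. */
    msg = dbus_message_new_method_call("org.freedesktop.Hal",
        "/org/freedesktop/Hal/devices/computer",
        "org.freedesktop.Hal.Device.CPUFreq",
        "SetCPUFreqPerformance");
    (void) dbus_message_append_args(msg, DBUS_TYPE_INT32, &perf,
        DBUS_TYPE_INVALID);

    reply = dbus_connection_send_with_reply_and_block(conn, msg,
        -1, &err);
    if (reply == NULL) {
        (void) fprintf(stderr, "call: %s\n", err.message);
        return (1);
    }
    dbus_message_unref(reply);
    dbus_message_unref(msg);
    return (0);
}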

> Then, the application feedback can be used to tune the
> policy that sets CPU frequency based on observed CPU utilization so as 
> to maximize the user utility. What do you think about this approach? Is 
> this something you would like to experiment with?

Sure. We should have a good start once I integrate the current Enhanced 
SpeedStep support into Nevada (before the end of the month). I think 
we'd want to decouple the CPU driver from the Solaris Power Management 
framework (initially, anyway) so that we could have finer control.
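
Once the driver is decoupled, the tuning loop you describe could be
prototyped in user land first. Here's a crude hill-climbing sketch;
measure_utility() is a hypothetical hook standing in for application
feedback scored against the user's utility curve, and the step sizes
are arbitrary:

/* Assumed feedback hook: runs the workload at the given threshold
 * and returns the resulting utility. */
extern double measure_utility(double up_threshold);

/*
 * Tune one policy parameter: the utilization threshold above which
 * we step the frequency up.
 */
static double
tune_threshold(double thresh)
{
    double step = 0.05;
    double best = measure_utility(thresh);
    int i;

    for (i = 0; i < 20; i++) {
        double trial = thresh + step;
        double u;

        if (trial <= 0.0 || trial >= 1.0) {
            step = -step / 2.0;    /* bounced off a bound */
            continue;
        }
        u = measure_utility(trial);
        if (u > best) {
            best = u;
            thresh = trial;        /* keep moving this way */
        } else {
            step = -step / 2.0;    /* overshot: reverse, shrink */
        }
    }
    return (thresh);
}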

> The second type of policy decides whether the load on several CPUs 
> should be "compacted" into fewer CPUs so as to create some idle CPUs 
> that can be kept running at the minimum frequency. This decision can be 
> made based on the current and recent utilization of the CPUs, their run 
> queue lengths, etc. The ultimate choice between policies of this type 
> should be made based on the application feedback and on the performance 
> vs. power utility curve specified, so that the policy that maximizes the 
> final utility is chosen. Do you think this approach is also worth 
> evaluating?

Yes, I do. You seem to be implying that these two approaches are 
mutually exclusive? Why wouldn't we want to do both?
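
To make the second policy concrete, the first-order decision is just
arithmetic on recent per-CPU utilization. An invented heuristic along
these lines (the 20% headroom figure is arbitrary) could gate the
actual migration and parking steps:

#include <stddef.h>

/*
 * Invented compaction heuristic: if the aggregate load fits on fewer
 * CPUs with headroom to spare, the remainder can be idled and parked
 * at the lowest supported frequency.
 */
static size_t
cpus_needed(const double *util, size_t ncpus)
{
    double total = 0.0;
    size_t i, need;

    for (i = 0; i < ncpus; i++)
        total += util[i];

    /* Keep each remaining CPU under 80% so compaction doesn't
     * saturate the survivors. */
    need = (size_t)(total / 0.8) + 1;
    return (need < ncpus ? need : ncpus);
}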

> 
> Bart Smaalders wrote:
> 
>> David Vengerov wrote:
>>
>>> Mark,
>>> Thank you for the link describing Enhanced Intel SpeedStep
>>> Technology. I noticed that they suggest that the kernel poll each
>>> CPU at regular intervals for its utilization and then adjust the
>>> CPU frequency based on that utilization.
>>>
>>> Do you guys think that the CPU frequency should be changed in 
>>> response to changes in CPU utilization? 
>>
>>
>> Sure - if the CPU is 100% busy and then becomes idle, reducing CPU 
>> frequency seems like a reasonable plan.  Of course, if the only
>> thing the user cares about is response latency to infrequent events,
>> this policy won't work well.  But as a default, it's a pretty good 
>> choice.
>>
>>> If so, should we expect the CPU utilization to rise as the CPU 
>>> frequency is decreased and vice versa?
>>
>>
>> Depends on workload.  If code is memory bound on a non-threaded CPU
>> when it runs, CPU freq. will have little effect on either throughput
>> or CPU utilization.
>>
>> On the other hand, if we're computing Pi periodically, I would
>> expect utilization to go up as freq. goes down.
>>
>> - Bart

