David Vengerov wrote:
> Mark Haywood wrote:
>
>> Again I'm not following. Can you describe a pure utility curve and 
>> explain how it would be presented to the system for its consumption? 
>
> The utility curve can be provided by the user/administrator by 
> clicking on one of the graphic images presented by the system, where 
> each image depicts a concave, convex, or linear curve starting at 
> (0,0) and ending at (1,1). The x-axis is the power level at which the 
> user wants the system to operate (as a fraction of the maximum power 
> level) and the y-axis is the performance the user wants the system to 
> achieve (as a fraction of the maximum performance). This curve 
> represents the set of (power, performance) points among which the user 
> is indifferent (it can be called an "indifference" curve, or "Efficient 
> Frontier" in economic terms), thus allowing the power management 
> algorithm to make tradeoffs, such as deciding whether it is worthwhile 
> to "compact" the workload onto fewer CPUs, losing some performance but 
> saving some power as a result.

Ok. So this is more of an interface design question. The two points on 
the curve you are talking about are:
    - The low point, the SLA or the minimum level of performance you are 
willing to accept from the system/application/etc.
    - The high point, the maximum amount of power you are willing to have 
the system consume.

Between those two points is where the system could optimize in a 
workload- and platform-specific way, making trade-offs where beneficial.
I think we can (and should) design the interface so that administrators 
can specify those curve points using terms they care about (system 
power, application performance, etc.).
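
To make that concrete, here is a rough sketch (illustrative Python; the 
function names, the power-of-x parameterization, and the way the admin's 
two points are mapped onto the normalized curve are all assumptions, not 
an existing interface) of how such a curve could be evaluated:

    # Illustrative sketch only -- names and parameterization are
    # assumed, not part of any existing interface.
    def required_performance(power_fraction, shape=1.0):
        """Normalized indifference curve from (0,0) to (1,1).

        shape < 1.0 gives a concave curve, shape > 1.0 a convex one,
        and shape == 1.0 the linear case.  Returns the performance
        fraction the user expects when the system runs at
        'power_fraction' of its maximum power."""
        return power_fraction ** shape

    def required_performance_abs(power_watts, max_power_watts,
                                 min_perf, max_perf, shape=1.0):
        """Map an absolute power level to an absolute performance
        target, given the administrator's low point (min_perf) and
        high point (max_perf at max_power_watts)."""
        frac = min(max(power_watts / max_power_watts, 0.0), 1.0)
        return min_perf + (max_perf - min_perf) * \
            required_performance(frac, shape)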
> In order to make such a tradeoff, the algorithm would compute the 
> expected power level following such a decision, plug it into the 
> utility curve to obtain the required performance at that power level, 
> and then actually make the "compacting" decision only if the predicted 
> performance is at least as high as what the "indifference" curve 
> requires. The big question is: how can the system's performance be 
> predicted from a particular thread-to-CPU allocation? That's where 
> reinforcement learning (RL) comes in, as it allows one to use 
> performance feedback to *learn* the mapping from currently observed 
> variables (number of free CPUs, average utilization of occupied CPUs, 
> etc.) to expected future performance. 
Right. I agree that we'll need a way of observing application 
performance, and that's something we'll certainly be exploring as we 
move forward.
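
As a sketch of how that feedback loop might look (again illustrative 
Python, building on the required_performance() sketch above; the 
feature set, the linear model, and the learning rule are assumptions 
rather than a description of the actual algorithm), a simple estimator 
could be trained on observed performance and then consulted when 
evaluating a compaction decision:

    # Illustrative sketch only.  The features, learning rate, and
    # linear model are assumptions, not the algorithm under discussion.
    class PerfEstimator:
        """Learns a mapping from observed state (number of free CPUs,
        average utilization of occupied CPUs, ...) to expected
        performance, using performance feedback."""

        def __init__(self, n_features, lr=0.05):
            self.weights = [0.0] * n_features
            self.lr = lr

        def predict(self, features):
            return sum(w * f for w, f in zip(self.weights, features))

        def update(self, features, observed_perf):
            # Nudge the prediction toward the performance actually
            # observed (a simple stochastic-gradient-style correction).
            error = observed_perf - self.predict(features)
            self.weights = [w + self.lr * error * f
                            for w, f in zip(self.weights, features)]

    def should_compact(estimator, features_after, power_fraction_after,
                       shape=1.0):
        """Compact the workload onto fewer CPUs only if the predicted
        performance at the new (lower) power level still meets what the
        indifference curve requires."""
        predicted = estimator.predict(features_after)
        required = required_performance(power_fraction_after, shape)
        return predicted >= required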

-Eric
