On Fri, Jun 02, 2017 at 11:20:20AM -0700, Rohit Jain wrote:
> 
> On 06/01/2017 05:37 AM, Peter Zijlstra wrote:
> > On Thu, Jun 01, 2017 at 02:28:27PM +0200, Peter Zijlstra wrote:
> > > On Wed, May 31, 2017 at 03:19:46PM -0700, Rohit Jain wrote:
> > > > 
> > > > > 2) This scaled capacity is normalized and mapped into buckets.
> > 
> > > Why?
> > And its not at all clear why you'd need
> > that to begin with.
> 
> Here is the problem I am trying to solve:
> 
> The benchmark(s) have a high degree of variance when run multiple
> times.
> 
> We believe it is because of the scheduler not being aware of the scaled
> down capacity of the CPUs because of IRQ/RT activity.
> 
> This patch helps in solving the above problem. Do you have any thoughts
> on solving this problem in any other way?

Why does determining if a CPU's capacity is scaled down need to involve global data? AFAICT it's a purely CPU-local affair.
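
As a sketch only (not something from your patch set): kernel/sched/fair.c
already keeps both cpu_capacity_orig and the IRQ/RT-scaled cpu_capacity on
each runqueue, updated by update_cpu_capacity(), so a purely local check
could look like the below. The helper name and the ~3% threshold here are
made up for illustration.

/*
 * Illustrative only: decide whether this CPU is currently running at
 * reduced capacity, using nothing but the CPU's own runqueue state.
 * capacity_of()/capacity_orig_of() are the existing per-rq accessors
 * in kernel/sched/fair.c; cpu_capacity_reduced() and the threshold
 * are hypothetical.
 */
static inline bool cpu_capacity_reduced(int cpu)
{
	/* treat roughly >=3% capacity lost to IRQ/RT time as "reduced" */
	return capacity_of(cpu) <
	       capacity_orig_of(cpu) - (capacity_orig_of(cpu) >> 5);
}

Anything that wants to know whether a given CPU has lost capacity can do a
comparison like that against that CPU's own runqueue; no shared or global
bookkeeping is needed to answer the question.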

