On Mon, Jan 11, 2021 at 03:36:57PM +0100, Vincent Guittot wrote:
> > > <SNIP>
> > >
> > > I think
> > > that we should decay it periodically to reflect that there is less
> > > and less idle time (in fact none) on this busy CPU that never goes
> > > idle. If a CPU was idle for a long period but then a long-running
> > > task starts, avg_idle will stay stuck at the large value, which
> > > becomes less and less relevant.
> >
> > While I get what you're saying, it does not help estimate how idle
> > the domain as a whole is.
> 
> No, but it gives a more up-to-date view of the idleness of the local
> CPU, which is better than a stale value.
> 

Fair enough.
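
Incidentally, the staleness is easy to see from how rq->avg_idle is
maintained: it is an EWMA that is only updated on wakeup from idle, so a
CPU that stops idling simply stops updating it. A minimal standalone
sketch (the 1/8 weight mirrors update_avg() in kernel/sched/core.c; the
driver code and values are invented for illustration):

#include <stdio.h>
#include <stdint.h>

/* EWMA with weight 1/8, mirroring update_avg() in kernel/sched/core.c */
static void update_avg(uint64_t *avg, uint64_t sample)
{
	int64_t diff = (int64_t)sample - (int64_t)*avg;
	*avg += diff / 8;
}

int main(void)
{
	uint64_t avg_idle = 0;
	int i;

	/* a phase of long idle periods pushes avg_idle towards 1ms */
	for (i = 0; i < 32; i++)
		update_avg(&avg_idle, 1000000);
	printf("after idle phase: avg_idle = %llu ns\n",
	       (unsigned long long)avg_idle);

	/*
	 * A long-running task now pins the CPU. The update only runs
	 * on wakeup from idle, so nothing decays avg_idle: it stays
	 * frozen at the value above for as long as the CPU is busy.
	 */
	return 0;
}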

> >
> > > Conversely, a CPU running a task with short run/idle periods will
> > > have a lower avg_idle even though it is idle more often.
> > >
> > > Another thing that worries me is that we use the avg_idle of the
> > > local CPU (which is obviously not idle, otherwise it would have
> > > been selected) to decide how much time we should spend looking for
> > > another idle CPU. I'm not sure that's the right metric to use,
> > > especially with a possibly stale value.
> > >
> >
> > A better estimate requires heavy writes to sd_llc. The cost of that
> > will likely offset any benefit gained from a better choice of scan
> > depth.
> >
> > Treating a successful scan cost and a failed scan cost as being equal has
> > too many corner cases. If we do not want to weight the successful scan
> > cost, then the compromise is to keep the old behaviour that accounts for
> 
> I think that keeping the current way of accounting scan_cost is the
> best option for now.
> 

I sent a series that drops this patch for the moment, as well as the
SIS_PROP check when selecting a core.
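
For anyone following along, the heuristic in question is the SIS_PROP
depth calculation in select_idle_cpu() in kernel/sched/fair.c, which
scales the number of CPUs scanned by avg_idle over avg_scan_cost. A
standalone rework of that calculation (constants as in mainline at the
time of this thread; the wrapper function and sample numbers are
invented) shows how a stale avg_idle inflates the scan depth:

#include <stdio.h>
#include <stdint.h>

/*
 * Rework of the SIS_PROP depth calculation from select_idle_cpu():
 * scan depth is proportional to expected idle time per unit of
 * historical scan cost, with a floor of 4 CPUs.
 */
static unsigned int sis_prop_nr(uint64_t rq_avg_idle,
				uint64_t sd_avg_scan_cost,
				unsigned int span_weight)
{
	uint64_t avg_idle = rq_avg_idle / 512;
	uint64_t avg_cost = sd_avg_scan_cost + 1;
	uint64_t span_avg = span_weight * avg_idle;

	if (span_avg > 4 * avg_cost)
		return (unsigned int)(span_avg / avg_cost);
	return 4;
}

int main(void)
{
	/* fresh avg_idle on a busy LLC: scan hits the floor of 4 */
	printf("nr = %u\n", sis_prop_nr(5000, 600, 64));
	/* stale avg_idle left over from a long-idle past: deep scan */
	printf("nr = %u\n", sis_prop_nr(2000000, 600, 64));
	return 0;
}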

-- 
Mel Gorman
SUSE Labs
