On Wed, Jan 20, 2021 at 02:00:18PM +0530, Gautham R Shenoy wrote:
> > @@ -6157,18 +6169,31 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, int t
> >     }
> > 
> >     for_each_cpu_wrap(cpu, cpus, target) {
> > -           if (!--nr)
> > -                   return -1;
> > -           if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
> > -                   break;
> > +           if (smt) {
> > +                   i = select_idle_core(p, cpu, cpus, &idle_cpu);
> > +                   if ((unsigned int)i < nr_cpumask_bits)
> > +                           return i;
> > +
> > +           } else {
> > +                   if (!--nr)
> > +                           return -1;
> > +                   i = __select_idle_cpu(cpu);
> > +                   if ((unsigned int)i < nr_cpumask_bits) {
> > +                           idle_cpu = i;
> > +                           break;
> > +                   }
> > +           }
> >     }
> > 
> > -   if (sched_feat(SIS_PROP)) {
> > +   if (smt)
> > +           set_idle_cores(this, false);
> 
> Shouldn't we call set_idle_cores(this, false) only if this was the
> last idle core in the LLC?
> 

That would require rechecking the cpumask bits that have not yet been
scanned to see whether any of them form an idle core. As the existence
of idle cores can change very rapidly, that recheck is not worth the
cost.

-- 
Mel Gorman
SUSE Labs
