On Thu, Jul 25, 2019 at 10:32:49PM +0800, Aaron Lu wrote:
> +bool cfs_prio_less(struct task_struct *a, struct task_struct *b)
> +{
> +     struct sched_entity *sea = &a->se;
> +     struct sched_entity *seb = &b->se;
> +     bool samecpu = task_cpu(a) == task_cpu(b);
> +     struct task_struct *p;
> +     s64 delta;
> +
> +     if (samecpu) {
> +             /* vruntime is per cfs_rq */
> +             while (!is_same_group(sea, seb)) {
> +                     int sea_depth = sea->depth;
> +                     int seb_depth = seb->depth;
> +
> +                     if (sea_depth >= seb_depth)
> +                             sea = parent_entity(sea);
> +                     if (sea_depth <= seb_depth)
> +                             seb = parent_entity(seb);
> +             }
> +
> +             delta = (s64)(sea->vruntime - seb->vruntime);
> +             goto out;
> +     }
> +
> +     /* crosscpu: compare root level se's vruntime to decide priority */
> +     while (sea->parent)
> +             sea = sea->parent;
> +     while (seb->parent)
> +             seb = seb->parent;
> +     delta = (s64)(sea->vruntime - seb->vruntime);
> +
> +out:
> +     p = delta > 0 ? b : a;
> +     trace_printk("picked %s/%d %s: %Ld %Ld %Ld\n", p->comm, p->pid,
> +                     samecpu ? "samecpu" : "crosscpu",
> +                     sea->vruntime, seb->vruntime, delta);
> +
> +     return delta > 0;
>  }

Heh.. I suppose the good news is that Rik is trying very hard to kill
the nested runqueues, which would make this _much_ easier again.
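
For reference, a rough sketch of roughly what the comparison could collapse
to once every task's se hangs directly off the root cfs_rq (this is only an
illustration against a hypothetical flat hierarchy, not Rik's actual code;
the min_vruntime normalisation is my assumption for making the cross-CPU
compare meaningful):

	bool cfs_prio_less(struct task_struct *a, struct task_struct *b)
	{
		struct sched_entity *sea = &a->se;
		struct sched_entity *seb = &b->se;
		s64 delta;

		/*
		 * Flat hierarchy: no parent chain to walk, so just
		 * normalise each vruntime against its own cfs_rq's
		 * min_vruntime before comparing across CPUs.
		 */
		delta = (s64)(sea->vruntime - cfs_rq_of(sea)->min_vruntime) -
			(s64)(seb->vruntime - cfs_rq_of(seb)->min_vruntime);

		return delta > 0;
	}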
