On Sun Nov 30 20:27:15 PST 2014, [email protected] wrote:
> That definitely seems incorrect to me. Since rebalance is only called
> on mach0, as it loops through the global run queue, it will skip
> processes that are not on mach0, so I think you are correct. (This was
> fixed on the mqs version of the nix scheduler; every mach calls
> rebalance to take care of their respective run queues.)

yes, that does fix that (yeah!) but i think there are two other issues.
(1) it doesn't recognize that it's not worth rebalancing if there are just
a few processes in the runq, and
(2) it quits before scanning them all.

the second is straightforwardly fixable, while the first ... i just used a
gnarly heuristic.

without the benefit of your gsoc work, or enough thought, i came up
with the following.  although i like gotos just as much as the next guy,
this is a case where they were not helpful.  it's still not clear to me why
p->mp != m makes any difference.  (er, MACHP(m->machno) on 386).

static void
rebalance(void)
{
        Mpl pl;
        int pri, npri;
        Tick t;
        Sched *sch;
        Schedq *rq;
        Proc *p, *next;

        sch = m->sch;
        t = m->ticks;
        if(t - sch->balancetime < HZ/2)
                return;
        sch->balancetime = t;
        if(sch->nrdy < sch->nmach/2)            /* suspect heuristic */
                return;

        for(pri=0, rq=sch->runq; pri<Npriq; pri++, rq++){
                for(p = rq->head; p != nil; p = next){
                        next = p->rnext;
                        if(p->mp != m)
                                continue;
                        if(pri == p->basepri)
                                continue;
                        updatecpu(p);
                        npri = reprioritize(p);
                        if(npri != pri){
                                pl = splhi();
                                p = dequeueproc(sch, rq, p);
                                if(p != nil)
                                        queueproc(sch, &sch->runq[npri], p);
                                splx(pl);
                        }
                }
        }
}

- erik
