Thomas De Schampheleire wrote:
> (in the non-NUMA case):
> the situation you describe suggests that the 'next' cpu changes. I
> thought that the cpu lists would stay the same, as long as no cpus are
> offlined?
The "next" CPU stays the same. But "next" is different from each CPU's
perspective.
Perhaps. One detail that may be important here is that the balancing is
performed between the per-priority dispatch queues, not the CPUs' total run
queues. The idea is that if you have some threads at priority 0, some at
priority 10, and some at priority 20, you'll end up with a fairly even
distribution of priority 0, 10, and 20 threads waiting on each CPU.
Otherwise, all the low-priority threads could pile up and run on one CPU
while higher-priority threads sit in another CPU's queue. The code tries to
achieve a "parfait" sort of effect. :)
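To make the "parfait" idea concrete, here is a minimal sketch (in Python,
purely illustrative, not the actual dispatcher code) of balancing threads
across CPUs within each priority level separately, so every CPU ends up
with a similar layered mix rather than just a similar total count. The
function name and data layout are my own invention for the example.

```python
from collections import defaultdict

def balance_per_priority(threads, ncpus):
    """Distribute (tid, prio) pairs across ncpus, round-robining
    WITHIN each priority level so each CPU gets a similar mix of
    high- and low-priority threads (the 'parfait' layering)."""
    by_prio = defaultdict(list)
    for tid, prio in threads:
        by_prio[prio].append(tid)
    queues = [[] for _ in range(ncpus)]
    # Spread each priority band evenly over the CPUs.
    for prio in sorted(by_prio, reverse=True):
        for i, tid in enumerate(by_prio[prio]):
            queues[i % ncpus].append((tid, prio))
    return queues

# Six threads at each of priorities 0, 10, and 20, over 3 CPUs:
threads = [(f"t{p}_{n}", p) for p in (0, 10, 20) for n in range(6)]
for cpu, q in enumerate(balance_per_priority(threads, 3)):
    print(cpu, q)
```

With this scheme every CPU's queue holds two threads at each priority;
balancing only totals could instead leave all six priority-0 threads
stacked on one CPU.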
> But even in this case, it makes no sense to have one processor
> completely idle, right?
Right. In that particular scenario, idle() stealing plays the stronger role
in load balancing.
Still, it would be good to know the reason for the run queue imbalance you
observed. Putting the workload in the fixed-priority (FX) scheduling class
should help simplify things.
-Eric
_______________________________________________
opensolaris-code mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/opensolaris-code