2007/6/8, Eric Saxe <[EMAIL PROTECTED]>:
Thomas De Schampheleire wrote:
> (in the non-NUMA case):
> the situation you describe suggests that the 'next' cpu changes. I
> thought that the cpu lists would stay the same, as long as no cpus are
> offlined?

The "next" CPU stays the same. But "next" is different from each CPU's
perspective.

I see; I hadn't thought of the wrap-around at the last processor, even
though you mentioned it in the previous post.
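To make the wrap-around concrete, here is a minimal sketch (hypothetical names, not the actual dispatcher code): the CPU list is fixed, but each CPU's "next" neighbor is computed relative to itself, wrapping at the end of the list, so "next" is different from each CPU's perspective.

```python
# Hypothetical sketch: a fixed CPU list where each CPU balances against
# the "next" CPU, wrapping around at the last one. The list never changes,
# but every CPU sees a different "next" neighbor.

NCPUS = 4

def next_cpu(cpu):
    """The neighbor a given CPU balances against (wraps at the end)."""
    return (cpu + 1) % NCPUS

for cpu in range(NCPUS):
    print(cpu, "->", next_cpu(cpu))
# CPU 3's "next" wraps back to CPU 0.
```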


>> Perhaps. So one detail that may be important here is that the balancing
>> is performed between the per-priority queues, not the CPUs' total run
>> queues. The idea is that if you have some threads at priority 0, some at
>> priority 10, and some at 20, you'll have a fairly even distribution of
>> 0, 10, and 20 priority threads waiting for each CPU. Otherwise, all the
>> low priority threads could pile up on, and run on, one CPU, while higher
>> priority threads sit in another CPU's queue. The code tries to achieve a
>> "parfait" sort of effect. :)
>
> But even in this case, it makes no sense to have one processor
> completely idle, right?

Right. idle() stealing plays a stronger role in that particular scenario
for load balancing. Still, it would be good to know what the reason was
for your observed run queue imbalance. If you put the workload in the
fixed priority (FX) scheduling class, that should help simplify things.
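The per-priority "parfait" balancing Eric describes could be sketched roughly like this (hypothetical names and data, not the actual dispatcher code): threads are grouped by priority, and each priority group is spread round-robin across the CPUs, so every CPU ends up with a similar mix of priorities rather than one CPU collecting all the low-priority work.

```python
# Hypothetical sketch of per-priority-queue balancing: spread each
# priority level's threads across the CPUs, so each CPU's queue holds
# an even "parfait" of layers (priorities 0, 10, 20, ...).

from collections import defaultdict

def balance(threads, ncpus):
    """threads: list of (name, priority). Returns one dict of
    priority -> [thread names] per CPU."""
    queues = [defaultdict(list) for _ in range(ncpus)]
    # Group threads by priority level first.
    by_prio = defaultdict(list)
    for name, prio in threads:
        by_prio[prio].append(name)
    # Then round-robin each priority group over the CPUs.
    for prio, group in by_prio.items():
        for i, name in enumerate(group):
            queues[i % ncpus][prio].append(name)
    return queues

# Four threads at each of priorities 0, 10, and 20, over two CPUs:
threads = [(f"t{prio}_{i}", prio) for prio in (0, 10, 20) for i in range(4)]
qs = balance(threads, 2)
# Each CPU gets two threads at every priority level, not all the
# priority-0 work piled on one CPU.
```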

If I start tasks at the command line, won't they get the same priority?
The tasks I am currently using are actually the same program (the tachyon
raytracer), but the files being processed differ, as does the number of
threads (4 or 8 currently).

In case they do not get the same priority, how can I put them in the FX class?
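For reference, putting work into the FX class is normally done with priocntl(1) on Solaris. A sketch, with placeholder names ("./mytask" and pid 1234 are hypothetical), guarded so it is a no-op on systems without the command:

```shell
# priocntl(1) is Solaris-only; guard so this sketch runs anywhere.
# "./mytask" and pid 1234 are placeholders for your own workload.
if command -v priocntl >/dev/null 2>&1; then
    # Launch a new command in the FX class at user priority 10
    # (-m sets the priority limit, -p the priority itself):
    priocntl -e -c FX -m 10 -p 10 ./mytask
    # Or move an already-running process into FX by pid:
    priocntl -s -c FX -p 10 -i pid 1234
else
    echo "priocntl not found (not a Solaris system)"
fi
```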

Thanks, Thomas
_______________________________________________
opensolaris-code mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/opensolaris-code
