Hi Abu,
Like others have said, if you just want the end effect of the change
then psradm is probably the easiest way.
But it sounds like you are experimenting with changing the code for a
proof-of-concept sort of experiment so you can get a feel for the
dispatcher, right?
So yes, disp_lowpri_cpu() is part of the overall puzzle, but there are a
number of different load balancing mechanisms that the dispatcher uses
to select a CPU.
In what we call the "push" path (that is, selecting a CPU for a newly
runnable thread), the first place to start looking is setbackdq(). It
calls cpu_choose(), which only very rarely calls disp_lowpri_cpu() to
walk the partition and select a CPU. Far more commonly, cpu_choose()
simply returns the last CPU upon which the thread ran. Only in the case
where (on a NUMA system) the thread was running outside of its home
locality group (cpu/memory node, a.k.a. lgroup), or where more than
"rechoose_interval" ticks have elapsed since the thread last ran, do we
take the slower disp_lowpri_cpu() code path.
After cpu_choose() you'll note that there's further opportunity for the
dispatcher to select a different CPU, since there may be some CMT-related
load balancing to be done, or balancing across the run queues.
Finally, about midway through setbackdq() we go and enqueue the thread
on the CPU that we've chosen.
If you want to select CPU0 always for your experimental changes, the
easiest way to structure this probably would be to change the:
	if (ncpus == 1) {
	} else if (!bound) {
		...
	} else {
		...
	}
construct to add a case for your change. Perhaps something like:
	if (ncpus == 1) {
		cp = tp->t_cpu;
	} else if (abu_disp) {
		cp = abu_cp;
	} else if (!bound) {
		...
You would need to do that for both setbackdq() and setfrontdq(). Then
there's the "pull" side of things: idle CPUs take runnable threads off
other CPUs' run queues, so you would also need to do something in the
disp_getwork() code path... :)
Now, for your changes, do you want the dispatcher to "prefer" scheduling
threads to run on a single core/CPU, or do you want to constrain it?
Asked another way, do you want threads to wait in the run queue for your
desired CPU rather than running somewhere else...or just make it so that
threads will run on your desired CPU if they can, but will run somewhere
else if that CPU isn't available?
Thanks,
-Eric
Abu Saad wrote:
> Hi,
> I am doing academic research on Process Variations in
> Multi-Core systems. As part of my project I want to
> schedule all my threads to a single core to get a feel
> of developing opensolaris. I am using a Dual Core
> System and have installed opensolaris on a Virtual
> Machine. I tried dispatching all threads to a single
> core but am not getting the desired results.
> The changes I have made to the source code is as
> follows:
>
> 1)In the usr/src/uts/common/disp/disp.c , I have
> replaced the function disp_lowpri_cpu with my own
> function shown below
>
> /*
> * disp_lowpower_cpu - schedule all threads to CPU0
> */
> cpu_t *disp_lowpower_cpu(cpu_t *hint)
> {
> cpu_t *bestcpu;
> cpu_t *cp;
>
>
> ASSERT(hint != NULL);
>
> /*
> * Select CPU0
> */
> bestcpu = hint;
>
> cp = hint;
> do {
> if (cp->cpu_seqid == 0){
> ASSERT((cp->cpu_flags & CPU_QUIESCED) == 0);
> return(cp);
> }
> cp = cp->cpu_next;
> } while (cp != hint);
>
>
> /*
> * Return the best CPU .
> */
> ASSERT((bestcpu->cpu_flags & CPU_QUIESCED) == 0);
> return (bestcpu);
> }
>
> 2) In lines 1307, 1460, 1613 and in the function
> cpu_choose (of disp.c) I have replaced the call from
> disp_lowpri_cpu to my function namely
> disp_lowpower_cpu.
>
> 3) In the file usr/src/uts/common/sys/disp.h I have
> added my function as
> extern struct cpu *disp_lowpower_cpu(struct cpu *);
>
> Now after these changes I ran the command
> nightly -i ./opensolaris.sh
> and finally did a BFU.
>
> I am expecting to see all the threads dispatched to
> cpu0 but I can see a few of them are still running on
> cpu1. I think I have missed out something very simple
> but am not able to figure it out. Can anyone help me
> in dispatching all the threads to a single core so
> that I get some confidence in building my own
> opensolaris?
>
> Thank You,
> Abu Saad Papa
>
>
>
> _______________________________________________
> tesla-dev mailing list
> tesla-dev at opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/tesla-dev
>