How does Solaris load up its tasks and know when to say "stop, no more please"?
I am curious about this and figured I'd toss it out here. I could spend a
lifetime digging into "The Magic Garden Explained" or "Solaris Internals",
as well as the man pages, but the only person who would benefit is me.
If I drag the question out here, then maybe someone else can benefit also.
To see what's on my mind, we need this brief segment of the vmstat
man page:
vmstat(1M) says that the first few fields reported by vmstat are:

     kthr      Report the number of kernel threads in each of
               the three following states:

               r    the number of kernel threads in run queue

               b    the number of blocked kernel threads that
                    are waiting for resources I/O, paging, and
                    so forth

               w    the number of swapped out lightweight
                    processes (LWPs) that are waiting for
                    processing resources to finish.
I have this:
# uname -a
SunOS phobos 5.11 snv_49 i86pc i386 i86pc
And I write a script that has hundreds of files to unzip. Each file is
variable in length, and I unzip each one thus:
priocntl -e -c FX -m 0 -p 0 unzip -q foo.zip &
So there is a script with hundreds of lines like that, one for each of
hundreds of little files called foo_N.zip.
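(For what it's worth, the hundreds of literal lines could also be written
as a loop; this is just a sketch of the same idea, assuming the foo_N.zip
files sit in the current directory and using the same priocntl invocation
as above:)

```shell
#!/bin/sh
# Sketch of un.sh as a loop rather than hundreds of literal lines.
# Assumes all the foo_N.zip files are in the current directory.
for f in foo_*.zip; do
    # same invocation as above: FX class, user priority 0, upper limit 0
    priocntl -e -c FX -m 0 -p 0 unzip -q "$f" &
done
wait    # don't exit until every background unzip has finished
```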
I fire off that script, and toss it into the background with
./un.sh &
Then I watch vmstat pile up the threads in the run queue:
# vmstat 5
kthr memory page disk faults cpu
r b w swap free re mf pi po fr de sr f0 s2 -- -- in sy cs us sy id
0 0 0 1800088 841272 9 39 57 17 17 0 13 0 11 0 0 603 646 324 3 4 93
8 0 0 1782680 765176 741 2387 0 2548 2548 0 0 0 38 0 0 700 4385 126 51 49 0
5 0 0 1764040 762392 784 2839 6 3285 3285 0 0 0 86 0 0 950 5074 247 45 55 1
29 0 0 1752320 767168 513 65 2 4645 4645 0 0 0 93 0 0 1003 3400 197 59 34 7
26 0 0 1755160 775216 300 6 0 3567 3567 0 0 0 143 0 0 1310 4104 333 46 38 16
34 0 0 1761392 772416 363 3 2 4899 4899 0 0 0 106 0 0 1082 2448 229 60 30 10
29 0 0 1765464 771760 304 2 0 4941 4941 0 0 0 135 0 0 1272 4571 253 47 37 15
9 0 0 1769480 768136 458 1 0 3123 3123 0 0 0 100 0 0 1038 2231 153 71 29 0
27 0 0 1771360 747872 484 1 2 2680 2680 0 0 0 107 0 0 1088 2248 137 68 30 2
6 0 0 1774816 749224 320 2 2 4830 4830 0 0 0 161 0 0 1463 3178 153 51 33 16
12 0 0 1777544 760936 523 1 8 3350 3350 0 0 0 111 0 0 1121 2941 138 65 31 5
6 0 0 1778688 768336 261 1 13 5963 5963 0 0 0 377 0 0 2572 1589 88 30 21 49
5 0 0 1779112 786672 424 0 70 5070 5070 0 0 0 225 0 0 1764 2004 122 49 26 24
You see that? I had a maximum of 34 threads in the run queue.
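(Aside: rather than eyeballing the samples, a small awk filter can track
the peak of that first column; this is just a convenience sketch, assuming
standard awk and the two header lines that vmstat prints:)

```shell
# Tail vmstat and keep a running maximum of the kthr "r" column.
# NR > 2 skips the two header lines ("kthr memory ..." and "r b w ...").
vmstat 5 | awk 'NR > 2 { if ($1 + 0 > max) max = $1 + 0; print "r=" $1, "peak=" max + 0 }'
```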
But I have hundreds of tasks to run, and they are all low-priority
fixed-class (FX) processes.
Why don't I see a number like 200 in that first column? Or, once we hit
some peak like 34, why don't I begin to see "blocked" kernel threads pile
up until resources are available?
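One thing I can spot-check while the script runs (a hypothetical check of
my own, not something from the man pages) is how many of those unzips are
actually alive at any instant, versus the hundreds that were forked:

```shell
# Count unzip processes currently alive. The [u] in the pattern stops
# the grep process itself from matching its own command line.
ps -ef | grep -c '[u]nzip'
```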
In my mind I have this vision of three buckets. The first bucket has four
people sitting around with ladles. They ladle out ( ladle : noun, a
long-handled utensil with a cup-shaped bowl for dipping or conveying liquids
| verb: to use a ladle to convey a liquid ) the water as fast as they can,
but each of them has only one ladle. If the bucket starts to fill up, then
water will spill out a little overflow hole at the top and into the second
bucket. If that second bucket gets full, then water proceeds into the last,
third bucket. If the first bucket gets nearly empty, or one of the ladle
people becomes idle, then water will be taken out of the second bucket and
dumped back into the first bucket.
The last bucket is a real quandary. Water from there will only be addressed
if there is nothing else to do.
This is a terrible way to even begin to explain "priority" of the water in
the buckets or variable classes of priority.
There ... I actually laid that all out. It was what was in my head, and I
was even going to say that a multi-core person could have a ladle in each
hand. Possibly eight hands in the case of an UltraSPARC T1.
back to reality ...
At what point does Solaris push back and say "no more, I'm busy"? I am now
going to go to amazon.com and order a copy of "Solaris Internals".
I really hope that someone can shed some light on this for we unwashed
masses :-)
Dennis
_______________________________________________
opensolaris-discuss mailing list
[email protected]