On Jan 5, 2011, at 10:47, Bob Cronin wrote:

> The arrival rate is always high.
>  
"High" isn't very quantitative.  A high rate to a human being might
be a low rate to a computer.

Are all the workers in the same LPAR?  How many CPUs does that
LPAR have?  At what number of workers (per LPAR) do you reach
a point of diminishing returns, where paging overhead outweighs
the value of concurrent processing?  If all the workers are
busy 100% of the time, the arrival rate is greater than the
service rate and the queue(s) will grow without bounds.  Many
such questions should be considered ahead of whatever esthetic
value lies in randomly distributing the workload.
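
As a rough illustration of that last point, here is a back-of-the-envelope
Python sketch.  Every number in it is a hypothetical placeholder, not a
measurement from your system:

arrival_rate = 120.0   # jobs arriving per minute (hypothetical)
service_rate = 10.0    # jobs one worker completes per minute (hypothetical)
workers = 10           # concurrent workers in the LPAR (hypothetical)

utilization = arrival_rate / (workers * service_rate)
if utilization >= 1.0:
    print(f"utilization {utilization:.2f}: arrivals exceed capacity; queue grows without bound")
else:
    print(f"utilization {utilization:.2f}: queue length stays bounded on average")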

Of course, if each of your workers competes 1-for-1 with workloads
of other departments, you can get a bigger share by assigning
more workers.  And they can retaliate by assigning more servers
to their workloads.  This is known as "The Tragedy of the Commons".

> On Wed, Jan 5, 2011 at 12:07 PM, Mark Wheeler <mwheele...@hotmail.com> wrote:
>> 
>> If you have "enough" workers defined, then much of the time there will be
>> multiple workers with NO spool files. By randomly distributing the load (or
>> round-robining), you keep all the workers "active" from a VM perspective. If
>> the arrival rate is high enough, all the workers' working sets would stay in
>> storage (which could be substantial because you indicated this is a very
>> large application). If the arrival rate is low, the workers could experience
>> a lot of thrashing as they continually get paged out and then back in when
>> new work arrives. Better IMO to use some other algorithm (alphabetical
>> sort?) to let as many workers as possible stay idle (and eventually paged
>> out).
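
To make Mark's contrast concrete, here is a small Python sketch of the two
dispatch policies.  The worker names are hypothetical, and it assumes the
application keeps some record of which workers are currently idle:

import random

# Hypothetical worker IDs and a set tracking which are currently idle.
workers = [f"WORKER{i:02d}" for i in range(1, 21)]
idle = set(workers)

def dispatch_random():
    """Random distribution: over time every worker gets work, so every
    working set tends to stay resident or keep getting paged back in."""
    w = random.choice(sorted(idle))
    idle.remove(w)
    return w

def dispatch_first_idle():
    """Fixed-order selection (the 'alphabetical sort' idea): work piles
    onto the low end of the list, so the rest stay idle and can page out."""
    w = min(idle)
    idle.remove(w)
    return w

# When a worker finishes its spool file, put it back with: idle.add(w)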

-- gil
