Think of my love of using the random approach this way ... let's say all the
workers are idle ALL the time. If I did not pick one at random then the
first one on my list would always get ALL the work. I realize it doesn't
make a difference as long as the work gets done, but for me personally, I
would rather see all the servers doing approximately the same amount of
work. I know, its insane, but I am funny that way, and I am not going to get
over my personal peccadilloes this late in life, so humor me, willya? ;-)

Now I certainly appreciate all the elegant solutions that would work so
wonderfully and optimally were I starting from scratch, but I am not. The
workers I am distributing work to are standard Rexx WAKEUP-driven do-loops
with thousands and thousands of lines of complicated code that I am simply
NOT going to touch. Ditto the server that's distributing the work. All I want
to do is add a small bit of logic to the distributor to choose the workers
with the smallest reader queues and then, if there is more than one such
worker, to pick one of those at random (no matter how silly you all think
that is).
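
For anyone curious what that small bit of logic might look like, here is a
rough sketch in plain Rexx. The stem names are invented for illustration;
this is not the distributor's actual code:

  /* depth.0 = number of workers; depth.i = reader-queue depth of  */
  /* worker i.  Scan for the smallest depth, keep every tie, then  */
  /* let RANDOM() break the tie so no worker is always favored.    */
  min = depth.1
  ties.0 = 1
  ties.1 = 1
  do i = 2 to depth.0
    select
      when depth.i < min then do      /* new minimum: restart list  */
        min = depth.i
        ties.0 = 1
        ties.1 = i
      end
      when depth.i = min then do      /* tied with current minimum  */
        n = ties.0 + 1
        ties.n = i
        ties.0 = n
      end
      otherwise nop
    end
  end
  pick = random(1, ties.0)            /* uniform pick among ties    */
  worker = ties.pick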

I believe I can do that with the idea of extending the sort key using the
random stage, and I plan to try that; a rough approximation is sketched
below. SO, thanks all. It has been fascinating.
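
(A sketch only, assuming CMS Pipelines is available and substituting the
RANDOM() built-in for the random stage; the stem names are again invented:)

  /* Zero-pad the depth so it sorts numerically, then append a     */
  /* random suffix so workers tied on depth sort in random order.  */
  do i = 1 to depth.0
    key.i = right(depth.i, 5, '0') || right(random(0, 99999), 5, '0') i
  end
  key.0 = depth.0
  'PIPE stem key. | sort | take 1 | var choice'
  parse var choice . worker           /* worker number is last word */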
--
bc

On Wed, Jan 5, 2011 at 8:25 AM, Glenn Knickerbocker <n...@bestweb.net> wrote:

> On Tue, 4 Jan 2011 23:32:59 -0700, gil wrote:
> >Better to implement a true single-queue multi-server protocol.
>
> Given that he's stuck with the limitations of using spool files, the one
> advantage of transferring them to the workers immediately is that it
> provides extra queueing space.  Instead of being limited to 10k files at
> peak times, he can accommodate a backlog of 10k per worker (plus 10k for
> the stalled queue manager while it waits to be able to transfer more to
> the workers).
>
> ¬R                  Blather, Rinse, Repeat.
> http://users.bestweb.net/~notr/telecom.html
>