Luke,

Thanks for your suggestion.  I did use that design originally in my
project.  There are two main drawbacks: both the workers and the
client need prior knowledge of which machine hosts the queue device,
and that machine becomes a single point of failure.

The queue per machine design has some advantages too.  One can mix
machines in the cloud with local machines, which would be hard to do
if there were only one queue device.  The client just needs a list of
all the workers it will use.  Typically, I use my own workstation, a
few of our other local servers, and EC2 if it's a big job.
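
To make that concrete, the per-machine layout is roughly the
following (a pyzmq sketch just for brevity; the hostnames, port, and
ipc path are placeholders, and each block runs as its own process):

  import zmq

  ctx = zmq.Context()

  # --- on each server: the simple queue device ---
  frontend = ctx.socket(zmq.ROUTER)    # clients connect here over tcp
  frontend.bind("tcp://*:5555")
  backend = ctx.socket(zmq.DEALER)     # local workers connect over ipc
  backend.bind("ipc:///tmp/workers")
  zmq.device(zmq.QUEUE, frontend, backend)

  # --- each worker process on that server ---
  worker = ctx.socket(zmq.REP)
  worker.connect("ipc:///tmp/workers")
  job = worker.recv()
  worker.send(b"result")

  # --- the client: one DEALER connected to every machine in the list ---
  client = ctx.socket(zmq.DEALER)
  for endpoint in ["tcp://workstation:5555",
                   "tcp://server1:5555",
                   "tcp://ec2-node:5555"]:
      client.connect(endpoint)
  # the leading empty frame stands in for the REQ envelope so the REP
  # workers parse the request correctly; the DEALER round-robins jobs
  # across the connected machines
  client.send_multipart([b"", b"job 1"])
  reply = client.recv_multipart()      # [b"", b"result"]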

The big problem I'm trying to deal with is uneven usage, which I
thought HWM=1 would fix, but a ROUTER socket drops messages once it
hits the HWM.  So clearly that's not going to work.  It's a shame;
the simple queue device would have been a really elegant solution.
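
For reference, the HWM idea was along these lines (again just a
sketch; zmq.HWM is the 2.x option name, which later releases split
into zmq.SNDHWM and zmq.RCVHWM):

  import zmq

  ctx = zmq.Context()
  # cap the queue at one outstanding job per connection, hoping the
  # fan-out stays even
  frontend = ctx.socket(zmq.ROUTER)
  frontend.setsockopt(zmq.HWM, 1)
  frontend.bind("tcp://*:5555")
  # per zmq_socket(3), a ROUTER whose peer has reached its high-water
  # mark drops further messages (DEALER and PUSH block instead), so
  # the cap loses jobs rather than holding them for an idle worker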

-Whit


On Wed, Dec 14, 2011 at 4:06 PM, Lucas Hope <[email protected]> wrote:
> On Thu, Dec 15, 2011 at 1:58 AM, Whit Armstrong <[email protected]>
> wrote:
>>
>> Well, let me explain what I'm trying to do.  Perhaps someone can show
>> me a better way.
>>
>> I have a client using a DEALER socket, talking to a mixed server
>> environment: a couple of 6-core machines and a 12-core machine.
>>
>> Each of the servers uses a simple queue device to fan out the jobs to
>> the workers over ipc:
>
>
> Why do you have a queue device per server? Is it feasible to just have one
> queue device and connect the workers on each machine to that device over
> TCP? I've had success with that model, using heartbeating between the
> workers and the queue.
>
> -Luke
>
_______________________________________________
zeromq-dev mailing list
[email protected]
http://lists.zeromq.org/mailman/listinfo/zeromq-dev
