On Tue, 2017-12-05 at 09:03 +0100, Gyorgy Szekely wrote:
> Hi ZeroMQ community,
> In our application we use ZeroMQ for communication between backend
> services and it works quite well (thanks for the awesome library). Up
> to now we relied on the request/reply pattern only (a majordomo
> derivative protocol), where a broker distributes tasks among workers.
> Everything runs in its own container, and scaling works like a charm:
> if workers can't keep up with the load, we can simply start some more
> and the protocol handles the rest. So far so good.
> 
> Now, we would like to use pub/sub: a component produces some data and
> publishes an event about it. It doesn't care (and potentially can't
> even know) who needs it; interested peers subscribe to the topic. What
> I'm puzzled by is scaling. If a subscriber can't keep up with the load,
> I would like to scale it up just like the workers, but in this case the
> events aren't distributed: all instances receive the same set, which
> increases CPU load but not throughput.
> 
> I would like a pub/sub where the load is distributed among identical
> instances. ZeroMQ has all kinds of fancy patterns (pirates and stuff);
> is there something for this problem?
> 
> What I had in mind is equipping subscribers with a "groupId", which is
> the same within a scaling group. Subscribers send their IDs to the
> broker on connection, and the broker publishes each topic to only one
> subscriber in each group. This means I can't use pub/sub sockets and
> have to reimplement the behaviour on ROUTER/DEALER, but that's OK.
> 
> What do you think? Is there a better way?
> 
> Regards,
>   Gyorgy

Sounds like what you want is similar to push/pull: load balancing is
embedded in that pattern. Have a look at the docs.
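
For example, you could put a small forwarder in front of each scaling
group: it subscribes to the topic once and re-emits every event on a
PUSH socket, and the instances in the group connect with PULL. PUSH
round-robins messages among connected peers, so each event reaches
exactly one instance per group, which is the behaviour you described.
Rough, untested sketch against the libzmq C API (the endpoints, the
topic and the one-forwarder-per-group layout are just assumptions for
illustration, not something that ships with ZeroMQ):

    /* Per-group forwarder: bridges PUB/SUB (fan-out across groups) to
     * PUSH/PULL (load balancing inside one group). Untested sketch. */
    #include <zmq.h>

    int main (void)
    {
        void *ctx = zmq_ctx_new ();

        /* One subscription for the whole group */
        void *sub = zmq_socket (ctx, ZMQ_SUB);
        zmq_connect (sub, "tcp://publisher:5556");     /* assumed endpoint */
        zmq_setsockopt (sub, ZMQ_SUBSCRIBE, "my-topic", 8);

        /* Workers of this group connect a PULL socket here */
        void *push = zmq_socket (ctx, ZMQ_PUSH);
        zmq_bind (push, "tcp://*:5557");               /* assumed endpoint */

        while (1) {
            zmq_msg_t msg;
            zmq_msg_init (&msg);
            if (zmq_msg_recv (&msg, sub, 0) == -1) {
                zmq_msg_close (&msg);
                break;              /* interrupted / context terminated */
            }
            int more = zmq_msg_more (&msg);
            /* PUSH delivers the whole (possibly multipart) message to
             * exactly one connected worker, round-robin */
            zmq_msg_send (&msg, push, more ? ZMQ_SNDMORE : 0);
        }

        zmq_close (push);
        zmq_close (sub);
        zmq_ctx_term (ctx);
        return 0;
    }

Each worker then just connects a PULL socket to its group's forwarder
and reads events in a loop; adding workers to the group raises
throughput the same way it does with your MDP workers, while other
groups still receive their own full copy of the stream.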

-- 
Kind regards,
Luca Boccassi
