Hello Marko,

The difference is that the publisher has to know beforehand which queues to
publish to. With the AMQ model you get a more decoupled design: the publisher
only knows the Exchange, and consumers register themselves without the
producer knowing about them. So, say you want a new service that consumes
existing events: with the AMQ abstraction you don’t have to change the
producer at all, you just bind a new consumer queue to the exchange. Without
this decoupling, you’d have to modify the producer to publish to the new queue
(or, at least, keep the list of target queues in a configuration file and add
the new one to it, assuming you designed the application with expandability in
mind).
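
For illustration, here’s a rough sketch with the RabbitMQ Java client of what
that decoupling looks like in code (the exchange name "events", the routing
keys and the queue setup are placeholders I made up, not anything from our
actual system):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class DynamicBindingSketch {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {

            channel.exchangeDeclare("events", "topic");

            // A new consumer declares its own (server-named) queue and binds it
            // to the existing exchange; nothing on the producer side changes.
            String queue = channel.queueDeclare().getQueue();
            channel.queueBind(queue, "events", "order.*");
            channel.basicConsume(queue, true,
                    (tag, delivery) -> System.out.println(
                            new String(delivery.getBody(), StandardCharsets.UTF_8)),
                    tag -> { });

            // The producer only knows the exchange and a routing key, no queues.
            channel.basicPublish("events", "order.created", null,
                    "payload".getBytes(StandardCharsets.UTF_8));

            Thread.sleep(1000); // give the broker a moment to deliver before closing
        }
    }
}

The producer block never changes when a new consumer shows up; the consumer
simply declares its own queue and binds it with whatever rules it needs.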

For small applications that is not a big hurdle, but when you have several 
applications, written by different teams, with different update schedules and 
roadmaps, it’s good to have this decoupling.

But as Ali said, the way to achieve that kind of decoupling with Kafka would be
to have a separate service doing the mapping, roughly along the lines of the
sketch below.
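
A minimal sketch of such a mapper with the Kafka Java client (the topic names,
the key-based routing rule and the group id are assumptions for illustration;
it uses String keys/values and the 0.10-era poll(long) API):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MappingService {
    public static void main(String[] args) {
        Properties cProps = new Properties();
        cProps.put("bootstrap.servers", "localhost:9092");
        cProps.put("group.id", "mapping-service");
        cProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        cProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties pProps = new Properties();
        pProps.put("bootstrap.servers", "localhost:9092");
        pProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        pProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(cProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(pProps)) {

            // Everything is published to a single "firehose" topic...
            consumer.subscribe(Collections.singletonList("everything"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    // ...and this is where the binding rules live: pick the
                    // target topic per message (here: a trivial key-prefix rule).
                    String key = record.key() == null ? "" : record.key();
                    String target = key.startsWith("order.") ? "orders" : "other-events";
                    producer.send(new ProducerRecord<>(target, record.key(), record.value()));
                }
            }
        }
    }
}

The routing rule in the middle is where the binding rules from the diagram in
my first mail would live, e.g. loaded from a config store that new clients can
update when they register.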

I only asked because someone else might already have built such a mapper and
open-sourced it, so I wouldn’t have to reinvent the wheel.

I hope the scenario is clearer now.


> Am 15.09.2016 um 19:49 schrieb Marko Bonaći <marko.bon...@sematext.com>:
> 
> 1. You can create N topics
> 2. You control from producer where each message goes
> 3. You have consumer that fetches from M different topics:
> https://kafka.apache.org/0100/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#subscribe(java.util.Collection)
> 
> Isn't this architecture flexible enough for any type of use case? What do
> you think cannot be achieved?
> 
> Marko Bonaći
> Monitoring | Alerting | Anomaly Detection | Centralized Log Management
> Solr & Elasticsearch Support
> Sematext <http://sematext.com/> | Contact
> <http://sematext.com/about/contact.html>
> 
> On Thu, Sep 15, 2016 at 11:01 AM, Ali Akhtar <ali.rac...@gmail.com> wrote:
> 
>> It sounds like you can implement the 'mapping service'  component yourself
>> using Kafka.
>> 
>> Have all of your messages go to one kafka topic. Have one consumer group
>> listening to this 'everything goes here' topic. This consumer group acts as
>> your mapping service. It looks at each message, and based on your rules, it
>> sends that message to a different topic for those specific rules.
>> 
>> Then you have your consumers listening to the specific topics that they
>> need to. Your mapping service does the job of redirecting messages from the
>> 'everything' topic to the specific topics based on your rules.
>> 
>> On Thu, Sep 15, 2016 at 1:43 PM, Luiz Cordeiro <
>> luiz.corde...@mobilityhouse.com> wrote:
>> 
>>> Hello,
>>> 
>>> We’re considering migrating an AMQ-based platform to Kafka. However our
>>> application logic needs an AMQ feature called Dynamic Binding, that is, on
>>> AMQ one publishes messages to an Exchange, which can be dynamically
>>> configured to deliver a copy of the message to several queues, based on
>>> binding rules. So when a new client comes alive, it may create its binding
>>> rules to specify a set of topics to listen to, and receive all the messages
>>> from these topics on a private queue.
>>> 
>>> I understand that Kafka neither provides this nor will, as it is not its
>>> objective, but I was wondering if there’s another component, an overlay to
>>> Kafka, that could provide this feature while using Kafka behind the scenes
>>> for the persistence, something like this:
>>> 
>>> Publisher --> Mapping Service --> Kafka <-- Consumers
>>>                     ^                          |
>>>                     |       Binding rules      |
>>>                     \--------------------------/
>>> 
>>> Are you aware of such a component? Otherwise, how would you solve this
>>> issue of publish to 1 place and have it replicated on N topics.
>>> 
>>> Best Regards,
>>> Luiz
>>> 
>> 
