Hi Antony,

1) Within a consumer group, Kafka assigns each partition to at most one 
consumer, which guarantees ordering within a partition.

2) Only one Kafka broker is the leader for a given partition, and clients read 
from and write to that leader.


Based on these two building blocks, you could split your topic into a number of 
partitions so that the load gets distributed across the Kafka brokers (each 
broker leading a subset of the partitions). You could then run multiple 
consumers on different machines, with a single consumer in every group (I am 
assuming you need a lot of consumers, but the amount of data each one consumes 
is not huge, so you don't need multiple consumers per group).
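To make point 1 concrete, here is a small pure-Python sketch of round-robin 
partition assignment inside one consumer group (the function name and the 
round-robin strategy are illustrative; Kafka's actual assignors are 
configurable):

```python
# Illustrative model of partition assignment within one consumer group:
# each partition goes to exactly one consumer, so adding partitions lets
# more consumers (and more broker leaders) share the load.
def assign_partitions(partitions, consumers):
    """Assign each partition to exactly one consumer, round-robin."""
    assignment = {c: [] for c in consumers}
    for i, p in enumerate(sorted(partitions)):
        assignment[consumers[i % len(consumers)]].append(p)
    return assignment

# With 6 partitions and 3 consumers in the group, each consumer owns 2:
assignment = assign_partitions(range(6), ["c0", "c1", "c2"])
print(assignment)  # {'c0': [0, 3], 'c1': [1, 4], 'c2': [2, 5]}
```

Note that no partition appears under two consumers, which is exactly why 
per-partition ordering is preserved within a group.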


Sunil

________________________________
From: Antony Vo <antony.van...@gmail.com>
Sent: Wednesday, February 22, 2017 9:17:13 AM
To: users@kafka.apache.org
Subject: Scaling To Many Kafka Consumers For Particular Topic

Hello,

I am currently using a single publisher to publish to a single topic with a
single partition.  I would like to support many consumers that listen to
the same data and cache the topic's data in-memory.  What would be the best
approach for scaling to potentially hundreds of consumers for this data?
Right now, it seems that if I use a separate consumer group for each cache,
they would all go to the same leader for the partition and potentially
overload that node.

Is the solution to separate out into multiple partitions for the topic so
that each partition's node is serving out a smaller amount of data?  Or
maybe duplicating the data across multiple topics could work?  Is it
potentially the case that Kafka isn't built to support an arbitrary number
of consumers (each consuming the same data) for a given topic?  Suggestions
appreciated!
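For the "one group per cache" variant I mentioned, a minimal sketch of the 
consumer configuration might look like this (pure Python dicts; the broker 
address, key names, and the cache_consumer_config helper are placeholders for 
illustration, not a specific client library's API):

```python
# Hypothetical sketch: each cache instance joins its own consumer group,
# so every instance independently receives the full topic stream.
def cache_consumer_config(instance_name):
    """Build a consumer config whose group id is unique per cache instance."""
    return {
        "bootstrap.servers": "broker:9092",        # placeholder address
        "group.id": f"cache-{instance_name}",      # unique group -> full copy
        "auto.offset.reset": "earliest",           # replay topic on startup
    }

configs = [cache_consumer_config(f"node{i}") for i in range(3)]
print([c["group.id"] for c in configs])
# ['cache-node0', 'cache-node1', 'cache-node2']
```

With a single partition, all of these groups would still fetch from the same 
partition leader, which is the overload concern above.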

Thanks,
Antony
