Hello,

I am currently using a single producer publishing to a single topic with
a single partition.  I would like to support many consumers that all
read the same data and cache the topic's contents in memory.  What
would be the best approach for scaling to potentially hundreds of
consumers of this data?  Right now, it seems that if I use a separate
consumer group for each cache, they would all fetch from the same
leader for the partition and could overload that node.
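
For reference, each cache instance currently does roughly the following
(the broker address, topic name, and String serialization are
placeholders standing in for my real setup):

    import java.time.Duration;
    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class CachingConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            // A unique group.id per cache instance, so every cache sees the
            // full stream instead of splitting partitions with other caches.
            props.put("group.id", "cache-" + UUID.randomUUID());
            // Replay the topic from the beginning to warm the cache.
            props.put("auto.offset.reset", "earliest");
            props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            Map<String, String> cache = new ConcurrentHashMap<>();
            try (KafkaConsumer<String, String> consumer =
                     new KafkaConsumer<>(props)) {
                consumer.subscribe(Collections.singletonList("cache-topic"));
                while (true) {
                    ConsumerRecords<String, String> records =
                        consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        // Keep the latest value per key in memory.
                        cache.put(record.key(), record.value());
                    }
                }
            }
        }
    }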

Is the solution to split the topic into multiple partitions so that
each partition's leader serves a smaller share of the fetch load (see
the sketch below)?  Or could duplicating the data across multiple
topics work?  Or is it simply the case that Kafka isn't built to
support an arbitrary number of consumers (each consuming the same data)
for a given topic?  Suggestions appreciated!
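
For concreteness, here is roughly how I would create that
multi-partition topic with AdminClient; the partition and replication
counts below are arbitrary, the idea being that partition leadership,
and therefore fetch load, would be spread across the brokers:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.NewTopic;

    public class CreatePartitionedTopic {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            try (AdminClient admin = AdminClient.create(props)) {
                // 8 partitions, replication factor 3: leaders for the
                // partitions land on different brokers, so no single node
                // serves all consumer fetches.
                NewTopic topic = new NewTopic("cache-topic", 8, (short) 3);
                admin.createTopics(Collections.singletonList(topic))
                     .all().get();
            }
        }
    }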

Thanks,
Antony
