I have the following Kafka cluster:

- Brokers: 13 (each broker: 14 cores, 36 GB memory)
- Kafka cluster version: 2.0.0
- Kafka Java client version: 2.0.0
- Number of topics: ~15 (replication factor 3, min ISR 2, 8 partitions each)
- Number of consumers: 7K (all independent; all partitions of a topic are manually assigned to one consumer, and each consumer consumes from only one topic)
- CPU utilisation (per broker): ~15%
- Memory utilisation: ~20%
- Bytes out: ~30 MBps
- Message produce rate: ~20K

Messages are very small and the produce rate is very low, which is why there is no load on the brokers. I want to understand what problems can arise in the future from having a large number of consumers (~300 to ~500) consuming a topic independently (with manual partition assignment), for example:

- Will lag build up when a spike comes and Kafka stops behaving properly?
- At present I am not able to open Kafka Manager to check lag, because it times out due to the huge number of consumers per topic.
- Some errors are also being logged, but only at INFO level:

INFO [FetchSessionHandler:440] [Consumer clientId=XXXX#2321d06ae56a#test, groupId=XXXX#2321d06ae56a#test] Error sending fetch request (sessionId=1534027518, epoch=1) to node 4: org.apache.kafka.common.errors.DisconnectException.
17:09:09,161 INFO [FetchSessionHandler:440] [Consumer clientId=XXXX#2321d06ae56a#test, groupId=XXXX#2321d06ae56a#test] Error sending fetch request (sessionId=1999034475, epoch=INITIAL) to node 5: org.apache.kafka.common.errors.DisconnectException.
17:09:09,161 INFO [FetchSessionHandler:440] [Consumer clientId=XXXX#2321d06ae56a#test, groupId=XXXX#2321d06ae56a#test] Error sending fetch request (sessionId=1788862064, epoch=1) to node 9: org.apache.kafka.common.errors.DisconnectException.

On Tue, Oct 22, 2019 at 2:50 PM M. Manna <manme...@gmail.com> wrote:

> Everything has an impact. You cannot keep churning loads of messages under
> the same operating conditions and expect nothing to change.
>
> You have to find out (via load testing) an optimum operating condition (e.g.
> partitions, batch.size, etc.) for your producer/consumer to work correctly.
> Remember that the more topics/partitions you have, the more the complexity.
> Based on how many topics/partitions/consumers/producers you're creating, the
> tuning of the brokers may need to change accordingly.
>
> Thanks,
>
> On Tue, 22 Oct 2019 at 10:09, Hrishikesh Mishra <sd.hri...@gmail.com> wrote:
>
> > I wanted to understand whether the broker will become unstable with a
> > large number of consumers, or whether the consumers will face some issue,
> > such as increasing lag.
> >
> > On Mon, Oct 21, 2019 at 6:55 PM Shyam P <shyamabigd...@gmail.com> wrote:
> >
> > > What are you trying to do here? What's your objective?
> > >
> > > On Sat, Oct 19, 2019 at 8:45 PM Hrishikesh Mishra <sd.hri...@gmail.com>
> > > wrote:
> > >
> > > > Can anyone please help me with this?
> > > >
> > > > On Fri, 18 Oct 2019 at 2:58 PM, Hrishikesh Mishra <sd.hri...@gmail.com>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I wanted to understand the impact of having large numbers of
> > > > > consumers on producer latency and on the brokers. I have around 7K
> > > > > independent consumers. Each consumer consumes all partitions of one
> > > > > topic; I have manually assigned the partitions of a topic to a
> > > > > consumer, not using consumer groups. Each consumer consumes messages
> > > > > from only one topic. The message size is small, less than a KB, and
> > > > > produce throughput is also low, but when a produce spike comes,
> > > > > produce latency increases (here acks=1). Broker resource usage is
> > > > > also very low. I want to understand the impact of having a large
> > > > > number of consumers on Kafka.
> > > > >
> > > > > Cluster details:
> > > > >
> > > > > Brokers: 13 (each broker: 14 cores, 36 GB memory)
> > > > > Kafka cluster version: 2.0.0
> > > > > Kafka Java client version: 2.0.0
> > > > > Number of topics: ~15
> > > > > Number of consumers: 7K (all independent; all partitions of a topic
> > > > > are manually assigned to one consumer, and each consumer consumes
> > > > > from only one topic)
> > > > >
> > > > > Regards,
> > > > > Hrishikesh
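The setup described in the thread — a standalone consumer that takes all partitions of one topic via assign(), with no consumer-group rebalancing — looks roughly like the sketch below. The topic name, bootstrap address, and deserializers are placeholders, not values from the thread.

```java
// Hypothetical sketch of one standalone consumer manually assigned all
// 8 partitions of a topic (assign(), not subscribe(), so no group rebalancing).
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class ManualAssignmentSketch {

    // Build the full partition list for a topic (8 partitions in this cluster).
    static List<TopicPartition> allPartitions(String topic, int partitionCount) {
        List<TopicPartition> tps = new ArrayList<>();
        for (int p = 0; p < partitionCount; p++) {
            tps.add(new TopicPartition(topic, p));
        }
        return tps;
    }

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");      // placeholder
        props.put("group.id", "standalone-consumer-1");      // used only for offset commits
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(allPartitions("my-topic", 8)); // manual assignment
            while (true) {
                ConsumerRecords<byte[], byte[]> records =
                        consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<byte[], byte[]> r : records) {
                    // process r ...
                }
            }
        }
    }
}
```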
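One concrete number behind the "what problems can come" question is connection count: a consumer fetches from the leader of each partition it is assigned, so each consumer holds up to min(partitions, brokers) broker connections. A back-of-envelope estimate with the thread's numbers, assuming leaders are spread evenly (actual counts depend on the real leader placement):

```java
// Rough, hedged estimate of cluster-wide fetch connections from standalone
// consumers. Assumes partition leaders are evenly distributed across brokers.
public class ConnectionEstimate {

    static long totalFetchConnections(int consumers, int partitionsPerTopic, int brokers) {
        // Each consumer connects to at most one leader broker per assigned partition.
        int leadersPerConsumer = Math.min(partitionsPerTopic, brokers);
        return (long) consumers * leadersPerConsumer;
    }

    public static void main(String[] args) {
        // Thread's numbers: 7K consumers, 8 partitions per topic, 13 brokers.
        long total = totalFetchConnections(7_000, 8, 13);  // 56,000 cluster-wide
        System.out.println(total + " total, ~" + (total / 13) + " per broker");
    }
}
```

At roughly 4,300 fetch connections per broker, the brokers are handling far more network sessions than the modest CPU and throughput figures alone would suggest, which is consistent with management tooling (like Kafka Manager) struggling at this scale.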
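On the lag question: per-partition consumer lag is just the log-end offset minus the last committed offset, which is also what the kafka-consumer-groups.sh --describe tool reports; that tool can be an alternative when Kafka Manager times out, assuming the standalone consumers commit offsets under a group.id (the groupId in the log lines above suggests they do). The definition as a minimal sketch:

```java
// Per-partition consumer lag: how far the committed offset trails the
// log-end offset (the next offset the producer will write).
public class LagSketch {

    static long lag(long logEndOffset, long committedOffset) {
        return Math.max(0, logEndOffset - committedOffset);
    }

    public static void main(String[] args) {
        // e.g. producer has written up to offset 1,000,000; consumer committed 999,200
        System.out.println(lag(1_000_000L, 999_200L)); // prints 800
    }
}
```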
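The FetchSessionHandler INFO lines are worth a closer look. A DisconnectException at INFO usually just means a broker closed the connection (for example after the idle timeout) and the consumer will reconnect, which is harmless if occasional. Separately, with ~7K consumers the broker-side incremental fetch session cache (KIP-227, Kafka >= 1.1) is far oversubscribed, causing constant session evictions. Two broker settings to review — the values below are illustrative, not recommendations:

```properties
# Number of incremental fetch sessions the broker will cache.
# Default is 1000 -- far fewer than ~7000 standalone consumers, so sessions
# are evicted constantly and consumers fall back to full fetch requests.
max.incremental.fetch.session.cache.slots=10000

# Idle connections are closed by the broker after this many ms (default 600000).
# Consumers whose connection was closed log DisconnectException and reconnect.
connections.max.idle.ms=600000
```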