Hi Johnny,
If committing offsets puts this much load on the cluster, you might want to
consider committing them elsewhere, such as a key-value store. Or, if you
send the data you read from Kafka to a transactional store, you can write
the offsets there as well.
I hope this helps,
Andras
On Wed, Mar 21,
Hi Andras,
Thanks for that information. Handcrafting the group.id to make sure groups
are spread across the brokers is one approach; I will give that a go.
I understand the benefit of consumer groups; my concern at the moment is the
potential to create a hot spot on one of the brokers...
Thanks,
Johnny Luo
On
Hi Johnny,
As you already mentioned, which broker acts as the group coordinator depends
on the group.id.
You can change the group.id to control which __consumer_offsets partition the
group maps to, and thus which broker manages the group. You can
check which partition a group.id is
Hello,
We are running a 16-node Kafka cluster on AWS; each node is an m4.xlarge
EC2 instance with a 2 TB EBS (st1) volume. The Kafka version is 0.10.1.0,
and we have about 100 topics at the moment. Some busy topics see about 2
billion events every day, while some low-volume topics will only have thousands