We have some simple Kafka Streams apps that populate GlobalKTables to use
as caches for topic contents. When running them with info-level logging
enabled, I noticed unexpected activity around group coordination (joining,
rebalancing, leaving, rejoining), which surprised me since these consumers
need to read from all partitions of a topic rather than share them via the
group load-balancing feature.
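
For context, the topology side is just the stock DSL; here is a minimal
sketch of what these apps do (the topic name and serdes are illustrative,
not our actual config):

    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.GlobalKTable;

    StreamsBuilder builder = new StreamsBuilder();

    // Materialize the full contents of a topic into a local store that the
    // app queries as a cache. The GlobalKTable consumer reads every partition.
    GlobalKTable<String, String> cache = builder.globalTable(
        "config-topic",
        Consumed.with(Serdes.String(), Serdes.String()));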

I tracked this down to the way the consumer config is generated for
a GlobalKTable consumer -- the groupId is set to the Kafka Streams
application id instead of to null -- so the consumer needlessly creates a
ConsumerCoordinator and initiates all of the associated group messaging
and overhead.
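
To show what I mean, here is a rough sketch (not the internal Streams code;
the topic and bootstrap values are made up) of a plain consumer with no
groupId that assigns itself all partitions. Built this way it never creates
a ConsumerCoordinator, so there is no join/sync/heartbeat traffic at all:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import java.util.stream.Collectors;

    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class GlobalTableStyleConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // Deliberately no GROUP_ID_CONFIG: with no group id the consumer never
            // instantiates a ConsumerCoordinator, so no group messaging happens.

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // assign() every partition explicitly instead of subscribe(), since a
                // GlobalKTable-style reader wants all partitions, not a share of them.
                List<TopicPartition> partitions = consumer.partitionsFor("config-topic").stream()
                        .map(pi -> new TopicPartition(pi.topic(), pi.partition()))
                        .collect(Collectors.toList());
                consumer.assign(partitions);
                consumer.seekToBeginning(partitions);

                consumer.poll(Duration.ofMillis(500)).forEach(record ->
                        System.out.println(record.key() + " -> " + record.value()));
            }
        }
    }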

I was going to file a bug for this, but per the contributing page I'm
bringing it up here first. Is there a reason why GlobalKTable consumers
should bear this group-coordination overhead, or should I go ahead and file
a ticket to remove it?

thanks,
Chris
