Hi All,
I am running a single-threaded Kafka consumer on Kubernetes. The application
polls records and accumulates them in memory. A scheduled job writes these
records to S3, and only after that do I commit the offsets back to Kafka.
There are 4 partitions and 4 consumers (4 Kubernetes pods) in the consumer
group.

I have a question about the commit behaviour when a Kafka rebalance happens.
After a rebalance, will the consumers be reassigned partitions and receive
duplicate records?
Do I need to clear my in-memory buffer on a rebalance event?
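To make the pattern concrete, here is a minimal sketch of what I mean (the class and method names are my own, and the S3 write is stubbed out; in the real consumer the revocation hook would be a ConsumerRebalanceListener.onPartitionsRevoked implementation passed to KafkaConsumer.subscribe):

```java
import java.util.ArrayList;
import java.util.List;

// Minimal model of the buffer-then-commit pattern described above.
public class RecordBuffer {
    private final List<String> pending = new ArrayList<>();

    // Called from the poll loop for each consumed record.
    public void add(String record) {
        pending.add(record);
    }

    // Scheduled flush: upload the buffer to S3, then commit offsets.
    // Returns the offset to commit (Kafka commits the NEXT offset to read).
    public long flush(long lastBufferedOffset) {
        // writeToS3(pending);  // stubbed out for this sketch
        pending.clear();
        return lastBufferedOffset + 1;
    }

    // Would be called from onPartitionsRevoked: drop unflushed records.
    // Their offsets were never committed, so the partition's new owner
    // re-reads them (nothing lost), and this consumer never wrote them
    // to S3 (nothing duplicated there).
    public void onPartitionsRevoked() {
        pending.clear();
    }

    public int pendingCount() {
        return pending.size();
    }

    public static void main(String[] args) {
        RecordBuffer buf = new RecordBuffer();
        buf.add("record-0");
        buf.add("record-1");
        buf.onPartitionsRevoked();              // rebalance strikes mid-batch
        System.out.println(buf.pendingCount()); // prints 0
        buf.add("record-0");                    // re-delivered after reassignment
        buf.add("record-1");
        System.out.println(buf.flush(1));       // prints 2: next offset to read
    }
}
```

My assumption in this sketch is that clearing the buffer on revocation is safe precisely because commits happen only after the S3 write, but I would like confirmation of that.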