You should set the offset reset to latest and commit offsets manually using a
rebalance listener. That way, after seek() you should get all the data right.
Also, when you say "uncommitted" offsets, that means you haven't really
processed them, so you should detect such failures and control the offsets
manually.
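To make the pattern concrete, here is a minimal self-contained sketch (plain Python, no real Kafka client; all class and method names below are invented for illustration): the consumer tracks its position per partition and commits it in the "partitions revoked" callback, so the next owner of a partition resumes exactly where processing stopped.

```python
# Illustrative simulation of "commit offsets in a rebalance listener".
# No real Kafka involved; FakeBroker/Consumer are invented for the sketch.

class FakeBroker:
    def __init__(self):
        self.committed = {}          # partition -> committed offset

class Consumer:
    def __init__(self, broker):
        self.broker = broker
        self.positions = {}          # partition -> next offset to process

    def poll_and_process(self, partition, records):
        # Start from the committed offset if one exists.
        start = self.positions.setdefault(
            partition, self.broker.committed.get(partition, 0))
        processed = records[start:]
        self.positions[partition] = start + len(processed)
        return processed

    def on_partitions_revoked(self, partitions):
        # The rebalance listener: commit current positions before the
        # partition is handed to another consumer.
        for p in partitions:
            if p in self.positions:
                self.broker.committed[p] = self.positions.pop(p)

broker = FakeBroker()
log = ["m0", "m1", "m2", "m3", "m4"]

c1 = Consumer(broker)
c1.poll_and_process(0, log[:3])      # processes m0..m2
c1.on_partitions_revoked([0])        # rebalance: commits offset 3

c2 = Consumer(broker)
print(c2.poll_and_process(0, log))   # -> ['m3', 'm4'], no reprocessing
```

Because the commit happens inside the revoke callback, the handoff is clean: the new consumer never re-reads records the old one already processed.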
Thank you. In my case I am receiving messages, doing a small transformation,
and sending them to an output topic.
If I am running 4 consumers against 4 partitions and one of the consumers
dies, will there be duplicate messages sent in this case?
Since when the new consumer comes up, it will again consume from the last
committed offset.
This is actually explained quite nicely by Jason Gustafson in this article:
https://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0-9-consumer-client/
It's technically up to the application to determine whether a message has been
fully processed. If you have a database transaction, you can store the offset
in it along with the result.
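A minimal sketch of that "offsets in the database" idea, using stdlib sqlite3 (the table and column names and the batch helper are invented for illustration; in real code the offset read back from the table is what you would seek() to on startup):

```python
import sqlite3

# Sketch: the processed result and the consumer offset are written in ONE
# transaction, so a crash can never leave them out of sync.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE results (msg_offset INTEGER PRIMARY KEY, value TEXT)")
db.execute("CREATE TABLE offsets (part INTEGER PRIMARY KEY, next_offset INTEGER)")

def process_batch(part, records, start_offset):
    with db:  # either both writes commit, or neither does
        for i, rec in enumerate(records):
            db.execute("INSERT INTO results VALUES (?, ?)",
                       (start_offset + i, rec.upper()))
        db.execute("INSERT OR REPLACE INTO offsets VALUES (?, ?)",
                   (part, start_offset + len(records)))

process_batch(0, ["a", "b"], start_offset=0)

# On restart, read the offset back and seek() the consumer to it.
resume_at = db.execute(
    "SELECT next_offset FROM offsets WHERE part = 0").fetchone()[0]
print(resume_at)   # 2
```

Because the offset lives next to the results, a crash between "process" and "commit" simply rolls both back together, which is what closes the duplicate window described above.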
Hi,
I am running a poll loop for a Kafka consumer, and the app is deployed in
Kubernetes. I am using manual commits. I have a couple of questions on
exception handling in the poll loop:
1) Do I need to handle the consumer rebalance scenario (when any of the
consumer pods dies) by adding a listener, or will the