Hey,
       My team is new to Kafka, and we are using the examples found at:

http://www.confluent.io/blog/tutorial-getting-started-with-the-new-apache-kafka-0.9-consumer-client

We process messages from Kafka and persist them to Mongo.
If Mongo is unavailable, we are wondering how we can re-consume the
messages while we wait for Mongo to come back up.

Right now we commit after the messages for each partition are processed
(following the example).
I have tried a few approaches.

1. Catch the application exception and skip the Kafka commit. However, the
next poll does not re-consume the messages.
2. Allow the consumer to fail and restart it. This works, but it
causes a rebalance.

Should I instead store the offset and partition (in memory) and
attempt to seek back in order to re-consume the messages?
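
To make the question concrete, here is roughly what I have in mind, sketched against the 0.9 consumer API. It assumes a configured KafkaConsumer named consumer with enable.auto.commit=false; saveToMongo and the caught exception type are placeholders for our persistence code, not real library calls:

```java
// Sketch: commit only after a partition's records are safely in Mongo.
// On failure, seek back to the first unprocessed offset and skip the
// commit, so the next poll() redelivers the remaining records.
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(1000);
    for (TopicPartition partition : records.partitions()) {
        List<ConsumerRecord<String, String>> partitionRecords =
            records.records(partition);
        int processed = 0;
        try {
            for (ConsumerRecord<String, String> record : partitionRecords) {
                saveToMongo(record.value()); // placeholder persistence call
                processed++;
            }
            long lastOffset =
                partitionRecords.get(partitionRecords.size() - 1).offset();
            consumer.commitSync(Collections.singletonMap(
                partition, new OffsetAndMetadata(lastOffset + 1)));
        } catch (MongoUnavailableException e) { // placeholder exception type
            // Rewind to the first record that did not reach Mongo;
            // do not commit, so nothing is lost (redelivery is fine).
            consumer.seek(partition, partitionRecords.get(processed).offset());
            // Presumably we should also back off here before polling again.
        }
    }
}
```

Since the records are already buffered client-side, I am not sure whether seek() is the idiomatic way to replay them, or whether there is a better pattern.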

What's the best-practice approach in this kind of situation? My priority is
to never lose a message and to ensure it makes it to Mongo. (Redelivery is
OK.)

Thanks for any help or pointers in the right direction.

Michael
