These failed messages would not be lost, because updated Kafka offsets are never committed to ZooKeeper in a failure scenario. If a spout dies, the failed messages are still in Kafka, so the next spout instance will attempt to process them again.
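To illustrate the idea, here is a minimal Java sketch of at-least-once offset tracking (the class and method names are hypothetical, not actual Storm code): the spout commits only the smallest un-acked offset, so a crash before commit means a restarted spout re-reads and replays everything from that point.

```java
import java.util.TreeSet;

// Hypothetical sketch of the commit logic, not the real PartitionManager.
// "committed" stands in for the offset persisted to ZooKeeper.
class OffsetSketch {
    private final TreeSet<Long> pending = new TreeSet<>(); // emitted, not yet acked
    private long nextEmit;   // next offset to read from Kafka
    private long committed;  // last durably stored offset (restart point)

    OffsetSketch(long start) { nextEmit = start; committed = start; }

    long emit() { long o = nextEmit++; pending.add(o); return o; }

    void ack(long o) { pending.remove(o); }
    // A failed tuple is simply never acked, so its offset stays pending.

    long commit() {
        // Safe to commit only up to the smallest un-acked offset;
        // committing past a failed message would lose it on restart.
        committed = pending.isEmpty() ? nextEmit : pending.first();
        return committed;
    }

    long restartFrom() { return committed; } // where a new spout instance resumes
}

class Demo {
    public static void main(String[] args) {
        OffsetSketch s = new OffsetSketch(0);
        s.emit(); s.emit(); s.emit();            // offsets 0, 1, 2 in flight
        s.ack(0); s.ack(2);                      // offset 1 failed (still pending)
        System.out.println(s.commit());          // cannot commit past the failure
        // If the process dies here, the next spout starts at restartFrom()
        // and replays offset 1 (and re-reads 2, which is why downstream
        // processing should be idempotent).
        System.out.println(s.restartFrom());
    }
}
```

Note that offset 2 was acked but is replayed anyway after a crash; at-least-once delivery trades duplicate processing for durability.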
On Tue, Dec 2, 2014 at 2:19 PM, Sergey Zelvenskiy <[email protected]> wrote:

> Based on what I see in the code, kafka-spout keeps the failed messages in
> memory buffer.
>
> Would not these failed messages be lost in case of process or machine
> failure?
>
> https://github.com/apache/storm/blob/master/external/storm-kafka/src/jvm/storm/kafka/PartitionManager.java#L206-L221
>
> Is there anything done to make it more resilient?
