Hi all,

  We are in a situation where our topology throws a java.io.IOException:
FAILED_TO_UNCOMPRESS(5) when it reaches a given offset. It is no longer
consuming data from any partition.

  We identified the broker / partition / offset and ran a tool to find the
range of offsets that are unfetchable. We also ran the
kafka.tools.DumpLogSegments utility and confirmed that the segment contains
invalid bytes.

Should the KafkaSpout be skipping the faulty offset(s)?
Is there a way to tell it to do so?

Our topology is stuck at that offset. Can we delete the faulty segment,
restart Kafka, and re-activate the topology?

One other option would be to overwrite the consumer offset in Zookeeper and
re-activate the topology, but I hope there is an easier solution.
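For what it's worth, here is a rough sketch of what the Zookeeper overwrite
might look like. The znode path and the JSON field names below are
assumptions based on how the storm-kafka spout typically stores its state
under its configured zkRoot; please verify against your own ZK tree before
writing anything back:

```python
import json

# Assumed example of the JSON state blob the KafkaSpout keeps per partition
# under something like /<zkRoot>/<spout-id>/partition_<N> -- the exact path
# and field names must be checked against your deployment.
state = {
    "topology": {"id": "example-topology-id", "name": "example-topology"},
    "offset": 123456789,   # offset the spout will resume from on restart
    "partition": 3,
    "broker": {"host": "kafka-broker-1", "port": 9092},
    "topic": "events",
}

# Move the stored offset just past the corrupted range so the spout skips
# the unfetchable messages when the topology is re-activated.
first_good_offset = 123456900  # hypothetical: first offset after the bad segment
state["offset"] = first_good_offset

payload = json.dumps(state)
print(payload)
# A real update would write `payload` back to the partition's znode with a
# ZK client (e.g. zkCli.sh's `set` command) while the topology is deactivated.
```

This only moves the consumer position; it does not repair the segment, so
the bad bytes would still be on disk afterwards.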

Thanks a lot for your help
François
