I'm using a storm-kafka-client 1.1.1-SNAPSHOT build. After the topology starts,
the Kafka spout reads all partitions from Kafka except one:

Id           Topic  Partition  Latest Offset  Spout Committed Offset  Lag
Kafka Spout  gtp    0          5726714188     5726700216              13972
Kafka Spout  gtp    1          5716936379     5716922137              14242
Kafka Spout  gtp    2          5725709217     5484094447              241614770
Kafka Spout  gtp    3          5713385013     5713370624              14389
Kafka Spout  gtp    4          5721077118     5721062942              14176
Kafka Spout  gtp    5          5717492246     5717478013              14233
Kafka Spout  gtp    6          5716438459     5716424263              14196
Kafka Spout  gtp    7          5719165064     5719150543              14521

Reading partition #2 fails with:

2017-03-02 23:06:32.013 o.a.k.c.c.i.Fetcher [INFO] [51] [Thread-8-Kafka Spout-executor[9 9]] Fetch offset 5484094448 is out of range for partition gtp-2, resetting offset
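
For what it's worth, the committed offset for gtp-2 looks older than what the
broker still retains, which is usually what this message means. The check below
is only a sketch I would run outside of Storm, not topology code; the broker
address is a placeholder and it assumes a kafka-clients version that has
beginningOffsets/endOffsets:

import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class Gtp2OffsetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                ByteArrayDeserializer.class.getName());

        TopicPartition gtp2 = new TopicPartition("gtp", 2);
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(gtp2));
            // Earliest and latest offsets the broker still has for gtp-2.
            Map<TopicPartition, Long> earliest =
                    consumer.beginningOffsets(Collections.singletonList(gtp2));
            Map<TopicPartition, Long> latest =
                    consumer.endOffsets(Collections.singletonList(gtp2));
            System.out.println("earliest retained: " + earliest.get(gtp2));
            System.out.println("latest: " + latest.get(gtp2));
            // If the spout's committed offset (5484094447) is below the earliest
            // retained offset, the 'out of range' reset is expected.
        }
    }
}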

This is a common problem for Kafka consumers and can usually be solved by
setting auto.offset.reset to 'earliest' or 'latest'. So I did:

props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest"); // or "latest"
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
        getValueDeserializer().getCanonicalName());


After redeploying the topology nothing has changed: I still get the
'offset is out of range' info message and the same committed offset for partition 2.
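
In case it helps, this is the kind of standalone check I plan to run to see
whether auto.offset.reset kicks in for gtp-2 at all when seeking to the stuck
offset; again just a sketch with a placeholder broker address and a throwaway
consumer group, not the topology code:

import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.ByteArrayDeserializer;

public class Gtp2ResetCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "gtp-reset-check");       // throwaway group
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");     // the setting in question
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                ByteArrayDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                ByteArrayDeserializer.class.getName());

        TopicPartition gtp2 = new TopicPartition("gtp", 2);
        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.assign(Collections.singletonList(gtp2));
            // Seek to the offset the spout is stuck at; it is out of range,
            // so the next poll should trigger the auto.offset.reset policy.
            consumer.seek(gtp2, 5484094448L);
            ConsumerRecords<byte[], byte[]> records = consumer.poll(10000);
            System.out.println("records fetched: " + records.count());
            System.out.println("position after reset: " + consumer.position(gtp2));
        }
    }
}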
