Hi - I'm running some load tests on Kafka: two brokers, one producer, one consumer, all running locally (I just wanted to test the partitioning). I'm using the default configuration, except that the brokers have been changed so that there are 8 partitions globally.
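The change itself is just the partition count in server.properties on each broker, something like this (assuming "8 globally" works out to 4 per broker, since num.partitions is a per-broker setting):

    # server.properties on each of the two brokers
    # assuming 4 partitions per broker x 2 brokers = 8 partitions overall
    num.partitions=4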
After some time I start seeing more and more of these errors:

[2011-12-09 10:39:11,059] ERROR error when processing request topic:d3, part:3 offset:9223372036854775807 maxSize:307200 (kafka.server.KafkaRequestHandlers)
kafka.common.OffsetOutOfRangeException: offset 9223372036854775807 is out of range
        at kafka.log.Log$.findRange(Log.scala:47)
        at kafka.log.Log.read(Log.scala:223)
        at kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:124)
        at kafka.server.KafkaRequestHandlers$$anonfun$3.apply(KafkaRequestHandlers.scala:115)
        at kafka.server.KafkaRequestHandlers$$anonfun$3.apply(KafkaRequestHandlers.scala:114)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
        at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
        at scala.collection.mutable.ArrayOps.map(ArrayOps.scala:34)
        at kafka.server.KafkaRequestHandlers.handleMultiFetchRequest(KafkaRequestHandlers.scala:114)
        at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:43)
        at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:43)
        at kafka.network.Processor.handle(SocketServer.scala:268)
        at kafka.network.Processor.read(SocketServer.scala:291)
        at kafka.network.Processor.run(SocketServer.scala:202)
        at java.lang.Thread.run(Thread.java:680)

Any idea why the offset would fall out of range? As far as I can tell, 9223372036854775807 is Long.MAX_VALUE. I'm using the 0.6 release.

Thanks,
Florian