Hi Jay,

Actually, I had another question on this subject. Say we just created a new topic on a broker by writing a message to it. The appropriate consumer then creates the offsets entry and initializes it to the 2^63-1 placeholder. Two questions:
1. Why does it bother with the out-of-range fetch at all, since it's an established impossible value? It's always going to fail.
2. When it does fail and then resets to the latest offset -- didn't it miss the first message that was sent that created the topic?

Thank you.

Dave

On Fri, Dec 9, 2011 at 2:48 PM, Jay Kreps <jay.kr...@gmail.com> wrote:
> Yeah, this is really just bad logging. Our way of initializing a client that
> has no position in the log (no existing offset) was to try an impossible
> offset and reset based on the client settings (e.g. reset to the latest
> offset). The problem is that, the way it is logged, it looks like an error.
>
> Here is the JIRA:
> https://issues.apache.org/jira/browse/KAFKA-89
>
> It is fixed on trunk.
>
> -Jay
>
> On Fri, Dec 9, 2011 at 10:52 AM, Florian Leibert <f...@leibert.de> wrote:
>
> > Hi -
> > I'm running some load tests on Kafka -- two brokers, one producer, one
> > consumer (locally; I just wanted to test the partitioning).
> > I'm using the default configuration, but each broker has been changed to
> > have globally 8 partitions.
> >
> > After some time I start seeing more and more of these errors:
> >
> > [2011-12-09 10:39:11,059] ERROR error when processing request topic:d3,
> > part:3 offset:9223372036854775807 maxSize:307200
> > (kafka.server.KafkaRequestHandlers)
> > kafka.common.OffsetOutOfRangeException: offset 9223372036854775807 is out
> > of range
> >   at kafka.log.Log$.findRange(Log.scala:47)
> >   at kafka.log.Log.read(Log.scala:223)
> >   at kafka.server.KafkaRequestHandlers.kafka$server$KafkaRequestHandlers$$readMessageSet(KafkaRequestHandlers.scala:124)
> >   at kafka.server.KafkaRequestHandlers$$anonfun$3.apply(KafkaRequestHandlers.scala:115)
> >   at kafka.server.KafkaRequestHandlers$$anonfun$3.apply(KafkaRequestHandlers.scala:114)
> >   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> >   at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:206)
> >   at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
> >   at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
> >   at scala.collection.TraversableLike$class.map(TraversableLike.scala:206)
> >   at scala.collection.mutable.ArrayOps.map(ArrayOps.scala:34)
> >   at kafka.server.KafkaRequestHandlers.handleMultiFetchRequest(KafkaRequestHandlers.scala:114)
> >   at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:43)
> >   at kafka.server.KafkaRequestHandlers$$anonfun$handlerFor$3.apply(KafkaRequestHandlers.scala:43)
> >   at kafka.network.Processor.handle(SocketServer.scala:268)
> >   at kafka.network.Processor.read(SocketServer.scala:291)
> >   at kafka.network.Processor.run(SocketServer.scala:202)
> >   at java.lang.Thread.run(Thread.java:680)
> >
> > Any idea on why the offset falls out of range? I'm using the 0.6 release
> > version.
> >
> > Thanks,
> > Florian
> >
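For what it's worth, the handshake Jay describes can be sketched in toy form like this. This is not Kafka's actual code; `BrokerLog`, `resolveStartOffset`, and the string settings `"largest"`/`"smallest"` are invented names standing in for the real autooffset.reset machinery. It just shows the two-step dance (impossible fetch, then reset) and why, with a latest-offset reset, the message that created the topic is indeed skipped:

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of the 0.6-era initialization: a consumer with no stored
// position fetches at an impossible sentinel offset (Long.MAX_VALUE),
// the broker rejects it as out of range, and the consumer falls back
// to the smallest or largest real offset per its reset setting.
public class OffsetResetSketch {

    static final long SENTINEL = Long.MAX_VALUE; // the 2^63 - 1 placeholder

    static class OffsetOutOfRangeException extends RuntimeException {}

    // Stand-in for a broker's log: messages at offsets 0..n-1.
    static class BrokerLog {
        final List<String> messages = new ArrayList<>();

        List<String> fetch(long offset) {
            if (offset < 0 || offset > messages.size()) {
                throw new OffsetOutOfRangeException();
            }
            return messages.subList((int) offset, messages.size());
        }

        long smallestOffset() { return 0; }
        long largestOffset()  { return messages.size(); }
    }

    // The consumer's recovery step: let the sentinel fetch fail, then
    // pick a real starting offset based on the reset setting.
    static long resolveStartOffset(BrokerLog log, String resetSetting) {
        try {
            log.fetch(SENTINEL); // always out of range for a fresh consumer
            return SENTINEL;     // unreachable
        } catch (OffsetOutOfRangeException e) {
            return resetSetting.equals("smallest")
                    ? log.smallestOffset()
                    : log.largestOffset();
        }
    }

    public static void main(String[] args) {
        BrokerLog log = new BrokerLog();
        log.messages.add("first message"); // the write that created the topic

        // With a latest-offset reset the consumer starts at offset 1,
        // i.e. *after* the topic-creating message -- Dave's second question.
        System.out.println(resolveStartOffset(log, "largest"));  // prints 1
        System.out.println(resolveStartOffset(log, "smallest")); // prints 0
    }
}
```

In this toy version, resetting to the smallest offset is what would pick up the topic-creating message.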