Nicolas,

It seems that you started a consumer from the earliest offset, shut it down for a long time, and then tried restarting it. At that point you will see OffsetOutOfRange exceptions, since the offset your consumer is trying to fetch has been garbage collected on the server (it is too old). If you are using the high level consumer (ZookeeperConsumerConnector), the consumer will automatically reset the offset to the earliest or latest available offset, depending on the autooffset.reset config value.
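For example, with the high level consumer something along these lines makes it fall back to the earliest offset after a long shutdown. This is just a rough sketch assuming the 0.7-style property names; the ZooKeeper connect string and group id are placeholders:

    import java.util.Properties;
    import kafka.consumer.ConsumerConfig;
    import kafka.javaapi.consumer.ConsumerConnector;

    Properties props = new Properties();
    props.put("zk.connect", "localhost:2181");   // placeholder ZooKeeper connect string
    props.put("groupid", "my-group");            // placeholder consumer group id
    props.put("autooffset.reset", "smallest");   // or "largest"; applied when the stored offset is out of range
    ConsumerConnector connector =
        kafka.consumer.Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

Note that with 0.8-style configs the names change to zookeeper.connect, group.id and auto.offset.reset, so double-check against the version you are running.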
Which consumer are you using in this test?

Thanks,
Neha

On Mon, Mar 11, 2013 at 2:12 AM, Nicolas Berthet <nicolasbert...@maaii.com> wrote:
> Hi,
>
> I'm currently seeing a lot of OffsetOutOfRangeException in my server logs
> (it's not something that appeared recently, I simply didn't use Kafka
> before). I tried to find information on the mailing-list, but nothing
> seems to match my case.
>
> ERROR error when processing request FetchRequest(topic:test-topic, part:0
> offset:3004960 maxSize:1048576) (kafka.server.KafkaRequestHandlers)
>
> kafka.common.OffsetOutOfRangeException: offset 3004960 is out of range
>
> I understand that, at startup, consumers will ask for a MAX_VALUE offset
> to trigger this exception and detect the correct offset, right?
>
> In my case, it's just too often (much more than the number of consumer
> connections), but I also noticed it seems to happen in particular for
> topics with a "0" retention. Did anybody else suffer from the same
> symptoms?
>
> Although it seems not critical (everything seems to work), it's probably
> far from optimal, and the log is just full of those.
>
> Regards,
>
> Nicolas
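P.S. On the question in the quoted mail about fetching with a MAX_VALUE offset just to trigger the exception: a low-level (SimpleConsumer) client can instead ask the broker for its valid offset range up front, which avoids the broker-side OffsetOutOfRangeException altogether. A rough sketch, assuming the 0.7-era Java API; host, port, and topic below are placeholders:

    import kafka.javaapi.consumer.SimpleConsumer;

    // host, port, soTimeout and bufferSize are illustrative values only
    SimpleConsumer consumer = new SimpleConsumer("broker-host", 9092, 10000, 64 * 1024);
    // -2L requests the earliest available offset, -1L the latest (see kafka.api.OffsetRequest)
    long[] offsets = consumer.getOffsetsBefore("test-topic", 0, -2L, 1);
    long earliestValid = offsets[0];   // resume fetching from here instead of a stale offset
    consumer.close();

The high level consumer effectively does this for you when autooffset.reset is set.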