Thanks to Jun Rao. I used the DumpLogSegment tool to check the log file, and I found that 
2983409308 is not the start offset of any message; it points to a position inside 
a message. How did this happen? Why is the consumed offset like this? 
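For anyone following along, the dump can be produced with the stock tool, roughly like this (a sketch only: the class name `kafka.tools.DumpLogSegments` and the segment path shown are assumptions and may differ in your Kafka version):

```shell
# Dump the segment that should contain offset 2983409308 so each message's
# start offset can be inspected. The segment file path here is an example.
bin/kafka-run-class.sh kafka.tools.DumpLogSegments \
  --files /var/kafka-logs/Memcache2Sql-0/00000000002980000000.kafka
```

If 2983409308 does not appear as a message start offset in the dump, the fetch request is landing mid-message, which matches the error below.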
----- Original Message -----
From: Jun Rao <jun...@gmail.com>
To: kafka-users@incubator.apache.org, zlai_2...@sina.com
Subject: Re: There is an exception when the consumer gets data from one broker.
Date: 2012-07-10 23:02

This can be caused by either log corruption or a bug in kafka that uses an
incorrect offset. Could you use the DumpLogSegment tool to see if
offset 2983409308 for topic Memcache2Sql partition 0 at broker 1 is valid?
Thanks,
Jun
On Tue, Jul 10, 2012 at 5:24 AM, <zlai_2...@sina.com> wrote:
>  Hi, all
>         There is an exception when the consumer gets data from one broker,
> and it cannot get new data from this broker, even though there is still
> unconsumed data on it. The exception looks like:
>
> [kafka.consumer.FetcherRunnable$$anonfun$run$3.apply(FetcherRunnable.scala:91)]:
> FetchRunnable-0 kafka.consumer.FetcherRunnable - error in FetcherRunnable
> for Memcache2Sql:1-0: fetched offset = 2983409308: consumed offset = 2983409308
> kafka.common.InvalidMessageSizeException: invalid message size:
> -1592784872 only received bytes: 307196 at 2983409308 (possible causes: (1)
> a single message larger than the fetch size; (2) log corruption)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNextOuter(ByteBufferMessageSet.scala:103)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:138)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:82)
>         at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:59)
>         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:51)
> ......
>
>
> And the kafka.tools.ConsumerOffsetChecker result is
> (Group, Topic, BrokerId-PartitionId = MemcacheProducerManager, Memcache2Sql, 1-0):
>
>     Owner           = null
>     Consumer offset = 2983409308 = 2,983,409,308 (2.78G)
>     Log size        = 2984795244 = 2,984,795,244 (2.78G)
>     Consumer lag    = 1385936    = 1,385,936 (0.00G)
>
> What is the reason? How can I resolve it? Thanks.
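As a sanity check, the reported lag is just the log size minus the consumer offset, using the figures from the ConsumerOffsetChecker output above:

```shell
# Consumer lag = log size - consumer offset, per the checker output above.
echo $(( 2984795244 - 2983409308 ))   # prints 1385936
```

Note that in this era of Kafka, offsets are byte positions in the log rather than logical message numbers, which is why a bad offset can point into the middle of a message instead of at a message boundary.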
