It is reproducible.  I've created a ticket with the logs attached.  Thanks
for your help.
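
For what it's worth, Jun's fetch-size theory doesn't seem to fit our setup: our fetch.size (2072000) is already well above max.message.size (1000000) plus the per-message overhead, which looks like ~10 bytes judging by the offset deltas in the dump further down (e.g. 1243 - 1233). A quick sanity check of that arithmetic (plain Java; the 10-byte overhead is just inferred from the dump, not a documented constant):

    public class FetchSizeCheck {
        public static void main(String[] args) {
            int maxMessageSize = 1000000;   // producer max.message.size
            int fetchSize = 2072000;        // consumer fetch.size
            int perMessageOverhead = 10;    // inferred: next offset 1243 - payloadsize 1233
            if (fetchSize >= maxMessageSize + perMessageOverhead) {
                System.out.println("fetch.size can hold the largest allowed message");
            } else {
                System.out.println("fetch.size is too small for the largest allowed message");
            }
        }
    }

So an undersized fetch shouldn't be the cause here, which leaves log corruption or the MultiFetchResponse bug Neha suspects.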

https://issues.apache.org/jira/browse/KAFKA-397
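
One more note on the "null" in the log line: judging from the stack trace below, the exception comes out of kafka.common.ErrorMapping.maybeThrowException, which instantiates the exception class via Class.newInstance(), i.e. the no-arg constructor, so it never carries a detail message and the logger just prints "null". A tiny illustration of that effect (my own throwaway class, not Kafka code):

    public class NullMessageDemo {
        // Stand-in for the real exception class; only the no-arg constructor matters here.
        static class InvalidMessageSizeException extends RuntimeException {
            public InvalidMessageSizeException() { super(); }
        }

        public static void main(String[] args) throws Exception {
            // Same reflective construction path as in the trace (Class.newInstance),
            // which leaves getMessage() null.
            Throwable t = InvalidMessageSizeException.class.newInstance();
            System.out.println("message: " + t.getMessage());  // prints "message: null"
        }
    }

So the "null" itself isn't extra corruption; it looks like the broker's error code just gets mapped back to an exception type with no message attached.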

d

On Tue, Jul 10, 2012 at 12:53 PM, Neha Narkhede <neha.narkh...@gmail.com> wrote:

> David,
>
> I suspect this is a bug with the MultiFetchResponse. Is this reproducible?
> Would you mind filing a bug with the log files for the topic attached?
>
> Thanks,
> Neha
>
>
> On Tue, Jul 10, 2012 at 8:41 AM, David Siegel <dsie...@knewton.com> wrote:
> > I've copied the relevant configuration options below.  Doesn't this
> > exception normally print out a useful error instead of just "null"?
> >
> > producer config:
> >   max.message.size: 1000000
> > consumer config:
> >   fetch.size: 2072000
> >
> > On Tue, Jul 10, 2012 at 10:57 AM, Jun Rao <jun...@gmail.com> wrote:
> >
> >> David,
> >>
> >> The fetch request gets an InvalidMessageSizeException. This means either
> >> log corruption or the fetch size is smaller than the largest message.
> >> Could you check your fetch size?
> >>
> >> Thanks,
> >>
> >> Jun
> >>
> >> On Mon, Jul 9, 2012 at 9:02 PM, David Siegel <dsie...@knewton.com> wrote:
> >>
> >> > I've just gotten the following error while running the zookeeper
> >> > consumer.
> >> >
> >> > I made a backup of the kafka log directory and wiped the logs.  I
> >> > restarted kafka and the consumer.  After processing a few hundred
> >> > messages successfully, I got the same error again.  I restarted the
> >> > consumer again and got the same error immediately.
> >> >
> >> > I'm running Kafka 0.7.1.
> >> >
> >> > I've included a sample of the DumpLogSegments output.  The rest of the
> >> > dumps looked the same.
> >> >
> >> > Thanks for your help.
> >> >
> >> > -David Siegel
> >> >
> >> > 2012-07-10 02:31:21,998 ERROR [Consumer1] c.k.h.c.k.KafkaConsumerServiceWorker: Failed to get next student event
> >> > kafka.common.InvalidMessageSizeException: null
> >> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.6.0_30]
> >> >         at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39) ~[na:1.6.0_30]
> >> >         at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27) ~[na:1.6.0_30]
> >> >         at java.lang.reflect.Constructor.newInstance(Constructor.java:513) ~[na:1.6.0_30]
> >> >         at java.lang.Class.newInstance0(Class.java:355) ~[na:1.6.0_30]
> >> >         at java.lang.Class.newInstance(Class.java:308) ~[na:1.6.0_30]
> >> >         at kafka.common.ErrorMapping$.maybeThrowException(ErrorMapping.scala:53) ~[KPIP-0.4.birdy.jar:na]
> >> >         at kafka.message.ByteBufferMessageSet.kafka$message$ByteBufferMessageSet$$internalIterator(ByteBufferMessageSet.scala:99) ~[KPIP-0.4.birdy.jar:na]
> >> >         at kafka.message.ByteBufferMessageSet.iterator(ByteBufferMessageSet.scala:82) ~[KPIP-0.4.birdy.jar:na]
> >> >         at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:81) ~[KPIP-0.4.birdy.jar:na]
> >> >         at kafka.consumer.ConsumerIterator.makeNext(ConsumerIterator.scala:32) ~[KPIP-0.4.birdy.jar:na]
> >> >         at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:59) ~[KPIP-0.4.birdy.jar:na]
> >> >         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:51) ~[KPIP-0.4.birdy.jar:na]
> >> >         at kafka.utils.IteratorTemplate.next(IteratorTemplate.scala:36) ~[KPIP-0.4.birdy.jar:na]
> >> >         at kafka.consumer.ConsumerIterator.next(ConsumerIterator.scala:43) ~[KPIP-0.4.birdy.jar:na]
> >> >         at java.lang.Thread.run(Thread.java:662) [na:1.6.0_30]
> >> > 2012-07-10 02:31:21,998 ERROR [Consumer1] c.k.h.c.k.KafkaConsumerServiceWorker: Iterator got into bad state.  Thread exiting
> >> > java.lang.IllegalStateException: Iterator is in failed state
> >> >         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:47) ~[KPIP-0.4.birdy.jar:na]
> >> >         at kafka.utils.IteratorTemplate.next(IteratorTemplate.scala:36) ~[KPIP-0.4.birdy.jar:na]
> >> >         at kafka.consumer.ConsumerIterator.next(ConsumerIterator.scala:43) ~[KPIP-0.4.birdy.jar:na]
> >> >         at java.lang.Thread.run(Thread.java:662) [na:1.6.0_30]
> >> >
> >> > Dumping /mnt/spool/kafka/tmp-0/00000000000000000000.kafka
> >> > Starting offset: 0
> >> > offset: 0 isvalid: true payloadsize: 1233 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 1243 isvalid: true payloadsize: 1232 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 2485 isvalid: true payloadsize: 1713 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 4208 isvalid: true payloadsize: 1181 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 5399 isvalid: true payloadsize: 1601 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 7010 isvalid: true payloadsize: 125 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 7145 isvalid: true payloadsize: 244 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 7399 isvalid: true payloadsize: 125 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 7534 isvalid: true payloadsize: 244 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 7788 isvalid: true payloadsize: 125 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 7923 isvalid: true payloadsize: 244 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 8177 isvalid: true payloadsize: 125 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 8312 isvalid: true payloadsize: 244 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 8566 isvalid: true payloadsize: 125 magic: 1 compresscodec: NoCompressionCodec
> >> > offset: 8701 isvalid: true payloadsize: 244 magic: 1 compresscodec: NoCompressionCodec
> >> > tail of the log is at offset: 8955
> >> >
> >>
>
