[ https://issues.apache.org/jira/browse/KAFKA-3764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15307885#comment-15307885 ]

Martin Nowak commented on KAFKA-3764:
-------------------------------------

Well, this bug report is mostly about the fact that a producer that worked with 
0.9.0.1 breaks after updating to 0.10.0.0. Whether the fault lies with Kafka or 
with the client isn't that interesting; what matters is that something clearly 
changed on the server side.
This might be worth a note in http://kafka.apache.org/documentation.html#upgrade_10.

I'll try to debug this in more detail in the next few days. Any gut feeling 
about what the client might be doing wrong?
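
In the meantime, here is a minimal JVM-side sketch of the check I have in mind: 
round-trip a payload through the same org.xerial.snappy stream classes the 
broker uses on append. The class name and sample bytes are hypothetical; the 
point is that a truncated or malformed chunked stream should reproduce the same 
"failed to read chunk" IOException as in the trace below.

{code:java}
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

import org.xerial.snappy.SnappyInputStream;
import org.xerial.snappy.SnappyOutputStream;

public class SnappyFramingCheck {
    public static void main(String[] args) throws IOException {
        // Write a payload in the xerial stream format the broker expects.
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        SnappyOutputStream out = new SnappyOutputStream(buf);
        out.write("hello kafka".getBytes("UTF-8"));
        out.close();
        // Substitute bytes captured from the ruby-kafka producer here.
        byte[] payload = buf.toByteArray();

        // The broker-side read path: SnappyInputStream throws
        // "failed to read chunk" on a malformed chunked stream.
        SnappyInputStream in = new SnappyInputStream(new ByteArrayInputStream(payload));
        byte[] chunk = new byte[4096];
        for (int n; (n = in.read(chunk)) != -1; ) {
            System.out.write(chunk, 0, n);
        }
        System.out.flush();
        in.close();
    }
}
{code}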


> Error processing append operation on partition
> ----------------------------------------------
>
>                 Key: KAFKA-3764
>                 URL: https://issues.apache.org/jira/browse/KAFKA-3764
>             Project: Kafka
>          Issue Type: Bug
>    Affects Versions: 0.10.0.0
>            Reporter: Martin Nowak
>
> After updating Kafka from 0.9.0.1 to 0.10.0.0 I'm getting plenty of `Error 
> processing append operation on partition` errors. This happens with 
> ruby-kafka as the producer and snappy compression enabled.
> {noformat}
> [2016-05-27 20:00:11,074] ERROR [Replica Manager on Broker 2]: Error processing append operation on partition m2m-0 (kafka.server.ReplicaManager)
> kafka.common.KafkaException: 
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:159)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:85)
>         at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
>         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
>         at kafka.message.ByteBufferMessageSet$$anon$2.makeNextOuter(ByteBufferMessageSet.scala:357)
>         at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:369)
>         at kafka.message.ByteBufferMessageSet$$anon$2.makeNext(ByteBufferMessageSet.scala:324)
>         at kafka.utils.IteratorTemplate.maybeComputeNext(IteratorTemplate.scala:64)
>         at kafka.utils.IteratorTemplate.hasNext(IteratorTemplate.scala:56)
>         at scala.collection.Iterator$class.foreach(Iterator.scala:893)
>         at kafka.utils.IteratorTemplate.foreach(IteratorTemplate.scala:30)
>         at kafka.message.ByteBufferMessageSet.validateMessagesAndAssignOffsets(ByteBufferMessageSet.scala:427)
>         at kafka.log.Log.liftedTree1$1(Log.scala:339)
>         at kafka.log.Log.append(Log.scala:338)
>         at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:443)
>         at kafka.cluster.Partition$$anonfun$11.apply(Partition.scala:429)
>         at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:231)
>         at kafka.utils.CoreUtils$.inReadLock(CoreUtils.scala:237)
>         at kafka.cluster.Partition.appendMessagesToLeader(Partition.scala:429)
>         at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:406)
>         at kafka.server.ReplicaManager$$anonfun$appendToLocalLog$2.apply(ReplicaManager.scala:392)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>         at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
>         at scala.collection.mutable.HashMap$$anonfun$foreach$1.apply(HashMap.scala:99)
>         at scala.collection.mutable.HashTable$class.foreachEntry(HashTable.scala:230)
>         at scala.collection.mutable.HashMap.foreachEntry(HashMap.scala:40)
>         at scala.collection.mutable.HashMap.foreach(HashMap.scala:99)
>         at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
>         at scala.collection.AbstractTraversable.map(Traversable.scala:104)
>         at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:392)
>         at kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:328)
>         at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:405)
>         at kafka.server.KafkaApis.handle(KafkaApis.scala:76)
>         at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>         at java.lang.Thread.run(Thread.java:745)
> Caused by: java.io.IOException: failed to read chunk
>         at org.xerial.snappy.SnappyInputStream.hasNextChunk(SnappyInputStream.java:433)
>         at org.xerial.snappy.SnappyInputStream.read(SnappyInputStream.java:167)
>         at java.io.DataInputStream.readFully(DataInputStream.java:195)
>         at java.io.DataInputStream.readLong(DataInputStream.java:416)
>         at kafka.message.ByteBufferMessageSet$$anon$1.readMessageFromStream(ByteBufferMessageSet.scala:118)
>         at kafka.message.ByteBufferMessageSet$$anon$1.makeNext(ByteBufferMessageSet.scala:153)
> {noformat}
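
Reading the trace: the append fails while the broker deep-iterates the 
compressed message set in ByteBufferMessageSet.validateMessagesAndAssignOffsets, 
and the root cause is org.xerial.snappy.SnappyInputStream rejecting a chunk. 
One thing worth ruling out (hypothetical helper below, not part of Kafka): 
whether the producer's payload starts with the xerial stream magic header at 
all. When the magic matches, snappy-java takes the chunked read path 
(hasNextChunk), which is exactly where "failed to read chunk" is raised if a 
chunk body turns out truncated or mis-sized.

{code:java}
import java.util.Arrays;

// Hypothetical diagnostic, not part of Kafka: check a captured payload
// for the xerial snappy stream magic header (0x82 "SNAPPY" 0x00).
public class XerialHeaderCheck {
    private static final byte[] MAGIC = {(byte) 0x82, 'S', 'N', 'A', 'P', 'P', 'Y', 0};

    public static boolean hasXerialHeader(byte[] payload) {
        return payload.length >= MAGIC.length
                && Arrays.equals(Arrays.copyOfRange(payload, 0, MAGIC.length), MAGIC);
    }
}
{code}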



