[ https://issues.apache.org/jira/browse/KAFKA-6739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16503733#comment-16503733 ]

Dhruvil Shah edited comment on KAFKA-6739 at 6/6/18 6:42 PM:
-------------------------------------------------------------

[~tgbeck] this issue reproduces when the brokers contain messages in the V2 format 
with headers and the consumers are on 0.10 or older versions. The V2 message format 
was introduced in 0.11. A possible workaround is to upgrade the consumers: once 
consumers understand the V2 message format, no down-conversion is required on the 
broker.
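
For illustration, a minimal sketch of the workaround, assuming the consumer is rebuilt against kafka-clients 0.11.0 or newer (the bootstrap server, group id, and header handling below are placeholders; the topic name is taken from the report). Such a consumer fetches the V2 batches as-is, headers included, so the broker performs no down-conversion:

{code:java}
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.serialization.StringDeserializer;

public class HeaderAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "header-aware-group");      // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test")); // topic from the report
            ConsumerRecords<String, String> records = consumer.poll(1000L);
            for (ConsumerRecord<String, String> record : records) {
                // Headers arrive intact; the broker serves the V2 batch without down-converting.
                for (Header header : record.headers()) {
                    System.out.printf("offset=%d header %s=%s%n",
                            record.offset(), header.key(), new String(header.value()));
                }
            }
        }
    }
}
{code}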


was (Author: dhruvilshah):
[~tgbeck] this issue would reproduce when brokers contain V2 message format 
with headers, and consumers are on 0.10 or older versions. V2 message format 
was introduced in 0.11. A possible workaround could be to upgrade the consumers 
to 0.11 or beyond. When consumers understand V2 message format, we do not 
require any down-conversion on the broker.

> Down-conversion fails for records with headers
> ----------------------------------------------
>
>                 Key: KAFKA-6739
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6739
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 1.0.0
>            Reporter: Koelli Mungee
>            Assignee: Dhruvil Shah
>            Priority: Critical
>             Fix For: 2.0.0, 1.0.2, 1.1.1
>
>
> A broker running 1.0.0 with the following properties
>  
> {code:java}
> log.message.format.version=1.0
> inter.broker.protocol.version=1.0
> {code}
> receives this ERROR while handling a fetch request for a message with a header:
> {code:java}
> [2018-03-23 01:48:03,093] ERROR [KafkaApi-1] Error when handling request {replica_id=-1,max_wait_time=100,min_bytes=1,topics=[{topic=test=[{partition=11,fetch_offset=20645,max_bytes=1048576}]}]} (kafka.server.KafkaApis)
> java.lang.IllegalArgumentException: Magic v0 does not support record headers
>     at org.apache.kafka.common.record.MemoryRecordsBuilder.appendWithOffset(MemoryRecordsBuilder.java:403)
>     at org.apache.kafka.common.record.MemoryRecordsBuilder.append(MemoryRecordsBuilder.java:586)
>     at org.apache.kafka.common.record.AbstractRecords.convertRecordBatch(AbstractRecords.java:134)
>     at org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:109)
>     at org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
>     at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:520)
>     at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:518)
>     at scala.Option.map(Option.scala:146)
>     at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:518)
>     at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:508)
>     at scala.Option.flatMap(Option.scala:171)
>     at kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:508)
>     at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:556)
>     at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:555)
>     at scala.collection.Iterator$class.foreach(Iterator.scala:891)
>     at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
>     at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>     at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>     at kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:555)
>     at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:569)
>     at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:569)
>     at kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2034)
>     at kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:52)
>     at kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2033)
>     at kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:569)
>     at kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:588)
>     at kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:175)
>     at kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:587)
>     at kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:604)
>     at kafka.server.KafkaApis$$anonfun$handleFetchRequest$3.apply(KafkaApis.scala:604)
>     at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
>     at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:596)
>     at kafka.server.KafkaApis.handle(KafkaApis.scala:100)
>     at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:65)
>     at java.lang.Thread.run(Thread.java:745)
> {code}
>  
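
For context, records with headers can only be written in the V2 (magic 2) message format, which is what forces the broker to down-convert for pre-0.11 fetchers and hit the failure above. A minimal sketch of a producer that puts such a record on the topic from the report, assuming kafka-clients 0.11.0 or newer (the bootstrap server and header key/value are placeholders):

{code:java}
import java.nio.charset.StandardCharsets;
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class HeaderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record = new ProducerRecord<>("test", "key", "value");
            // The header forces the V2 format on the wire and on disk; a consumer on 0.10 or
            // older fetching this record triggers broker-side down-conversion, which cannot
            // represent headers in magic v0/v1 and fails as in the stack trace above.
            record.headers().add("trace-id", "abc123".getBytes(StandardCharsets.UTF_8)); // placeholder header
            producer.send(record);
            producer.flush();
        }
    }
}
{code}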



