[ https://issues.apache.org/jira/browse/KAFKA-7384?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16610326#comment-16610326 ]
Vasilis Tsamis edited comment on KAFKA-7384 at 9/11/18 9:09 AM:
----------------------------------------------------------------
Hey Ismael

No, we haven't found this kind of error when the consumer is 1.1.x.
With the 0.10.2.1 and 0.10.0.1 consumers, many messages cause this kind of error, and eventually the consumer stops polling.
The exception that occurs last (and is also the most common one) is the "Unknown compression type id" one.
We are trying to reproduce this error in our dev env, but it's difficult because it seems to be an internal one.
We are wondering whether any recently introduced headers are causing it.


was (Author: vtsamis):
Hey Ismael

No, we haven't found this kind of error when the consumer is 1.1.x.
With the 0.10.2.1 and 0.10.0.1 consumers, many messages cause this kind of error, and eventually the consumer stops polling.
The exception that occurs last (and is also the most common one) is the "Unknown compression type id" one.
We are trying to reproduce this error, but it is difficult because it seems to be an internal one.
We are wondering whether any recently introduced headers are causing it.


> Compatibility issues between Kafka Brokers 1.1.0 and older kafka clients
> ------------------------------------------------------------------------
>
>                 Key: KAFKA-7384
>                 URL: https://issues.apache.org/jira/browse/KAFKA-7384
>             Project: Kafka
>          Issue Type: Bug
>          Components: consumer
>    Affects Versions: 1.1.0
>            Reporter: Vasilis Tsamis
>            Priority: Blocker
>         Attachments: logs2.txt
>
>
> Hello
> After upgrading the Kafka brokers from 0.10.2.1 to 1.1.0, the 0.10.2.1 and 0.10.0.1 kafka clients throw the errors shown below. This looks like an incompatibility affecting the older clients, although that shouldn't be the case according to [doc 1|https://docs.confluent.io/current/installation/upgrade.html#preparation], [doc 2|https://cwiki-test.apache.org/confluence/display/KAFKA/KIP-35+-+Retrieving+protocol+version] and this [thread|https://lists.apache.org/thread.html/9bc87a2c683d13fda27f01a635dba822520113cfd8fb50f3a3e82fcf@%3Cusers.kafka.apache.org%3E].
> Can someone please help with this issue? Does this mean that I have to upgrade all kafka-clients to 1.1.0?
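For context, the upgrade guide linked above as doc 1 suggests (as far as I understand it) that while consumers older than the broker's message format are still in use, the brokers can keep writing the pre-upgrade on-disk message format until all clients are upgraded. A minimal sketch of the relevant broker settings, assuming an upgrade from 0.10.2.1; the values are illustrative, not taken from this cluster, and it is not confirmed in this ticket that this avoids the failures shown below:

{noformat}
# server.properties (illustrative values only)
# Brokers speak the new inter-broker protocol after the rolling upgrade...
inter.broker.protocol.version=1.1
# ...but keep messages in the pre-upgrade on-disk format so 0.10.x consumers
# can fetch them without relying on the broker's down-conversion path.
log.message.format.version=0.10.2
{noformat}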
>
> (Please also check the attached log; other unknown compression type ids also occur there.)
>
> {noformat}
> java.lang.IllegalArgumentException: Unknown compression type id: 4
>   at org.apache.kafka.common.record.CompressionType.forId(CompressionType.java:46)
>   at org.apache.kafka.common.record.Record.compressionType(Record.java:260)
>   at org.apache.kafka.common.record.LogEntry.isCompressed(LogEntry.java:89)
>   at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:70)
>   at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:34)
>   at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
>   at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
>   at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:785)
>   at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:480)
>   at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1037)
>   at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at org.apache.camel.component.kafka.KafkaConsumer$KafkaFetchRecords.run(KafkaConsumer.java:130)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> ------------------- Another kind of exception due to the same reason
> java.lang.IndexOutOfBoundsException: null
>   at java.nio.Buffer.checkIndex(Buffer.java:546)
>   at java.nio.HeapByteBuffer.getInt(HeapByteBuffer.java:365)
>   at org.apache.kafka.common.utils.Utils.sizeDelimited(Utils.java:784)
>   at org.apache.kafka.common.record.Record.value(Record.java:268)
>   at org.apache.kafka.common.record.RecordsIterator$DeepRecordsIterator.<init>(RecordsIterator.java:149)
>   at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:79)
>   at org.apache.kafka.common.record.RecordsIterator.makeNext(RecordsIterator.java:34)
>   at org.apache.kafka.common.utils.AbstractIterator.maybeComputeNext(AbstractIterator.java:79)
>   at org.apache.kafka.common.utils.AbstractIterator.hasNext(AbstractIterator.java:45)
>   at org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:785)
>   at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:480)
>   at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1037)
>   at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:995)
>   at org.apache.camel.component.kafka.KafkaConsumer$KafkaFetchRecords.run(KafkaConsumer.java:130)
>   at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>   at java.lang.Thread.run(Thread.java:745)
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
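A side note on the first trace above: the 0.10.x clients take the compression codec from a few low bits of the legacy record's attributes byte and reject any id outside the four codecs they know, which is why bytes written in the newer (0.11+) record format can surface as "Unknown compression type id: 4" when an old parser misreads them. The sketch below is a rough paraphrase of that mapping, written from memory for illustration only; it is not the actual CompressionType/Record source, and the mask constant and sample byte are assumptions.

{noformat}
// CompressionIdSketch.java -- illustrative only, not Kafka source code.
public class CompressionIdSketch {

    // The four codecs a 0.10.x client knows about; their ids are 0-3.
    enum LegacyCompressionType { NONE, GZIP, SNAPPY, LZ4 }

    // Assumed mask for the codec bits of the legacy (magic v0/v1) attributes byte.
    static final int COMPRESSION_CODEC_MASK = 0x07;

    static LegacyCompressionType forId(int id) {
        switch (id) {
            case 0: return LegacyCompressionType.NONE;
            case 1: return LegacyCompressionType.GZIP;
            case 2: return LegacyCompressionType.SNAPPY;
            case 3: return LegacyCompressionType.LZ4;
            default:
                // The path hit at CompressionType.forId in the first stack trace.
                throw new IllegalArgumentException("Unknown compression type id: " + id);
        }
    }

    public static void main(String[] args) {
        // Hypothetical value: if the parser is actually looking at v2 batch bytes,
        // whatever it treats as "attributes" is arbitrary, so ids 4-7 can appear.
        byte misreadAttributes = 0x04;
        try {
            forId(misreadAttributes & COMPRESSION_CODEC_MASK);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage()); // prints: Unknown compression type id: 4
        }
    }
}
{noformat}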