[ https://issues.apache.org/jira/browse/KAFKA-15603?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

David Jacot updated KAFKA-15603:
--------------------------------
    Description: 
I needed to dump a `__consumer_offsets` partition, so I used `kafka-dump-log 
--offsets-decoder --files <path>`, but I got the following error:
{code:java}
Exception in thread "main" org.apache.kafka.common.protocol.types.SchemaException: Buffer underflow while parsing consumer protocol's header
    at org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeVersion(ConsumerProtocol.java:72)
    at org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeAssignment(ConsumerProtocol.java:179)
    at kafka.coordinator.group.GroupMetadataManager$.$anonfun$parseGroupMetadata$2(GroupMetadataManager.scala:1562)
    at kafka.coordinator.group.GroupMetadataManager$.parseGroupMetadata(GroupMetadataManager.scala:1560)
    at kafka.coordinator.group.GroupMetadataManager$.formatRecordKeyAndValue(GroupMetadataManager.scala:1526)
    at kafka.tools.DumpLogSegments$OffsetsMessageParser.parse(DumpLogSegments.scala:416)
    at kafka.tools.DumpLogSegments$.$anonfun$dumpLog$2(DumpLogSegments.scala:327)
    at kafka.tools.DumpLogSegments$.$anonfun$dumpLog$2$adapted(DumpLogSegments.scala:285)
    at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:575)
    at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:573)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:933)
    at kafka.tools.DumpLogSegments$.$anonfun$dumpLog$1(DumpLogSegments.scala:285)
    at kafka.tools.DumpLogSegments$.$anonfun$dumpLog$1$adapted(DumpLogSegments.scala:282)
    at scala.collection.IterableOnceOps.foreach(IterableOnce.scala:575)
    at scala.collection.IterableOnceOps.foreach$(IterableOnce.scala:573)
    at scala.collection.AbstractIterable.foreach(Iterable.scala:933)
    at kafka.tools.DumpLogSegments$.dumpLog(DumpLogSegments.scala:282)
    at kafka.tools.DumpLogSegments$.$anonfun$main$1(DumpLogSegments.scala:70)
    at kafka.tools.DumpLogSegments$.main(DumpLogSegments.scala:61)
    at kafka.tools.DumpLogSegments.main(DumpLogSegments.scala)
Caused by: java.nio.BufferUnderflowException
    at java.base/java.nio.Buffer.nextGetIndex(Buffer.java:707)
    at java.base/java.nio.HeapByteBuffer.getShort(HeapByteBuffer.java:383)
    at org.apache.kafka.clients.consumer.internals.ConsumerProtocol.deserializeVersion(ConsumerProtocol.java:70)
    ... 19 more{code}
The issue is that there is no guarantee that the metadata associated with each 
member of a consumer group actually follows the format used by the official 
Apache Kafka consumer. Members may advertise the same protocol type while 
using a different metadata format.
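
For illustration, here is a minimal reproduction of the underflow (a 
hypothetical snippet, not taken from the report): any metadata value too short 
to hold the consumer protocol's 2-byte version header fails in exactly this 
way.

{code:java}
import java.nio.ByteBuffer;

import org.apache.kafka.clients.consumer.internals.ConsumerProtocol;

public class UnderflowRepro {
    public static void main(String[] args) {
        // One byte cannot hold the 2-byte version field that the consumer
        // protocol reads first, so deserialization fails immediately with
        // "Buffer underflow while parsing consumer protocol's header".
        ByteBuffer nonConsumerMetadata = ByteBuffer.wrap(new byte[] { 0x01 });
        ConsumerProtocol.deserializeAssignment(nonConsumerMetadata);
    }
}
{code}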

I think it would be better to handle those errors gracefully. Not being able 
to dump a segment at all is really annoying.
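
A minimal sketch of the kind of handling the tool could do (hypothetical 
helper names, not the actual DumpLogSegments code): attempt the 
consumer-protocol decoding first and, if it fails, fall back to printing the 
raw bytes instead of aborting the whole dump.

{code:java}
import java.nio.BufferUnderflowException;
import java.nio.ByteBuffer;

import org.apache.kafka.clients.consumer.internals.ConsumerProtocol;
import org.apache.kafka.common.protocol.types.SchemaException;

public final class SafeAssignmentFormatter {
    // Hypothetical fallback: decode with the consumer protocol when possible,
    // otherwise render the member metadata as hex so the dump can continue.
    static String formatAssignment(ByteBuffer memberMetadata) {
        try {
            // Work on a duplicate so the caller's buffer position is untouched.
            return ConsumerProtocol.deserializeAssignment(memberMetadata.duplicate()).toString();
        } catch (SchemaException | BufferUnderflowException e) {
            return "<unparsable assignment: 0x" + toHex(memberMetadata.duplicate()) + ">";
        }
    }

    private static String toHex(ByteBuffer buffer) {
        StringBuilder sb = new StringBuilder();
        while (buffer.hasRemaining()) {
            sb.append(String.format("%02x", buffer.get()));
        }
        return sb.toString();
    }
}
{code}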



> kafka-dump-log --offsets-decoder should handle parsing errors
> -------------------------------------------------------------
>
>                 Key: KAFKA-15603
>                 URL: https://issues.apache.org/jira/browse/KAFKA-15603
>             Project: Kafka
>          Issue Type: Bug
>            Reporter: David Jacot
>            Priority: Major
>



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
