[ 
https://issues.apache.org/jira/browse/KAFKA-6952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16491899#comment-16491899
 ] 

evan einst commented on KAFKA-6952:
-----------------------------------

I ran this command on __consumer_offsets-30 (the same error occurred there later) because I 
had cleared __consumer_offsets-42 trying to avoid this problem. 

-bash-4.2$ /alidata1/admin/kafka_2.12-0.11.0.2/bin/kafka-run-class.sh \
kafka.tools.DumpLogSegments --files \
00000000000000000000.log,00000000000000459675.log,00000000000000580628.log,00000000000000740247.log,00000000000000911141.log,00000000000001027083.log,00000000000001477993.log,00000000000001963502.log,00000000000002447301.log,00000000000002931129.log,00000000000003414939.log,00000000000003898752.log,00000000000004427227.log,00000000000005057718.log,00000000000005640681.log,00000000000006112841.log,00000000000006513799.log,00000000000006876659.log,00000000000007308632.log,00000000000007623404.log \
> logcheck.log

Exception in thread "main" 
{color:#14892c}org.apache.kafka.common.errors.CorruptRecordException: Record 
size is smaller than minimum record overhead (14).{color}

{color:#333333}I attached the command output (logcheck.log) and those log 
segments; please unzip them all into the same folder. Thanks.{color}

{color:#333333}[^__consumer_offsets-30_1.zip] 
[^__consumer_offsets-30-2.zip][^__consumer_offsets-30-3.zip]{color}
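For context, the exception is thrown when a length field read from the segment is smaller than the minimum record overhead of 14 bytes. The sketch below is a hypothetical, simplified Python analogue of that sanity check (it is not Kafka's actual parser; the only format detail assumed is that a v2 record batch starts with an int64 baseOffset followed by an int32 batchLength):

```python
import struct

# Minimum bytes a single v2 record can occupy (the "14" in the exception).
MIN_RECORD_OVERHEAD = 14

def scan_segment(data: bytes):
    """Walk record batches in raw .log segment bytes and return
    (file_position, base_offset, batch_length) for the first batch whose
    length field is implausibly small, or None if none is found."""
    pos = 0
    while pos + 12 <= len(data):
        # Hypothetical header read: int64 baseOffset + int32 batchLength,
        # both big-endian, at the start of every batch.
        base_offset, batch_length = struct.unpack_from(">qi", data, pos)
        if batch_length < MIN_RECORD_OVERHEAD:
            return pos, base_offset, batch_length  # corrupt batch found
        pos += 12 + batch_length  # skip header + payload to the next batch
    return None
```

Running something like this over the attached segments could narrow down which file, and at which byte position, trips the cleaner.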

> LogCleaner stopped due to 
> org.apache.kafka.common.errors.CorruptRecordException
> -------------------------------------------------------------------------------
>
>                 Key: KAFKA-6952
>                 URL: https://issues.apache.org/jira/browse/KAFKA-6952
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 0.11.0.2
>            Reporter: evan einst
>            Priority: Major
>         Attachments: __consumer_offsets-30-2.zip, 
> __consumer_offsets-30-3.zip, __consumer_offsets-30_1.zip
>
>
> This error occurred on our prod cluster while upgrading from 0.10.2.1 to 
> 0.11.0.2. Since KAFKA-5431 (https://issues.apache.org/jira/browse/KAFKA-5431) 
> says this error was fixed in 0.11.0.0, we decided to upgrade to the 0.11.0.2 
> version, but after upgrading the code on one server the problem still 
> occurred.
> ------------------------------------------------------------------------------------
> {{[2018-05-26 13:23:58,029] INFO Cleaner 0: Beginning cleaning of log __consumer_offsets-42. (kafka.log.LogCleaner)}}
> {{[2018-05-26 13:23:58,029] INFO Cleaner 0: Building offset map for __consumer_offsets-42... (kafka.log.LogCleaner)}}
> {{[2018-05-26 13:23:58,050] INFO Cleaner 0: Building offset map for log __consumer_offsets-42 for 19 segments in offset range [0, 6919353). (kafka.log.LogCleaner)}}
> {{[2018-05-26 13:23:58,300] ERROR [kafka-log-cleaner-thread-0]: Error due to (kafka.log.LogCleaner)}}
> {{org.apache.kafka.common.errors.CorruptRecordException: Record size is less than the minimum record overhead (14)}}
> {{[2018-05-26 13:23:58,301] INFO [kafka-log-cleaner-thread-0]: Stopped (kafka.log.LogCleaner)}}
> ------------------------------------------------------------------------------------
> Please help me resolve this problem, thank you very much!



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
