[ https://issues.apache.org/jira/browse/KAFKA-9613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17899098#comment-17899098 ]

Chia-Ping Tsai commented on KAFKA-9613:
---------------------------------------

[~ted1yan] Please note that the start offset advanced by the `DeleteRecords` API 
must be "larger" than the last offset of the segment file. That means Kafka 
removes a segment file only if all of its records are smaller than the "new" 
start offset.

For example:

the corrupted record (`185972321`) is in the file `00000000000185492119.log`, 
and assume the next segment file is `00000000000186000000.log`. Thus, you can 
use `DeleteRecords` to advance the start offset to `186000000` to make sure 
`00000000000185492119.log` can be deleted.
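
A minimal sketch with the Java Admin client, assuming the topic/partition from 
the log in this issue and a local bootstrap server; the target offset is the 
hypothetical `186000000` from the example above (in practice, use the base 
offset of the segment following the corrupted one):

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.DeleteRecordsResult;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class AdvanceLogStartOffset {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // assumption: broker address; replace with your cluster
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // partition from the error log in this issue
            TopicPartition tp = new TopicPartition("SANDBOX.BROKER.NEWORDER", 0);
            // advance the log start offset to the base offset of the NEXT
            // segment, so every record in the corrupted segment falls below it
            DeleteRecordsResult result = admin.deleteRecords(
                Map.of(tp, RecordsToDelete.beforeOffset(186000000L)));
            long lowWatermark = result.lowWatermarks().get(tp).get().lowWatermark();
            System.out.println("log start offset is now " + lowWatermark);
        }
    }
}
```

Once the start offset passes the corrupted segment's last record, the broker's 
log cleanup will delete the whole segment file; records deleted this way are 
gone for consumers, so only do this when losing that range is acceptable.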

> CorruptRecordException: Found record size 0 smaller than minimum record 
> overhead
> --------------------------------------------------------------------------------
>
>                 Key: KAFKA-9613
>                 URL: https://issues.apache.org/jira/browse/KAFKA-9613
>             Project: Kafka
>          Issue Type: Bug
>          Components: core
>    Affects Versions: 2.6.2
>            Reporter: Amit Khandelwal
>            Assignee: hudeqi
>            Priority: Major
>         Attachments: image-2024-11-13-14-02-45-768.png
>
>
> 20200224;21:01:38: [2020-02-24 21:01:38,615] ERROR [ReplicaManager broker=0] 
> Error processing fetch with max size 1048576 from consumer on partition 
> SANDBOX.BROKER.NEWORDER-0: (fetchOffset=211886, logStartOffset=-1, 
> maxBytes=1048576, currentLeaderEpoch=Optional.empty) 
> (kafka.server.ReplicaManager)
> 20200224;21:01:38: org.apache.kafka.common.errors.CorruptRecordException: 
> Found record size 0 smaller than minimum record overhead (14) in file 
> /data/tmp/kafka-topic-logs/SANDBOX.BROKER.NEWORDER-0/00000000000000000000.log.
> 20200224;21:05:48: [2020-02-24 21:05:48,711] INFO [GroupMetadataManager 
> brokerId=0] Removed 0 expired offsets in 1 milliseconds. 
> (kafka.coordinator.group.GroupMetadataManager)
> 20200224;21:10:22: [2020-02-24 21:10:22,204] INFO [GroupCoordinator 0]: 
> Member 
> xxxxxxxx_011-9e61d2c9-ce5a-4231-bda1-f04e6c260dc0-StreamThread-1-consumer-27768816-ee87-498f-8896-191912282d4f
>  in group yyyyyyyyy_011 has failed, removing it from the group 
> (kafka.coordinator.group.GroupCoordinator)
>  
> [https://stackoverflow.com/questions/60404510/kafka-broker-issue-replica-manager-with-max-size#]
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)