[ https://issues.apache.org/jira/browse/KAFKA-19613?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18017536#comment-18017536 ]
FliegenKLATSCH commented on KAFKA-19613:
----------------------------------------

Indeed, the error would probably not be set in the FetchCollector#initialize method; I'm not sure that case can even happen. If the broker cannot persist a message, I'd expect the producer not to get an ACK, and on the read path the client validates the CRC checksum itself, so the broker probably doesn't need to care. Still, it wouldn't hurt to handle it the same way.

The CorruptRecordException would be thrown in FetchCollector#fetchRecords, so CompletedFetch#maybeEnsureValid has to be modified to add the information there (in two places, one for the record and one for the batch, plus the corresponding two places in ShareCompletedFetch). In CompletedFetch#fetchRecords it is still wrapped in a KafkaException. I'm not sure that wrapping is good or needed, but it also doesn't hurt; we just need to unwrap it on the client side (see the sketch after the quoted issue below). And the note about seeking makes sense.

> Expose consumer CorruptRecordException as cause of KafkaException
> ------------------------------------------------------------------
>
>                 Key: KAFKA-19613
>                 URL: https://issues.apache.org/jira/browse/KAFKA-19613
>             Project: Kafka
>          Issue Type: Improvement
>          Components: clients
>            Reporter: Uladzislau Blok
>            Assignee: Uladzislau Blok
>            Priority: Minor
>              Labels: need-kip
>         Attachments: corrupted_records.excalidraw.png
>
>
> As part of the analysis of KAFKA-19430, we decided it would be useful to expose the
> root cause of a consumer request failure (e.g. currently we see just a
> KafkaException instead of the CorruptRecordException).
> The idea is not to change the public API, but to expose the root cause as a field of
> the KafkaException.
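
For illustration, a minimal sketch of the client-side unwrapping mentioned above, assuming the fix makes the CorruptRecordException reachable via KafkaException#getCause() from poll(). The corruptedPartition/corruptedOffset helpers are purely hypothetical placeholders for whatever detail the improved exception ends up carrying; they are not an existing API.

import java.time.Duration;
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.errors.CorruptRecordException;

public class CorruptRecordSkippingPoller {

    private final Consumer<String, String> consumer;

    public CorruptRecordSkippingPoller(Consumer<String, String> consumer) {
        this.consumer = consumer;
    }

    public void pollOnce() {
        try {
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
            for (ConsumerRecord<String, String> record : records) {
                // normal processing goes here
            }
        } catch (KafkaException e) {
            // With the proposed change the CorruptRecordException is no longer hidden
            // behind the generic KafkaException but reachable as its cause.
            if (e.getCause() instanceof CorruptRecordException) {
                // Hypothetical accessors: stand-ins for however the improved exception
                // ends up exposing the corrupted partition and offset.
                TopicPartition tp = corruptedPartition(e);
                long corruptedOffset = corruptedOffset(e);
                // Seek past the corrupted record, otherwise the next poll() fails
                // on the same batch again (the "note about seeking" above).
                consumer.seek(tp, corruptedOffset + 1);
            } else {
                throw e;
            }
        }
    }

    // Placeholders only, so the sketch compiles; not a real Kafka API.
    private TopicPartition corruptedPartition(KafkaException e) {
        throw new UnsupportedOperationException("depends on the final shape of the fix");
    }

    private long corruptedOffset(KafkaException e) {
        throw new UnsupportedOperationException("depends on the final shape of the fix");
    }
}

Whether the partition/offset end up as structured fields on the exception or have to be parsed out of the message is exactly the part the KIP would need to settle.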