[ https://issues.apache.org/jira/browse/KAFKA-19430?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18013769#comment-18013769 ]
Uladzislau Blok commented on KAFKA-19430:
-----------------------------------------

I'm not yet sure what the best way to handle this error would be. Regarding the CONTINUE case: the exception is thrown by the Kafka consumer while it reads a batch of messages, so I don't see how we could skip a single message when we don't know which record in the batch is corrupted. What does seem clear is that we can't tell whether the exception is retriable, because all we see is a KafkaException. Should we perhaps expose the root cause to the client (and use that in Streams for this case)?

> Don't fail on RecordCorruptedException
> --------------------------------------
>
>                 Key: KAFKA-19430
>                 URL: https://issues.apache.org/jira/browse/KAFKA-19430
>             Project: Kafka
>          Issue Type: Improvement
>          Components: streams
>            Reporter: Matthias J. Sax
>            Assignee: Uladzislau Blok
>            Priority: Major
>
> From [https://github.com/confluentinc/kafka-streams-examples/issues/524]
> Currently, the existing `DeserializationExceptionHandler` is applied when
> deserializing the record key/value byte[] inside Kafka Streams. This implies
> that a `RecordCorruptedException` is not handled.
> We should explore not letting Kafka Streams crash, but maybe retrying this
> error automatically (as `RecordCorruptedException extends RetriableException`),
> and finding a way to pump the error into the existing exception handler.
> If the error is transient, the user can still use `REPLACE_THREAD` in the
> uncaught exception handler, but this is a rather heavyweight approach.
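For reference, the `REPLACE_THREAD` workaround mentioned above looks roughly like this from the application side (a minimal sketch; the application id, bootstrap address, and topic names are placeholders):

{code:java}
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.errors.StreamsUncaughtExceptionHandler.StreamThreadExceptionResponse;

public class ReplaceThreadExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "replace-thread-example");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.ByteArray().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArray().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);

        // Heavyweight recovery: on any uncaught exception (including one
        // caused by a transient corrupt-record error) the failing stream
        // thread is shut down and a replacement thread is started.
        streams.setUncaughtExceptionHandler(exception ->
                StreamThreadExceptionResponse.REPLACE_THREAD);

        streams.start();
    }
}
{code}

The drawback the ticket points out: thread replacement tears down and rebuilds the whole stream thread, including its tasks, just to get past a single bad record.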
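Returning to the root-cause question in the comment: if the consumer exposed the underlying cause of the wrapped `KafkaException`, Streams (or application code) could classify the error by walking the cause chain. This is only a sketch of that idea; `ErrorClassifier` and `isRetriable` are hypothetical names, not an existing API:

{code:java}
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.errors.RetriableException;

public final class ErrorClassifier {

    private ErrorClassifier() {}

    // Walk the cause chain of the exception surfaced by the consumer and
    // report whether any link in the chain is a RetriableException
    // (per the ticket, RecordCorruptedException would be one).
    public static boolean isRetriable(KafkaException e) {
        Throwable current = e;
        while (current != null) {
            if (current instanceof RetriableException) {
                return true;
            }
            current = current.getCause();
        }
        return false;
    }
}
{code}

A check along these lines would let a CONTINUE/retry/FAIL decision depend on the actual root cause rather than on the opaque wrapper type.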