[jira] [Commented] (KAFKA-7525) Handling corrupt records

2021-03-12 Thread Alexandre Dupriez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17300379#comment-17300379
 ] 

Alexandre Dupriez commented on KAFKA-7525:
--

This was fixed in 2.6.0 by KAFKA-9206.

> Handling corrupt records
> 
>
> Key: KAFKA-7525
> URL: https://issues.apache.org/jira/browse/KAFKA-7525
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer, core
>Affects Versions: 1.1.0
>Reporter: Katarzyna Solnica
>Priority: Major
>
> When Java consumer encounters a corrupt record on a partition it reads from, 
> it throws:
> {code:java}
> org.apache.kafka.common.KafkaException: Received exception when fetching the 
> next record from XYZ. If needed, please seek past the record to continue 
> consumption.
>     at 
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1125)
>     at 
> org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1400(Fetcher.java:993)
>     at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:527)
>     at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:488)
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1155)
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
>     (...)
> Caused by: org.apache.kafka.common.errors.CorruptRecordException: Record size 
> is less than the minimum record overhead (14){code}
> or:
> {code:java}
> java.lang.IllegalStateException: Unexpected error code 2 while fetching data
>     at 
> org.apache.kafka.clients.consumer.internals.Fetcher.parseCompletedFetch(Fetcher.java:936)
>     at 
> org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:485)
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1155)
>     at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115)
>     (...){code}
> 1. Could you consider throwing CorruptRecordException from 
> parseCompletedFetch() when error == Errors.CORRUPT_MESSAGE?
> 2. Seeking past the corrupt record means losing data. I've noticed that the 
> record is often intact on a follower in the ISR, and manually changing the 
> partition leader to that follower resolves the issue when the partition is 
> consumed by a single consumer group. Couldn't the Kafka server detect such 
> situations and recover corrupt records from the logs of other ISRs?
>  
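The "seek past the record" recovery that the exception message suggests can be sketched with a stdlib-only simulation. Note this is an illustration, not Kafka client code: `Record`, `FakePartition`, and the nested `CorruptRecordException` below are hypothetical stand-ins for the real API (the real consumer would use `KafkaConsumer.seek()` on the affected `TopicPartition`, and identifying the stuck offset is the hard part in practice). The point is the control flow: the fetch position does not advance past a corrupt record, so the only way to resume is to seek one past it, accepting the data loss the reporter describes.

```java
import java.util.ArrayList;
import java.util.List;

class SeekPastCorrupt {
    // Hypothetical stand-in for a fetched record; not a Kafka client class.
    record Record(long offset, String value, boolean corrupt) {}

    // Hypothetical stand-in for the real CorruptRecordException, carrying
    // the offending offset (the real one does not expose it this directly).
    static class CorruptRecordException extends RuntimeException {
        final long offset;
        CorruptRecordException(long offset) { this.offset = offset; }
    }

    // Simulates a single partition: the fetch position only advances on a
    // successful poll, so a corrupt record blocks consumption until a seek.
    static class FakePartition {
        private final List<Record> log;
        private long position = 0;  // next offset to fetch

        FakePartition(List<Record> log) { this.log = log; }

        Record poll() {
            Record r = log.get((int) position);
            if (r.corrupt()) throw new CorruptRecordException(r.offset());
            position++;
            return r;
        }

        void seek(long offset) { position = offset; }

        boolean hasNext() { return position < log.size(); }
    }

    // The recovery loop: on a corrupt record, seek one past it and continue.
    // Losing that record is the price of resuming consumption.
    static List<String> consumeSkippingCorrupt(FakePartition p) {
        List<String> out = new ArrayList<>();
        while (p.hasNext()) {
            try {
                out.add(p.poll().value());
            } catch (CorruptRecordException e) {
                p.seek(e.offset + 1);
            }
        }
        return out;
    }
}
```

With a log of offsets 0..2 where offset 1 is corrupt, the loop yields the values at offsets 0 and 2 and silently drops offset 1, which is exactly the data-loss concern raised in point 2 above.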



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (KAFKA-7525) Handling corrupt records

2019-10-18 Thread Tobias Neubert (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16954472#comment-16954472
 ] 

Tobias Neubert commented on KAFKA-7525:
---

Encountered the same behaviour with the IllegalStateException in version 2.2.0. 
Is this "clear bug" fixed in 2.3.0?



[jira] [Commented] (KAFKA-7525) Handling corrupt records

2018-10-22 Thread Ismael Juma (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16658756#comment-16658756
 ] 

Ismael Juma commented on KAFKA-7525:


That IllegalStateException is a clear bug. Please test with a more recent 
version as it may have been fixed already.



[jira] [Commented] (KAFKA-7525) Handling corrupt records

2018-10-22 Thread Stanislav Kozlovski (JIRA)


[ 
https://issues.apache.org/jira/browse/KAFKA-7525?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16658747#comment-16658747
 ] 

Stanislav Kozlovski commented on KAFKA-7525:


Hi [~Solnica], thanks for the report!

Regarding 1: there is ongoing work that changes which errors are thrown when 
message corruption is detected. The issue we currently have is that we don't 
provide an easy way to seek past the corrupt record itself. See 
[KIP-344|https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=87297793].




--
This message was sent by Atlassian JIRA
(v7.6.3#76005)