[
https://issues.apache.org/jira/browse/FLINK-18017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17327806#comment-17327806
]
Flink Jira Bot commented on FLINK-18017:
----------------------------------------
This major issue is unassigned, and neither it nor any of its Sub-Tasks has been
updated for 30 days, so it has been labeled "stale-major". If this ticket is
indeed "major", please either assign yourself or give an update, then remove the
label. In 7 days the issue will be deprioritized.
> have Kafka connector report metrics on null records
> ----------------------------------------------------
>
> Key: FLINK-18017
> URL: https://issues.apache.org/jira/browse/FLINK-18017
> Project: Flink
> Issue Type: Improvement
> Components: Connectors / Kafka
> Affects Versions: 1.9.1
> Reporter: Yu Yang
> Priority: Major
> Labels: stale-major
>
> Corrupted messages can get into the message pipeline for various reasons.
> When a Flink deserializer fails to deserialize a message and throws an
> exception due to the corrupted message, the Flink application will be blocked
> until we update the deserializer to handle the exception.
> AbstractFetcher.emitRecordsWithTimestamps skips null records. We need to add
> a metric on the number of null records so that users can measure how many
> null records the Kafka connector encounters, and set up monitoring & alerting
> based on that.
> [https://github.com/apache/flink/blob/1cd696d92c3e088a5bd8e5e11b54aacf46e92ae8/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/AbstractFetcher.java#L350]
>
>
>
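A minimal sketch of the idea in the ticket: a deserializer that returns null for corrupted payloads instead of failing the job, and a fetch loop that skips nulls while counting them. The class, method names, and the `numNullRecords` counter below are hypothetical stand-ins (a plain `LongAdder` rather than Flink's `Counter` metric), not the actual AbstractFetcher code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.LongAdder;

// Hypothetical sketch of counting null (corrupted) records while skipping
// them, as the ticket proposes for AbstractFetcher.emitRecordsWithTimestamps.
public class NullRecordCounterSketch {
    // Stand-in for a Flink Counter metric named e.g. "numNullRecords"
    // (assumption; not the real Flink metrics API).
    static final LongAdder numNullRecords = new LongAdder();

    // Deserializer that maps corrupted payloads to null instead of throwing,
    // so one bad message cannot block the whole pipeline.
    static String deserialize(byte[] payload) {
        try {
            if (payload == null) {
                throw new IllegalArgumentException("empty payload");
            }
            return new String(payload, java.nio.charset.StandardCharsets.UTF_8);
        } catch (RuntimeException e) {
            return null; // treat as corrupted record
        }
    }

    // Emits deserialized records, skipping nulls and counting each skip so
    // users can monitor and alert on the counter.
    static List<String> emitRecords(List<byte[]> payloads) {
        List<String> out = new ArrayList<>();
        for (byte[] p : payloads) {
            String record = deserialize(p);
            if (record == null) {
                numNullRecords.increment();
                continue;
            }
            out.add(record);
        }
        return out;
    }
}
```

In the real connector, the counter would be registered on the operator's metric group so it is exported alongside the other Kafka connector metrics.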
--
This message was sent by Atlassian Jira
(v8.3.4#803005)