[ https://issues.apache.org/jira/browse/FLINK-18017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Flink Jira Bot updated FLINK-18017:
-----------------------------------
      Labels: auto-deprioritized-major auto-deprioritized-minor  (was: 
auto-deprioritized-major stale-minor)
    Priority: Not a Priority  (was: Minor)

This issue was labeled "stale-minor" 7 days ago and has not received any 
updates, so it is being deprioritized. If this ticket is actually Minor, please 
raise the priority and ask a committer to assign you the issue or revive the 
public discussion.


> have Kafka connector report metrics on null records 
> ----------------------------------------------------
>
>                 Key: FLINK-18017
>                 URL: https://issues.apache.org/jira/browse/FLINK-18017
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Kafka
>    Affects Versions: 1.9.1
>            Reporter: Yu Yang
>            Priority: Not a Priority
>              Labels: auto-deprioritized-major, auto-deprioritized-minor
>
> Corrupted messages can get into the message pipeline for various reasons.
> When a Flink deserializer fails to deserialize such a message and throws an
> exception, the Flink application is blocked until we update the deserializer
> to handle the corruption.
> AbstractFetcher.emitRecordsWithTimestamps skips null records. We should add
> a metric on the number of null records so that users can measure how many
> null records the Kafka connector encounters, and set up monitoring &
> alerting based on that.
> [https://github.com/apache/flink/blob/1cd696d92c3e088a5bd8e5e11b54aacf46e92ae8/flink-connectors/flink-connector-kafka-base/src/main/java/org/apache/flink/streaming/connectors/kafka/internals/AbstractFetcher.java#L350]
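A minimal sketch of what such a counter could look like, assuming it is
registered on the consumer's MetricGroup. The class and the metric name
"nullRecordsSkipped" are hypothetical, not existing Flink code; the sketch only
illustrates where the null check in emitRecordsWithTimestamps would increment
the counter:

    import org.apache.flink.metrics.Counter;
    import org.apache.flink.metrics.MetricGroup;

    // Hypothetical helper mirroring the null check in
    // AbstractFetcher#emitRecordsWithTimestamps.
    public class NullRecordCountingEmitter<T> {

        private final Counter nullRecordCounter;

        public NullRecordCountingEmitter(MetricGroup consumerMetricGroup) {
            // would surface e.g. as <operator>.KafkaConsumer.nullRecordsSkipped
            this.nullRecordCounter = consumerMetricGroup.counter("nullRecordsSkipped");
        }

        public void emitRecord(T record /* , output, partition state, timestamp */) {
            if (record == null) {
                // the record is still skipped, but users can now monitor and
                // alert on how often this happens
                nullRecordCounter.inc();
                return;
            }
            // ... existing emit logic ...
        }
    }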



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
