[ https://issues.apache.org/jira/browse/FLINK-24542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17431068#comment-17431068 ]

zlzhang0122 commented on FLINK-24542:
-------------------------------------

[~renqs] Uhh, yes, it is a derived metric rather than a standard Kafka metric. 
IMO Kafka only wants to track the key metrics about itself, while this metric 
cares more about the consumer side, e.g. the Kafka connector.
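
To make the idea concrete, here is a minimal, hypothetical sketch of how such a 
consumer-side freshness value could be derived. The FreshnessTracker class and the 
gauge name are illustrative only, not existing Flink or Kafka APIs; it assumes the 
producer/broker timestamp on each record is a usable event time, following the idea 
in the linked blog post:

{code:java}
import org.apache.kafka.clients.consumer.ConsumerRecord;

// Hypothetical helper, for illustration only: tracks how far behind
// wall-clock time the latest processed record's timestamp is.
public class FreshnessTracker {

    private volatile long freshnessMs = 0L;

    // Call after each record is handled; relies on the producer/broker
    // timestamp carried by the record.
    public void update(ConsumerRecord<?, ?> record) {
        freshnessMs = System.currentTimeMillis() - record.timestamp();
    }

    // Value a connector could expose as a gauge, e.g. "freshnessMs".
    public long getFreshnessMs() {
        return freshnessMs;
    }
}
{code}

Unlike offsetLag, such a value is expressed in milliseconds, so a user can directly 
judge how stale the consumed data is.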

> Expose the freshness metrics for kafka connector
> ------------------------------------------------
>
>                 Key: FLINK-24542
>                 URL: https://issues.apache.org/jira/browse/FLINK-24542
>             Project: Flink
>          Issue Type: Improvement
>          Components: Connectors / Kafka
>    Affects Versions: 1.12.2, 1.14.0, 1.13.1
>            Reporter: zlzhang0122
>            Priority: Major
>             Fix For: 1.15.0
>
>
> When we start a Flink job to consume from Apache Kafka, we usually look at the 
> offsetLag, which can be calculated as current-offsets minus committed-offsets. But 
> sometimes the offsetLag is hard to interpret: we can hardly judge whether the 
> value is normal or not. A new metric, freshness, has been proposed for Kafka 
> consumers (see 
> [a-guide-to-kafka-consumer-freshness|https://www.jesseyates.com/2019/11/04/kafka-consumer-freshness-a-guide.html?trk=article_share_wechat&from=timeline&isappinstalled=0]).
> So we could also expose a freshness metric for the Kafka connector to improve the 
> user experience. From this freshness metric, users can easily tell whether the 
> Kafka messages are backlogged and need to be dealt with.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
