[
https://issues.apache.org/jira/browse/FLINK-11912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796788#comment-16796788
]
Jiangjie Qin commented on FLINK-11912:
--------------------------------------
[~suez1224] Thanks for bringing this up. The current proposal (and the WIP
patch) still seems to have a metric leak. Even if the metric objects are
removed from the {{manualRegisteredMetricSet}}, the metrics remain registered
until {{MetricRegistry.unregister()}} is invoked. Unfortunately, the
{{MetricGroup}} interface does not expose a method to explicitly remove a
single metric, so at this point the only way to unregister the metrics is to
close the entire metric group.
So it seems we need to add a {{remove(String metricName)}} method to the
{{MetricGroup}} interface to ensure the metrics are properly unregistered.
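A minimal, self-contained sketch of what such a {{remove(String)}} method could look like. The {{MetricRegistry}}, {{MetricGroup}}, and {{Gauge}} types below are simplified stand-ins for illustration, not Flink's actual interfaces:

```java
import java.util.HashMap;
import java.util.Map;

interface Gauge<T> {
    T getValue();
}

// Stand-in registry illustrating the leak: a metric stays registered
// until unregister() is explicitly invoked.
class MetricRegistry {
    private final Map<String, Gauge<?>> metrics = new HashMap<>();

    void register(String name, Gauge<?> gauge) {
        metrics.put(name, gauge);
    }

    void unregister(String name) {
        metrics.remove(name);
    }

    boolean isRegistered(String name) {
        return metrics.containsKey(name);
    }
}

// Stand-in group with the proposed addition: remove(String) lets a caller
// unregister a single metric without closing the whole group.
class MetricGroup {
    private final MetricRegistry registry;

    MetricGroup(MetricRegistry registry) {
        this.registry = registry;
    }

    <T> Gauge<T> gauge(String name, Gauge<T> gauge) {
        registry.register(name, gauge);
        return gauge;
    }

    // The method proposed above.
    void remove(String metricName) {
        registry.unregister(metricName);
    }
}

public class RemoveMetricSketch {
    public static void main(String[] args) {
        MetricRegistry registry = new MetricRegistry();
        MetricGroup group = new MetricGroup(registry);

        group.gauge("records-lag", () -> 42L);
        System.out.println(registry.isRegistered("records-lag")); // true

        group.remove("records-lag"); // single metric removed, group stays open
        System.out.println(registry.isRegistered("records-lag")); // false
    }
}
```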
> Expose per partition Kafka lag metric in Flink Kafka connector
> --------------------------------------------------------------
>
> Key: FLINK-11912
> URL: https://issues.apache.org/jira/browse/FLINK-11912
> Project: Flink
> Issue Type: New Feature
> Components: Connectors / Kafka
> Affects Versions: 1.6.4, 1.7.2
> Reporter: Shuyi Chen
> Assignee: Shuyi Chen
> Priority: Major
>
> In production, it's important that we expose the per-partition Kafka lag
> metric so that users can diagnose which Kafka partition is lagging.
> However, although the per-partition Kafka lag metrics have been available in
> KafkaConsumer since 0.10.2, Flink was not able to register them properly
> because the metrics only become available after the consumer starts polling
> data from the partitions. I would suggest the following fix:
> 1) In KafkaConsumerThread.run(), allocate a manualRegisteredMetricSet.
> 2) In the fetch loop, as the KafkaConsumer discovers new partitions, add the
> MetricName of each partition metric we want to register to
> manualRegisteredMetricSet.
> 3) In the fetch loop, check whether manualRegisteredMetricSet is empty. If
> not, search for the corresponding metrics in the KafkaConsumer and, for each
> one found, register it and remove its entry from manualRegisteredMetricSet.
> The overhead of the above approach is bounded and is only incurred when
> discovering new partitions, and registration is done once the KafkaConsumer
> has the metrics exposed.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)