[ https://issues.apache.org/jira/browse/KAFKA-13936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17544647#comment-17544647 ]

Matthias J. Sax commented on KAFKA-13936:
-----------------------------------------

As mentioned above, offsets are committed every 30 seconds by default, so the 
difference between the committed offset and the end offset (as reported by the 
Kafka CLI tools, and presumably by provectus) is expected to be larger than the 
metric reported directly by the consumer, which reflects the difference between 
its current (not yet committed) position and the end offset.

If you reduce the commit interval (not necessarily recommended), the gap should 
shrink and the CLI should report a smaller number.

Overall, it seems you are reporting expected behavior, and I don't see a bug. Of 
course, we _could_ add a new consumer metric that reports the difference between 
the committed offset and the end offset, but what would we gain? In the end, the 
consumer-reported metric is more accurate than what the CLI reports.

Maybe you can explain why it is a problem that the numbers are not the same?
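To make the difference concrete, here is a minimal sketch of the two lag 
computations. All offset numbers below are made-up values for illustration; the 
only facts carried over from the discussion are that the consumer metric uses 
its current position while the CLI uses the last committed offset, which can be 
up to the 30-second default commit interval stale:

```java
// Sketch: why the consumer's lag metric and the CLI-reported lag differ.
// All offset values are hypothetical, for illustration only.
public class LagSketch {

    // Lag as the consumer metric sees it: end offset minus the consumer's
    // current in-memory position (advanced on every poll).
    static long lagFromPosition(long endOffset, long position) {
        return endOffset - position;
    }

    // Lag as the CLI / UI sees it: end offset minus the last committed
    // offset, which may lag the position by up to one commit interval.
    static long lagFromCommitted(long endOffset, long committedOffset) {
        return endOffset - committedOffset;
    }

    public static void main(String[] args) {
        long endOffset = 1_000L; // latest offset in the partition
        long position  = 995L;   // consumer has already fetched up to here
        long committed = 950L;   // last commit happened some time ago

        System.out.println("metric-style lag: " + lagFromPosition(endOffset, position));   // 5
        System.out.println("CLI-style lag:    " + lagFromCommitted(endOffset, committed)); // 50
    }
}
```

Both numbers converge once a commit happens; between commits, the CLI-style 
number is always at least as large as the metric-style number.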

> Invalid consumer lag when monitoring from a kafka streams application
> ---------------------------------------------------------------------
>
>                 Key: KAFKA-13936
>                 URL: https://issues.apache.org/jira/browse/KAFKA-13936
>             Project: Kafka
>          Issue Type: Bug
>          Components: streams
>            Reporter: Prashanth Joseph Babu
>            Priority: Major
>
> I have a kafka streams application and I'm trying to monitor the consumer lag 
> via stream metrics.
> Here's some code snippet
> {code:java}
> Map<MetricName, ? extends Metric> metrics = streams.metrics();
> float lag = 0;
> for (Metric m : metrics.values()) {
>     Map<String, String> tags = m.metricName().tags();
>     if (m.metricName().name().equals(MONITOR_CONSUMER_LAG)
>             && tags.containsKey(MONITOR_TAG_TOPIC)
>             && tags.get(MONITOR_TAG_TOPIC).equals(inputTopic)) {
>         float partitionLag = Float.parseFloat(m.metricValue().toString());
>         // skip partitions that have not reported a value yet
>         if (!Float.isNaN(partitionLag)) {
>             lag += partitionLag;
>         }
>     }
> }
> {code}
> Here MONITOR_CONSUMER_LAG is {{records-lag-max}}.
> However, these numbers don't match the consumer lag we see in the Kafka UI. 
> Is records-lag-max the right metric to track for a Kafka Streams application 
> when the objective is to get consumer lag?



--
This message was sent by Atlassian Jira
(v8.20.7#820007)
