[ https://issues.apache.org/jira/browse/KAFKA-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16811215#comment-16811215 ]

Guozhang Wang commented on KAFKA-8194:
--------------------------------------

AFAIK, records may have non-incremental offsets only when transactional 
messaging or log compaction is enabled. Note that in the former case the 
producer does not actually skip offsets; rather, consumers filter out internal 
messages such as transaction markers, which makes the records appear to have 
"holes".

With that, I think the behavior of numMessages above is still valid, because 
there are indeed records appended in between; these messages are used by the 
brokers themselves rather than being user-produced records, and hence are not 
returned to consumers either. If we want the metric to explicitly exclude 
those messages --- I can understand that from a monitoring point of view it 
may be confusing to users --- then I think we should file a KIP to discuss 
this public behavior change.
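To make the discrepancy concrete, here is a minimal sketch (plain Java, no Kafka dependency; the batch layout and all names are hypothetical, not Kafka's actual record format) of why a count derived from the offset delta over-reports user records when internal control records such as transaction commit markers occupy offsets:

```java
public class OffsetDeltaSketch {
    // Count implied by the offset range alone, as an offset-based metric
    // would compute it: every offset in the range counts as one message.
    static long countByOffsetDelta(long firstOffset, long lastOffset) {
        return lastOffset - firstOffset + 1;
    }

    // Count of records actually delivered to a consumer, with internal
    // control records (e.g. transaction markers) filtered out.
    static long countVisibleToConsumer(boolean[] isUserRecord) {
        long n = 0;
        for (boolean user : isUserRecord) {
            if (user) n++;
        }
        return n;
    }

    public static void main(String[] args) {
        // Offsets 100..103: three user records followed by one commit
        // marker appended by the broker (true = user record).
        boolean[] batch = {true, true, true, false};
        System.out.println(countByOffsetDelta(100, 103));   // 4
        System.out.println(countVisibleToConsumer(batch));  // 3
    }
}
```

The two counts differ by exactly the number of control records in the range, which matches the "holes" consumers observe between offsets.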

> MessagesInPerSec incorrect value when Stream produce messages
> -------------------------------------------------------------
>
>                 Key: KAFKA-8194
>                 URL: https://issues.apache.org/jira/browse/KAFKA-8194
>             Project: Kafka
>          Issue Type: Bug
>          Components: metrics
>    Affects Versions: 1.1.0, 2.2.0
>            Reporter: Odyldzhon Toshbekov
>            Priority: Trivial
>         Attachments: Screen Shot 2019-04-05 at 17.51.03.png, Screen Shot 
> 2019-04-05 at 17.52.22.png
>
>
> Looks like the metric
> {code:java}
> kafka.server:type=BrokerTopicMetrics,name=MessagesInPerSec{code}
> has an incorrect value when messages come via the Kafka Streams API.
> I noticed that the offset for every message from Kafka Streams can be 
> increased by 1, 2, ... However, if messages come to the broker from a Kafka 
> producer, it is always incremented by 1.
> Unfortunately, the metric mentioned above is calculated based on offset 
> changes, and as a result we cannot use Streams because the metric will 
> always be incorrect.
> For Kafka 2.2.0
> !Screen Shot 2019-04-05 at 17.51.03.png|width=100%!
>  
> [https://github.com/apache/kafka/blob/2.2.0/core/src/main/scala/kafka/server/ReplicaManager.scala]
> And this is the method used to compute "numAppendedMessages":
>  !Screen Shot 2019-04-05 at 17.52.22.png|width=100%!
> https://github.com/apache/kafka/blob/2.2.0/core/src/main/scala/kafka/log/Log.scala



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)