hachikuji commented on a change in pull request #4204:
URL: https://github.com/apache/kafka/pull/4204#discussion_r418702601



##########
File path: core/src/main/scala/kafka/server/ReplicaManager.scala
##########
@@ -1035,11 +1035,13 @@ class ReplicaManager(val config: KafkaConfig,
       val partitionFetchSize = fetchInfo.maxBytes
       val followerLogStartOffset = fetchInfo.logStartOffset
 
-      brokerTopicStats.topicStats(tp.topic).totalFetchRequestRate.mark()
-      brokerTopicStats.allTopicsStats.totalFetchRequestRate.mark()
-
       val adjustedMaxBytes = math.min(fetchInfo.maxBytes, limitBytes)
       try {
+        brokerTopicStats.allTopicsStats.totalFetchRequestRate.mark()
+        if (allPartitions.contains(tp)) {

Review comment:
       I think the problem here is that the metric is created on demand. We 
need to tie it to partition lifecycles more closely. My thought is to create 
the topic metric whenever we receive a LeaderAndIsr request, so that creation 
and deletion are both protected by `replicaStateChangeLock`. We can then 
simply ignore updates to the metric if it doesn't exist, rather than letting 
it be recreated.
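
A minimal sketch of what I mean (class and method names here are hypothetical, not the actual `BrokerTopicStats` API; the plain `Object` lock stands in for `replicaStateChangeLock`): metrics are created/removed only under the lock on the state-change path, and the hot fetch path never creates anything.

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.LongAdder

// Hypothetical sketch: topic metrics are created and removed only while
// holding the state-change lock (as when handling LeaderAndIsr/StopReplica),
// and the fetch path updates a metric only if it already exists, instead of
// creating it on demand.
class TopicMetrics {
  private val fetchRates = new ConcurrentHashMap[String, LongAdder]()
  private val stateChangeLock = new Object  // stand-in for replicaStateChangeLock

  // Called while processing a LeaderAndIsr request.
  def onBecomeLeaderOrFollower(topic: String): Unit = stateChangeLock.synchronized {
    fetchRates.computeIfAbsent(topic, _ => new LongAdder)
  }

  // Called while processing topic deletion.
  def onTopicDeleted(topic: String): Unit = stateChangeLock.synchronized {
    fetchRates.remove(topic)
  }

  // Hot path: silently drop the update if the metric is gone, so a late
  // fetch cannot resurrect a deleted topic's metric.
  def markFetchRequest(topic: String): Unit = {
    val rate = fetchRates.get(topic)
    if (rate != null) rate.increment()
  }

  def fetchCount(topic: String): Option[Long] =
    Option(fetchRates.get(topic)).map(_.sum)
}
```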




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

