kfaraz commented on code in PR #16873:
URL: https://github.com/apache/druid/pull/16873#discussion_r1710859680


##########
docs/operations/metrics.md:
##########
@@ -382,6 +382,9 @@ These metrics are emitted by the Druid Coordinator in every run of the correspon
 |`metadatacache/finalizedSchemaPayload/count`|Number of finalized segment schema cached.||Depends on the number of distinct schema in the cluster.|
 |`metadatacache/temporaryMetadataQueryResults/count`|Number of segments for which schema was fetched by executing segment metadata query.||Eventually it should be 0.|
 |`metadatacache/temporaryPublishedMetadataQueryResults/count`|Number of segments for which schema is cached after back filling in the database.||This value gets reset after each database poll. Eventually it should be 0.|
+|`metadatacache/cold/segment/count`|Number of cold segments.|`dataSource`||
+|`metadatacache/cold/refresh/count`|Number of cold segments with cached schema.|`dataSource`||

Review Comment:
   "cold segment" is not a term that exists anywhere outside of 
`CoordinatorSegmentMetadataCache`.
   It would be better to use a term that is more easily relatable.
   
   Also, the metadatacache metrics seem all over the place with their prefixes. It is unavoidable that metrics across multiple features won't share a consistent naming scheme, but metrics within the same feature should be consistent. Since this feature is still nascent, I would advise that we revisit all the metric names for this feature and categorize them nicely under proper prefixes. (not necessarily in this PR)
   
   cc: @cryptoe 
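
   For illustration only, here is one way such a grouping could look. The prefix `deepStorageOnly` and both metric names below are invented for this sketch (they are not proposed in the PR), and they assume "cold" refers to segments available only from deep storage:

   ```markdown
   |Metric|Description|Dimensions|Normal value|
   |------|-----------|----------|------------|
   |`metadatacache/deepStorageOnly/segment/count`|Number of segments available only in deep storage.|`dataSource`||
   |`metadatacache/deepStorageOnly/refresh/count`|Number of deep-storage-only segments with cached schema.|`dataSource`||
   ```

   The point is just that a shared, self-explanatory prefix makes the family of metrics discoverable from the name alone.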



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

