kfaraz commented on code in PR #14661:
URL: https://github.com/apache/druid/pull/14661#discussion_r1274616530


##########
docs/operations/metrics.md:
##########
@@ -217,28 +217,28 @@ batch ingestion emit the following metrics. These metrics 
are deltas for each em
 |`ingest/events/duplicate`|Number of events rejected because the events are 
duplicated.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|0|
 |`ingest/input/bytes`|Number of bytes read from input sources, after 
decompression but prior to parsing. This covers all data read, including data 
that does not end up being fully processed and ingested. For example, this 
includes data that ends up being rejected for being unparseable or filtered 
out.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on the 
amount of data read.|
 |`ingest/rows/output`|Number of Druid rows persisted.|`dataSource`, `taskId`, 
`taskType`, `groupId`|Your number of events with rollup.|
-|`ingest/persists/count`|Number of times persist occurred.|`dataSource`, 
`taskId`, `taskType`, `groupId`, `tags`|Depends on configuration.|
-|`ingest/persists/time`|Milliseconds spent doing intermediate 
persist.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on 
configuration. Generally a few minutes at most.|
-|`ingest/persists/cpu`|Cpu time in Nanoseconds spent on doing intermediate 
persist.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on 
configuration. Generally a few minutes at most.|
+|`ingest/persists/count`|Number of times persist occurred.|`dataSource`, 
`taskId`, `taskType`, `groupId`, `tags`|Depends on the configuration.|
+|`ingest/persists/time`|Milliseconds spent doing intermediate 
persist.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on the 
configuration. Generally a few minutes at most.|
+|`ingest/persists/cpu`|CPU time in Nanoseconds spent on doing intermediate 
persist.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on the 
configuration. Generally a few minutes at most.|
 |`ingest/persists/backPressure`|Milliseconds spent creating persist tasks and 
blocking waiting for them to finish.|`dataSource`, `taskId`, `taskType`, 
`groupId`, `tags`|0 or very low|
 |`ingest/persists/failed`|Number of persists that failed.|`dataSource`, 
`taskId`, `taskType`, `groupId`, `tags`|0|
 |`ingest/handoff/failed`|Number of handoffs that failed.|`dataSource`, 
`taskId`, `taskType`, `groupId`,`tags`|0|
-|`ingest/merge/time`|Milliseconds spent merging intermediate 
segments.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on 
configuration. Generally a few minutes at most.|
-|`ingest/merge/cpu`|Cpu time in Nanoseconds spent on merging intermediate 
segments.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on 
configuration. Generally a few minutes at most.|
+|`ingest/merge/time`|Milliseconds spent merging intermediate 
segments.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on the 
configuration. Generally a few minutes at most.|
+|`ingest/merge/cpu`|CPU time in Nanoseconds spent on merging intermediate 
segments.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on the 
configuration. Generally a few minutes at most.|
 |`ingest/handoff/count`|Number of handoffs that happened.|`dataSource`, 
`taskId`, `taskType`, `groupId`, `tags`|Varies. Generally greater than 0 once 
every segment granular period if cluster operating normally.|
-|`ingest/sink/count`|Number of sinks not handoffed.|`dataSource`, `taskId`, 
`taskType`, `groupId`, `tags`|1~3|
-|`ingest/events/messageGap`|Time gap in milliseconds between the latest 
ingested event timestamp and the current system timestamp of metrics emission. 
If the value is increasing but lag is low, Druid may not be receiving new data. 
This metric is reset as new tasks spawn up.|`dataSource`, `taskId`, `taskType`, 
`groupId`, `tags`|Greater than 0, depends on the time carried in event. |
+|`ingest/sink/count`|Number of sinks not handed off.|`dataSource`, `taskId`, 
`taskType`, `groupId`, `tags`|1~3|

Review Comment:
   Nice one 😂 
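
For readers scanning the dimension columns in this hunk, an emitted ingestion metric event looks roughly like the sketch below. Only the metric name and the dimension keys (`dataSource`, `taskId`, `taskType`, `groupId`, `tags`) come from the table; the envelope fields (`feed`, `timestamp`, `service`, `host`) and all values are illustrative, not taken from this PR.

```json
{
  "feed": "metrics",
  "timestamp": "2023-07-26T10:15:30.000Z",
  "service": "druid/peon",
  "host": "localhost:8100",
  "metric": "ingest/persists/count",
  "value": 3,
  "dataSource": "wikipedia",
  "taskId": "index_kafka_wikipedia_abc123_0",
  "taskType": "index_kafka",
  "groupId": "index_kafka_wikipedia",
  "tags": {"team": "analytics"}
}
```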



##########
docs/operations/metrics.md:
##########
@@ -217,28 +217,28 @@ batch ingestion emit the following metrics. These metrics 
are deltas for each em
 |`ingest/events/duplicate`|Number of events rejected because the events are 
duplicated.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|0|
 |`ingest/input/bytes`|Number of bytes read from input sources, after 
decompression but prior to parsing. This covers all data read, including data 
that does not end up being fully processed and ingested. For example, this 
includes data that ends up being rejected for being unparseable or filtered 
out.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on the 
amount of data read.|
 |`ingest/rows/output`|Number of Druid rows persisted.|`dataSource`, `taskId`, 
`taskType`, `groupId`|Your number of events with rollup.|
-|`ingest/persists/count`|Number of times persist occurred.|`dataSource`, 
`taskId`, `taskType`, `groupId`, `tags`|Depends on configuration.|
-|`ingest/persists/time`|Milliseconds spent doing intermediate 
persist.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on 
configuration. Generally a few minutes at most.|
-|`ingest/persists/cpu`|Cpu time in Nanoseconds spent on doing intermediate 
persist.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on 
configuration. Generally a few minutes at most.|
+|`ingest/persists/count`|Number of times persist occurred.|`dataSource`, 
`taskId`, `taskType`, `groupId`, `tags`|Depends on the configuration.|
+|`ingest/persists/time`|Milliseconds spent doing intermediate 
persist.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on the 
configuration. Generally a few minutes at most.|
+|`ingest/persists/cpu`|CPU time in Nanoseconds spent on doing intermediate 
persist.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on the 
configuration. Generally a few minutes at most.|

Review Comment:
   ```suggestion
   |`ingest/persists/cpu`|CPU time in nanoseconds spent on doing intermediate 
persist.|`dataSource`, `taskId`, `taskType`, `groupId`, `tags`|Depends on the 
configuration. Generally a few minutes at most.|
   ```



##########
docs/operations/metrics.md:
##########
@@ -303,14 +303,14 @@ These metrics are for the Druid Coordinator and are reset 
each time the Coordina
 |`segment/size`|Total size of used segments in a data source. Emitted only for 
data sources to which at least one used segment belongs.|`dataSource`|Varies|
 |`segment/count`|Number of used segments belonging to a data source. Emitted 
only for data sources to which at least one used segment 
belongs.|`dataSource`|< max|
 |`segment/overShadowed/count`|Number of segments marked as unused due to being 
overshadowed.| |Varies|
-|`segment/unavailable/count`|Number of segments (not including replicas) left 
to load until segments that should be loaded in the cluster are available for 
queries.|`dataSource`|0|
-|`segment/underReplicated/count`|Number of segments (including replicas) left 
to load until segments that should be loaded in the cluster are available for 
queries.|`tier`, `dataSource`|0|
+|`segment/unavailable/count`|Number of segments, not including replicas, left 
to load until segments that should be loaded in the cluster are available for 
queries.|`dataSource`|0|
+|`segment/underReplicated/count`|Number of segments, including replicas, left 
to load until segments that should be loaded in the cluster are available for 
queries.|`tier`, `dataSource`|0|
 |`tier/historical/count`|Number of available historical nodes in each 
tier.|`tier`|Varies|
 |`tier/replication/factor`|Configured maximum replication factor in each 
tier.|`tier`|Varies|
 |`tier/required/capacity`|Total capacity in bytes required in each 
tier.|`tier`|Varies|
 |`tier/total/capacity`|Total capacity in bytes available in each 
tier.|`tier`|Varies|
 |`compact/task/count`|Number of tasks issued in the auto compaction run.| 
|Varies|
-|`compactTask/maxSlot/count`|Max number of task slots that can be used for 
auto compaction tasks in the auto compaction run.| |Varies|
+|`compactTask/maxSlot/count`|Max number of task slots available for auto 
compaction tasks in the auto compaction run.| |Varies|

Review Comment:
   ```suggestion
   |`compactTask/maxSlot/count`|Maximum number of task slots available for auto 
compaction tasks in the auto compaction run.| |Varies|
   ```



##########
docs/operations/metrics.md:
##########
@@ -375,9 +375,9 @@ For more information, see [Enabling 
Metrics](../configuration/index.md#enabling-
 
 ### ZooKeeper
 
-These metrics are available unless `druid.zk.service.enabled = false`.
+These metrics are available when `druid.zk.service.enabled = true`.

Review Comment:
   ```suggestion
   These metrics are available only when `druid.zk.service.enabled = true`.
   ```
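
As a point of reference for the toggle discussed here, a minimal sketch of the flag in the common runtime properties, assuming the property name and its default (`true`) are unchanged:

```properties
# common.runtime.properties (illustrative)
# ZooKeeper-related metrics are emitted only while this is true.
druid.zk.service.enabled=true
```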



##########
docs/operations/metrics.md:
##########
@@ -321,28 +321,28 @@ These metrics are for the Druid Coordinator and are reset 
each time the Coordina
 |`segment/skipCompact/bytes`|Total bytes of this datasource that are skipped 
(not eligible for auto compaction) by the auto compaction.|`dataSource`|Varies|
 |`segment/skipCompact/count`|Total number of segments of this datasource that 
are skipped (not eligible for auto compaction) by the auto 
compaction.|`dataSource`|Varies|
 |`interval/skipCompact/count`|Total number of intervals of this datasource 
that are skipped (not eligible for auto compaction) by the auto 
compaction.|`dataSource`|Varies|
-|`coordinator/time`|Approximate Coordinator duty runtime in milliseconds. The 
duty dimension is the string alias of the Duty that is being run.|`duty`|Varies|
-|`coordinator/global/time`|Approximate runtime of a full coordination cycle in 
milliseconds. The `dutyGroup` dimension indicates what type of coordination 
this run was. i.e. Historical Management vs Indexing|`dutyGroup`|Varies|
-|`metadata/kill/supervisor/count`|Total number of terminated supervisors that 
were automatically deleted from metadata store per each Coordinator kill 
supervisor duty run. This metric can help adjust 
`druid.coordinator.kill.supervisor.durationToRetain` configuration based on 
whether more or less terminated supervisors need to be deleted per cycle. Note 
that this metric is only emitted when `druid.coordinator.kill.supervisor.on` is 
set to true.| |Varies|
-|`metadata/kill/audit/count`|Total number of audit logs that were 
automatically deleted from metadata store per each Coordinator kill audit duty 
run. This metric can help adjust 
`druid.coordinator.kill.audit.durationToRetain` configuration based on whether 
more or less audit logs need to be deleted per cycle. Note that this metric is 
only emitted when `druid.coordinator.kill.audit.on` is set to true.| |Varies|
-|`metadata/kill/compaction/count`|Total number of compaction configurations 
that were automatically deleted from metadata store per each Coordinator kill 
compaction configuration duty run. Note that this metric is only emitted when 
`druid.coordinator.kill.compaction.on` is set to true.| |Varies|
-|`metadata/kill/rule/count`|Total number of rules that were automatically 
deleted from metadata store per each Coordinator kill rule duty run. This 
metric can help adjust `druid.coordinator.kill.rule.durationToRetain` 
configuration based on whether more or less rules need to be deleted per cycle. 
Note that this metric is only emitted when `druid.coordinator.kill.rule.on` is 
set to true.| |Varies|
-|`metadata/kill/datasource/count`|Total number of datasource metadata that 
were automatically deleted from metadata store per each Coordinator kill 
datasource duty run (Note: datasource metadata only exists for datasource 
created from supervisor). This metric can help adjust 
`druid.coordinator.kill.datasource.durationToRetain` configuration based on 
whether more or less datasource metadata need to be deleted per cycle. Note 
that this metric is only emitted when `druid.coordinator.kill.datasource.on` is 
set to true.| |Varies|
-|`init/serverview/time`|Time taken to initialize the coordinator server 
view.||Depends on the number of segments|
-|`segment/serverview/sync/healthy`|Sync status of the Coordinator with a 
segment-loading server such as a Historical or Peon. Emitted only when 
[HTTP-based server view](../configuration/index.md#segment-management) is 
enabled. This metric can be used in conjunction with 
`segment/serverview/sync/unstableTime` to debug slow startup of the 
Coordinator.|`server`, `tier`|1 for fully synced servers, 0 otherwise|
+|`coordinator/time`|Approximate Coordinator duty runtime in milliseconds. The 
duty dimension is the string alias of the duty that is running.|`duty`|Varies|
+|`coordinator/global/time`|Approximate runtime of a full coordination cycle in 
milliseconds. The `dutyGroup` dimension indicates what type of coordination 
this run was. For example: Historical Management or 
Indexing.|`dutyGroup`|Varies|
+|`metadata/kill/supervisor/count`|Total number of terminated supervisors that 
were automatically deleted from metadata store per each Coordinator kill 
supervisor duty run. This metric can help adjust 
`druid.coordinator.kill.supervisor.durationToRetain` configuration based on 
whether more or less terminated supervisors need to be deleted per cycle. This 
metric is only emitted when `druid.coordinator.kill.supervisor.on` is set to 
true.| |Varies|
+|`metadata/kill/audit/count`|Total number of audit logs that were 
automatically deleted from metadata store per each Coordinator kill audit duty 
run. This metric can help adjust 
`druid.coordinator.kill.audit.durationToRetain` configuration based on whether 
more or less audit logs need to be deleted per cycle. This metric is only 
emitted when `druid.coordinator.kill.audit.on` is set to true.| |Varies|

Review Comment:
   ```suggestion
   |`metadata/kill/audit/count`|Total number of audit logs that were 
automatically deleted from metadata store per each Coordinator kill audit duty 
run. This metric can help adjust 
`druid.coordinator.kill.audit.durationToRetain` configuration based on whether 
more or less audit logs need to be deleted per cycle. This metric is emitted 
only when `druid.coordinator.kill.audit.on` is set to true.| |Varies|
   ```
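
Since several rows in this hunk reference the `druid.coordinator.kill.*` properties, a minimal sketch of how the audit kill duty might be enabled and tuned; the `P30D` retention value is an illustrative ISO-8601 period, not a recommendation:

```properties
# common.runtime.properties (illustrative values)
# Enables emission of metadata/kill/audit/count and controls retention.
druid.coordinator.kill.audit.on=true
druid.coordinator.kill.audit.durationToRetain=P30D
```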



##########
docs/operations/metrics.md:
##########
@@ -303,14 +303,14 @@ These metrics are for the Druid Coordinator and are reset 
each time the Coordina
 |`segment/size`|Total size of used segments in a data source. Emitted only for 
data sources to which at least one used segment belongs.|`dataSource`|Varies|
 |`segment/count`|Number of used segments belonging to a data source. Emitted 
only for data sources to which at least one used segment 
belongs.|`dataSource`|< max|
 |`segment/overShadowed/count`|Number of segments marked as unused due to being 
overshadowed.| |Varies|
-|`segment/unavailable/count`|Number of segments (not including replicas) left 
to load until segments that should be loaded in the cluster are available for 
queries.|`dataSource`|0|
-|`segment/underReplicated/count`|Number of segments (including replicas) left 
to load until segments that should be loaded in the cluster are available for 
queries.|`tier`, `dataSource`|0|
+|`segment/unavailable/count`|Number of segments, not including replicas, left 
to load until segments that should be loaded in the cluster are available for 
queries.|`dataSource`|0|

Review Comment:
   ```suggestion
   |`segment/unavailable/count`|Number of unique segments left to load until 
all used segments are available for queries.|`dataSource`|0|
   ```



##########
docs/operations/metrics.md:
##########
@@ -321,28 +321,28 @@ These metrics are for the Druid Coordinator and are reset 
each time the Coordina
 |`segment/skipCompact/bytes`|Total bytes of this datasource that are skipped 
(not eligible for auto compaction) by the auto compaction.|`dataSource`|Varies|
 |`segment/skipCompact/count`|Total number of segments of this datasource that 
are skipped (not eligible for auto compaction) by the auto 
compaction.|`dataSource`|Varies|
 |`interval/skipCompact/count`|Total number of intervals of this datasource 
that are skipped (not eligible for auto compaction) by the auto 
compaction.|`dataSource`|Varies|
-|`coordinator/time`|Approximate Coordinator duty runtime in milliseconds. The 
duty dimension is the string alias of the Duty that is being run.|`duty`|Varies|
-|`coordinator/global/time`|Approximate runtime of a full coordination cycle in 
milliseconds. The `dutyGroup` dimension indicates what type of coordination 
this run was. i.e. Historical Management vs Indexing|`dutyGroup`|Varies|
-|`metadata/kill/supervisor/count`|Total number of terminated supervisors that 
were automatically deleted from metadata store per each Coordinator kill 
supervisor duty run. This metric can help adjust 
`druid.coordinator.kill.supervisor.durationToRetain` configuration based on 
whether more or less terminated supervisors need to be deleted per cycle. Note 
that this metric is only emitted when `druid.coordinator.kill.supervisor.on` is 
set to true.| |Varies|
-|`metadata/kill/audit/count`|Total number of audit logs that were 
automatically deleted from metadata store per each Coordinator kill audit duty 
run. This metric can help adjust 
`druid.coordinator.kill.audit.durationToRetain` configuration based on whether 
more or less audit logs need to be deleted per cycle. Note that this metric is 
only emitted when `druid.coordinator.kill.audit.on` is set to true.| |Varies|
-|`metadata/kill/compaction/count`|Total number of compaction configurations 
that were automatically deleted from metadata store per each Coordinator kill 
compaction configuration duty run. Note that this metric is only emitted when 
`druid.coordinator.kill.compaction.on` is set to true.| |Varies|
-|`metadata/kill/rule/count`|Total number of rules that were automatically 
deleted from metadata store per each Coordinator kill rule duty run. This 
metric can help adjust `druid.coordinator.kill.rule.durationToRetain` 
configuration based on whether more or less rules need to be deleted per cycle. 
Note that this metric is only emitted when `druid.coordinator.kill.rule.on` is 
set to true.| |Varies|
-|`metadata/kill/datasource/count`|Total number of datasource metadata that 
were automatically deleted from metadata store per each Coordinator kill 
datasource duty run (Note: datasource metadata only exists for datasource 
created from supervisor). This metric can help adjust 
`druid.coordinator.kill.datasource.durationToRetain` configuration based on 
whether more or less datasource metadata need to be deleted per cycle. Note 
that this metric is only emitted when `druid.coordinator.kill.datasource.on` is 
set to true.| |Varies|
-|`init/serverview/time`|Time taken to initialize the coordinator server 
view.||Depends on the number of segments|
-|`segment/serverview/sync/healthy`|Sync status of the Coordinator with a 
segment-loading server such as a Historical or Peon. Emitted only when 
[HTTP-based server view](../configuration/index.md#segment-management) is 
enabled. This metric can be used in conjunction with 
`segment/serverview/sync/unstableTime` to debug slow startup of the 
Coordinator.|`server`, `tier`|1 for fully synced servers, 0 otherwise|
+|`coordinator/time`|Approximate Coordinator duty runtime in milliseconds. The 
duty dimension is the string alias of the duty that is running.|`duty`|Varies|

Review Comment:
   Nit: The second sentence is probably not needed. Seems self-explanatory.
   ```suggestion
   |`coordinator/time`|Approximate Coordinator duty runtime in milliseconds. 
|`duty`|Varies|
   ```



##########
docs/operations/metrics.md:
##########
@@ -303,14 +303,14 @@ These metrics are for the Druid Coordinator and are reset 
each time the Coordina
 |`segment/size`|Total size of used segments in a data source. Emitted only for 
data sources to which at least one used segment belongs.|`dataSource`|Varies|
 |`segment/count`|Number of used segments belonging to a data source. Emitted 
only for data sources to which at least one used segment 
belongs.|`dataSource`|< max|
 |`segment/overShadowed/count`|Number of segments marked as unused due to being 
overshadowed.| |Varies|
-|`segment/unavailable/count`|Number of segments (not including replicas) left 
to load until segments that should be loaded in the cluster are available for 
queries.|`dataSource`|0|
-|`segment/underReplicated/count`|Number of segments (including replicas) left 
to load until segments that should be loaded in the cluster are available for 
queries.|`tier`, `dataSource`|0|
+|`segment/unavailable/count`|Number of segments, not including replicas, left 
to load until segments that should be loaded in the cluster are available for 
queries.|`dataSource`|0|
+|`segment/underReplicated/count`|Number of segments, including replicas, left 
to load until segments that should be loaded in the cluster are available for 
queries.|`tier`, `dataSource`|0|

Review Comment:
   ```suggestion
   |`segment/underReplicated/count`|Number of segments, including replicas, 
left to load until all used segments are available for queries.|`tier`, 
`dataSource`|0|
   ```


