kfaraz commented on code in PR #17919:
URL: https://github.com/apache/druid/pull/17919#discussion_r2044320081
##########
extensions-core/kafka-indexing-service/src/main/java/org/apache/druid/indexing/kafka/KafkaConsumerMonitor.java:
##########
@@ -60,18 +131,33 @@ public boolean doMonitor(final ServiceEmitter emitter)
   {
     for (final Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
       final MetricName metricName = entry.getKey();
+      final KafkaConsumerMetric kafkaConsumerMetric = METRICS.get(metricName.name());
+
+      if (kafkaConsumerMetric != null &&
+          kafkaConsumerMetric.getDimensions().equals(metricName.tags().keySet())) {
Review Comment:
Instead of requiring the two sets to be equal, should we just check for a containment relationship?
```java
kafkaConsumerMetric != null &&
metricName.tags().keySet().containsAll(kafkaConsumerMetric.getDimensions())
```
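To illustrate the difference (a standalone sketch, not part of the PR, using hypothetical tag values):

```java
import java.util.Set;

public class TagSetCheckDemo
{
  public static void main(String[] args)
  {
    // Dimensions declared for the metric (hypothetical values).
    final Set<String> dimensions = Set.of("client-id", "topic");

    // Tags actually reported by the consumer, with an extra "partition" tag
    // that a future Kafka client version might attach.
    final Set<String> reportedTags = Set.of("client-id", "topic", "partition");

    // equals() rejects the metric as soon as any extra tag appears.
    System.out.println(dimensions.equals(reportedTags));      // prints false

    // containsAll() only requires the declared dimensions to be present,
    // so the metric is still matched when extra tags show up.
    System.out.println(reportedTags.containsAll(dimensions)); // prints true
  }
}
```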
##########
extensions-core/kafka-indexing-service/src/main/java/org/apache/druid/indexing/kafka/KafkaConsumerMonitor.java:
##########
@@ -60,18 +131,33 @@ public boolean doMonitor(final ServiceEmitter emitter)
   {
     for (final Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
       final MetricName metricName = entry.getKey();
+      final KafkaConsumerMetric kafkaConsumerMetric = METRICS.get(metricName.name());
+
+      if (kafkaConsumerMetric != null &&
+          kafkaConsumerMetric.getDimensions().equals(metricName.tags().keySet())) {
Review Comment:
Please also add a comment on why we need to check for the dimensions. The
older code had this comment:
```java
// Certain metrics are emitted both as grand totals and broken down by topic; we want to ignore the grand total and
// only look at the per-topic metrics. See https://kafka.apache.org/documentation/#consumer_fetch_monitoring.
```
##########
extensions-core/kafka-indexing-service/src/main/java/org/apache/druid/indexing/kafka/KafkaConsumerMonitor.java:
##########
@@ -60,18 +131,33 @@ public boolean doMonitor(final ServiceEmitter emitter)
   {
     for (final Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
       final MetricName metricName = entry.getKey();
+      final KafkaConsumerMetric kafkaConsumerMetric = METRICS.get(metricName.name());
+
+      if (kafkaConsumerMetric != null &&
+          kafkaConsumerMetric.getDimensions().equals(metricName.tags().keySet())) {
+        final Number newValue = (Number) entry.getValue().metricValue();
+        final Number emitValue;
-      if (METRICS.containsKey(metricName.name()) && isTopicMetric(metricName)) {
-        final String topic = metricName.tags().get(TOPIC_TAG);
-        final long newValue = ((Number) entry.getValue().metricValue()).longValue();
-        final long priorValue =
-            counters.computeIfAbsent(metricName.name(), ignored -> new AtomicLong())
-                    .getAndSet(newValue);
+        if (kafkaConsumerMetric.getMetricType() == KafkaConsumerMetric.MetricType.GAUGE || newValue == null) {
+          emitValue = newValue;
+        } else if (kafkaConsumerMetric.getMetricType() == KafkaConsumerMetric.MetricType.COUNTER) {
+          final long newValueAsLong = newValue.longValue();
+          final long priorValue =
+              counters.computeIfAbsent(metricName, ignored -> new AtomicLong())
+                      .getAndSet(newValueAsLong);
+          emitValue = newValueAsLong - priorValue;
+        } else {
+          throw DruidException.defensive("Unexpected metric type[%s]", kafkaConsumerMetric.getMetricType());
Review Comment:
Maybe we should also log an error before throwing the exception, because I am not sure where this exception would be caught.
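A rough sketch of the log-then-throw pattern, using JDK logging and a generic exception in place of Druid's `Logger` and `DruidException` (all names here are illustrative):

```java
import java.util.logging.Logger;

public class MetricTypeDispatch
{
  private static final Logger log = Logger.getLogger(MetricTypeDispatch.class.getName());

  enum MetricType { COUNTER, GAUGE }

  static long computeEmitValue(MetricType type, long newValue, long priorValue)
  {
    if (type == MetricType.GAUGE) {
      // Gauges emit the observed value directly.
      return newValue;
    } else if (type == MetricType.COUNTER) {
      // Counters emit the delta since the previous observation.
      return newValue - priorValue;
    } else {
      // Log before throwing so the failure is visible in the service logs
      // even if a caller swallows the exception.
      final String message = String.format("Unexpected metric type[%s]", type);
      log.severe(message);
      throw new IllegalStateException(message);
    }
  }
}
```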
##########
extensions-core/kafka-indexing-service/src/main/java/org/apache/druid/indexing/kafka/KafkaConsumerMonitor.java:
##########
@@ -33,22 +31,95 @@
 import java.util.Map;
 import java.util.Set;
 import java.util.concurrent.atomic.AtomicLong;
+import java.util.function.Function;
+import java.util.stream.Collectors;
+import java.util.stream.Stream;

 public class KafkaConsumerMonitor extends AbstractMonitor
 {
   private volatile boolean stopAfterNext = false;

-  // Kafka metric name -> Druid metric name
-  private static final Map<String, String> METRICS =
-      ImmutableMap.<String, String>builder()
-                  .put("bytes-consumed-total", "kafka/consumer/bytesConsumed")
-                  .put("records-consumed-total", "kafka/consumer/recordsConsumed")
-                  .build();
-  private static final String TOPIC_TAG = "topic";
-  private static final Set<String> TOPIC_METRIC_TAGS = ImmutableSet.of("client-id", TOPIC_TAG);
+  private static final String CLIENT_ID_TAG = "client-id";
+
+  // Kafka metric name -> Kafka metric descriptor
+  private static final Map<String, KafkaConsumerMetric> METRICS =
+      Stream.of(
+          new KafkaConsumerMetric(
+              "bytes-consumed-total",
+              "kafka/consumer/bytesConsumed",
+              Set.of(CLIENT_ID_TAG, "topic"),
+              KafkaConsumerMetric.MetricType.COUNTER
+          ),
+          new KafkaConsumerMetric(
+              "records-consumed-total",
+              "kafka/consumer/recordsConsumed",
+              Set.of(CLIENT_ID_TAG, "topic"),
+              KafkaConsumerMetric.MetricType.COUNTER
+          ),
+          new KafkaConsumerMetric(
+              "fetch-total",
+              "kafka/consumer/fetch",
+              Set.of(CLIENT_ID_TAG),
+              KafkaConsumerMetric.MetricType.COUNTER
+          ),
+          new KafkaConsumerMetric(
+              "fetch-rate",
+              "kafka/consumer/fetchRate",
+              Set.of(CLIENT_ID_TAG),
+              KafkaConsumerMetric.MetricType.GAUGE
+          ),
+          new KafkaConsumerMetric(
+              "fetch-latency-avg",
+              "kafka/consumer/fetchLatencyAvg",
+              Set.of(CLIENT_ID_TAG),
+              KafkaConsumerMetric.MetricType.GAUGE
+          ),
+          new KafkaConsumerMetric(
+              "fetch-latency-max",
+              "kafka/consumer/fetchLatencyMax",
+              Set.of(CLIENT_ID_TAG),
+              KafkaConsumerMetric.MetricType.GAUGE
+          ),
+          new KafkaConsumerMetric(
+              "fetch-size-avg",
+              "kafka/consumer/fetchSizeAvg",
+              Set.of(CLIENT_ID_TAG, "topic"),
+              KafkaConsumerMetric.MetricType.GAUGE
+          ),
+          new KafkaConsumerMetric(
+              "fetch-size-max",
+              "kafka/consumer/fetchSizeMax",
+              Set.of(CLIENT_ID_TAG, "topic"),
+              KafkaConsumerMetric.MetricType.GAUGE
+          ),
+          new KafkaConsumerMetric(
+              "records-lag",
+              "kafka/consumer/recordsLag",
+              Set.of(CLIENT_ID_TAG, "topic", "partition"),
+              KafkaConsumerMetric.MetricType.GAUGE
+          ),
+          new KafkaConsumerMetric(
+              "records-per-request-avg",
+              "kafka/consumer/recordsPerRequestAvg",
+              Set.of(CLIENT_ID_TAG, "topic"),
+              KafkaConsumerMetric.MetricType.GAUGE
+          ),
+          new KafkaConsumerMetric(
+              "outgoing-byte-total",
+              "kafka/consumer/outgoingBytes",
+              Set.of(CLIENT_ID_TAG, "node-id"),
Review Comment:
We should also add constants for "topic", "partition" and "node-id".
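Something along these lines, mirroring the existing `CLIENT_ID_TAG` convention (a sketch; the class wrapper here is only for illustration):

```java
public class KafkaTagConstants
{
  // Sketch: tag-name constants matching the existing CLIENT_ID_TAG style,
  // so the strings are not repeated throughout the METRICS map.
  public static final String TOPIC_TAG = "topic";
  public static final String PARTITION_TAG = "partition";
  public static final String NODE_ID_TAG = "node-id";
}
```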
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]