ndimiduk commented on code in PR #6868:
URL: https://github.com/apache/hbase/pull/6868#discussion_r2095413443


##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ScanMetrics.java:
##########
@@ -51,53 +51,118 @@ public class ScanMetrics extends ServerSideScanMetrics {
   /**
    * number of RPC calls
    */
-  public final AtomicLong countOfRPCcalls = createCounter(RPC_CALLS_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")

Review Comment:
   Don't suppress these checkstyle warnings. These public fields are already ugly; making them non-final makes them worse.



##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ScanMetrics.java:
##########
@@ -51,53 +51,118 @@ public class ScanMetrics extends ServerSideScanMetrics {
   /**
    * number of RPC calls
    */
-  public final AtomicLong countOfRPCcalls = createCounter(RPC_CALLS_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")

Review Comment:
   So previously it was "reasonable" for a consumer to grab the handle to this atomic counter from any thread and then have confidence that the handle would remain valid for the lifetime of the program. By making the attributes non-final and reassigning them, you break a concurrency pattern that callers could rely on. I approve of the approach of encapsulating these fields behind accessors, but this change is not thorough enough to force callers to adopt a different style, so I fear that it will lead to subtle bugs.
   
   In order to preserve semantic compatibility, I think that you need to retain the `final` modifier on these fields. But then you're stuck without a way to reset all of these counters at once. You can either live with that and accept the possibility of inaccurate metric results, or you need to come up with a different approach. Maybe the casual use of these atomic fields is going to force you back to the "bolt-on" approach that you had originally? Or maybe you have some new idea? I'm really disappointed by the situation with these public fields.
   
   Since locking down these fields would be an API-breaking change, this class is annotated as `InterfaceAudience.Public`, and you want to get this feature into the existing release lines, I suggest that you tackle the API improvement in a separate PR.
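   
   To illustrate the pattern I have in mind, here is a minimal sketch of hypothetical caller code (not from this PR) that the old `final` fields made safe:
```java
import java.util.concurrent.atomic.AtomicLong;
import org.apache.hadoop.hbase.client.metrics.ScanMetrics;

// Hypothetical consumer code, not from this PR.
class MetricsWatcher {
  void watch(ScanMetrics metrics) {
    // Handle grabbed once, e.g. handed off to a monitoring thread.
    AtomicLong rpcCalls = metrics.countOfRPCcalls;
    // With a final field this always observed the live counter; once the field
    // is reassigned in moveToNextRegion(), this cached handle silently goes stale.
    long observed = rpcCalls.get();
    System.out.println("rpc calls so far: " + observed);
  }
}
```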



##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ServerSideScanMetrics.java:
##########
@@ -55,66 +85,150 @@ protected AtomicLong createCounter(String counterName) {
   /**
    * number of rows filtered during scan RPC
    */
-  public final AtomicLong countOfRowsFiltered =
-    createCounter(COUNT_OF_ROWS_FILTERED_KEY_METRIC_NAME);
+  public AtomicLong countOfRowsFiltered;
 
   /**
   * number of rows scanned during scan RPC. Not every row scanned will be returned to the client
    * since rows may be filtered.
    */
-  public final AtomicLong countOfRowsScanned = createCounter(COUNT_OF_ROWS_SCANNED_KEY_METRIC_NAME);
+  public AtomicLong countOfRowsScanned;
 
-  public final AtomicLong countOfBlockBytesScanned =
-    createCounter(BLOCK_BYTES_SCANNED_KEY_METRIC_NAME);
+  public AtomicLong countOfBlockBytesScanned;
 
-  public final AtomicLong fsReadTime = createCounter(FS_READ_TIME_METRIC_NAME);
+  public AtomicLong fsReadTime;
 
+  /**
+   * Sets counter with counterName to passed in value, does nothing if counter does not exist. If
+   * region level scan metrics are enabled then sets the value of counter for the current region
+   * being scanned.
+   */
   public void setCounter(String counterName, long value) {
-    AtomicLong c = this.counters.get(counterName);
-    if (c != null) {
-      c.set(value);
-    }
+    currentScanMetricsHolder.setCounter(counterName, value);
   }
 
-  /** Returns true if a counter exists with the counterName */
+  /**
+   * Returns true if a counter exists with the counterName If region level scan metrics are enabled
+   * then checks if a counter exists with the counterName for the current region being scanned.
+   * Please see {@link #moveToNextRegion()}.
+   */
   public boolean hasCounter(String counterName) {
-    return this.counters.containsKey(counterName);
+    return currentScanMetricsHolder.hasCounter(counterName);
   }
 
-  /** Returns {@link AtomicLong} instance for this counter name, null if counter does not exist. */
+  /**
+   * Returns {@link AtomicLong} instance for this counter name, null if counter does not exist. If
+   * region level scan metrics are enabled then {@link AtomicLong} instance for current region being
+   * scanned is returned or null if counter does not exist. Please see {@link #moveToNextRegion()}.
+   */
   public AtomicLong getCounter(String counterName) {
-    return this.counters.get(counterName);
+    return currentScanMetricsHolder.getCounter(counterName);
   }
 
+  /**
+   * Increments the counter with counterName by delta, does nothing if counter does not exist. If
+   * region level scan metrics are enabled then increments the counter corresponding to the current
+   * region being scanned. Please see {@link #moveToNextRegion()}.
+   */
   public void addToCounter(String counterName, long delta) {
-    AtomicLong c = this.counters.get(counterName);
-    if (c != null) {
-      c.addAndGet(delta);
-    }
+    currentScanMetricsHolder.addToCounter(counterName, delta);
   }
 
   /**
-   * Get all of the values since the last time this function was called. Calling this function will
-   * reset all AtomicLongs in the instance back to 0.
-   * @return A Map of String -> Long for metrics
+   * Get all of the values combined for all the regions since the last time this function or
+   * {@link #getMetricsMapByRegion()} was called. Calling this function will reset all AtomicLongs
+   * in the instance back to 0.
+   * @return A Map of String -> Long for metrics
    */
   public Map<String, Long> getMetricsMap() {
     return getMetricsMap(true);
   }
 
   /**
-   * Get all of the values. If reset is true, we will reset the all AtomicLongs back to 0.
+   * Get all of the values combined for all the regions. If reset is true, we will reset the all
+   * AtomicLongs back to 0.
    * @param reset whether to reset the AtomicLongs to 0.
-   * @return A Map of String -&gt; Long for metrics
+   * @return A Map of String -> Long for metrics
    */
   public Map<String, Long> getMetricsMap(boolean reset) {
+    Map<String, Long> overallMetrics = new HashMap<>();
+    for (ScanMetricsHolder scanMetricsHolder : scanMetricsHolders) {
+      Map<String, Long> metricsSnapshot = scanMetricsHolder.getMetricsMap(reset);
+      metricsSnapshot.forEach((k, v) -> {
+        if (overallMetrics.containsKey(k)) {
+          overallMetrics.put(k, overallMetrics.get(k) + v);
+        } else {
+          overallMetrics.put(k, v);
+        }
+      });
+    }
+    return ImmutableMap.copyOf(overallMetrics);
+  }
+
+  /**
+   * Get values grouped by each region scanned since the last time this or {@link #getMetricsMap()}
+   * was called. Calling this function will reset all AtomicLongs in the instance back to 0.
+   * @return A Map of region -> (Map of metric name -> Long) for metrics
+   */
+  public Map<ScanMetricsRegionInfo, Map<String, Long>> getMetricsMapByRegion() {
+    return getMetricsMapByRegion(true);
+  }
+
+  /**
+   * Get values grouped by each region scanned. If reset is true, we will reset the all AtomicLongs
+   * back to 0.
+   * @param reset whether to reset the AtomicLongs to 0.
+   * @return A Map of region -> (Map of metric name -> Long) for metrics
+   */
+  public Map<ScanMetricsRegionInfo, Map<String, Long>> getMetricsMapByRegion(boolean reset) {
     // Create a builder
-    ImmutableMap.Builder<String, Long> builder = ImmutableMap.builder();
-    for (Map.Entry<String, AtomicLong> e : this.counters.entrySet()) {
-      long value = reset ? e.getValue().getAndSet(0) : e.getValue().get();
-      builder.put(e.getKey(), value);
+    ImmutableMap.Builder<ScanMetricsRegionInfo, Map<String, Long>> builder = ImmutableMap.builder();
+    for (ScanMetricsHolder scanMetricsHolder : scanMetricsHolders) {
+      if (
+        scanMetricsHolder.getScanMetricsRegionInfo()
+            == ScanMetricsRegionInfo.EMPTY_SCAN_METRICS_REGION_INFO
+      ) {
+        continue;
+      }
+      builder.put(scanMetricsHolder.getScanMetricsRegionInfo(),
+        scanMetricsHolder.getMetricsMap(reset));
     }
-    // Build the immutable map so that people can't mess around with it.
     return builder.build();
   }
+
+  @Override
+  public String toString() {
+    return scanMetricsHolders.stream().map(ScanMetricsHolder::toString)
+      .collect(Collectors.joining(";"));
+  }
+
+  /**
+   * Call this method after calling {@link #moveToNextRegion()} to populate server name and encoded
+   * region name details for the region being scanned and for which metrics are being collected at
+   * the moment.
+   */
+  public void initScanMetricsRegionInfo(ServerName serverName, String encodedRegionName) {
+    currentScanMetricsHolder.initScanMetricsRegionInfo(serverName, encodedRegionName);
+  }
+
+  protected boolean isFirstRegion() {
+    boolean tmpIsFirstRegion = isFirstRegion;
+    isFirstRegion = false;

Review Comment:
   Please don't modify state in a getter! At the very least, name it 
`getAndIncrement` or something like that. This is really confusing for callers 
and readers.
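   
   Something like this would at least make the side effect explicit (just a sketch; the name is only a suggestion):
```java
// Name the read-and-clear semantics instead of hiding them behind an is-getter.
protected boolean getAndClearFirstRegionFlag() {
  boolean wasFirstRegion = isFirstRegion;
  isFirstRegion = false;
  return wasFirstRegion;
}
```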



##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/Scan.java:
##########
@@ -1033,4 +1035,14 @@ public boolean isNeedCursorResult() {
   public static Scan createScanFromCursor(Cursor cursor) {
     return new Scan().withStartRow(cursor.getRow());
   }
+
+  public Scan setEnableScanMetricsByRegion(final boolean enable) {
+    setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_BY_REGION_ENABLE, Bytes.toBytes(enable));
+    return this;
+  }

Review Comment:
   Yeah, we use a builder-like pattern throughout the `Scan` setters, so it's best to maintain that interface.
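   
   i.e. returning `this` keeps fluent call sites like the following working (hypothetical usage):
```java
// Hypothetical call site; the chain only compiles if each setter returns `this`.
Scan scan = new Scan()
  .withStartRow(Bytes.toBytes("row-0000"))
  .setScanMetricsEnabled(true)
  .setEnableScanMetricsByRegion(true)
  .setCaching(100);
```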



##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ScanMetricsHolder.java:
##########
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.metrics;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicLong;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Generic holder class for capturing Scan Metrics as a map of metric name ({@link String}) -> Value
+ * ({@link AtomicLong}).
+ */
[email protected]
+public class ScanMetricsHolder {

Review Comment:
   What is your expected concurrency model for this class? You're using atomics for the individual fields, but it seems like the whole thing needs to be reset at once. Even your reset method doesn't reset things atomically (no locking), so concurrent callers can see partial results.
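   
   One possible direction, if the goal is an all-or-nothing reset (a sketch only, assuming an extra level of indirection is acceptable): swap the whole holder, so readers see either the old snapshot or the fresh one, never a mix.
```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

// Sketch: the owner publishes the holder through an AtomicReference and resets by
// swapping in a fresh holder rather than zeroing counters in place. Assumes this
// class lives in the same package, since ScanMetricsHolder's methods are package-private.
class HolderOwner {
  private final AtomicReference<ScanMetricsHolder> current =
    new AtomicReference<>(new ScanMetricsHolder());

  Map<String, Long> snapshotAndReset() {
    ScanMetricsHolder previous = current.getAndSet(new ScanMetricsHolder());
    return previous.getMetricsMap(false); // read-only view of the retired holder
  }
}
```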



##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ScanMetricsRegionInfo.java:
##########
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.metrics;
+
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * POJO for capturing region level details when region level scan metrics are enabled. <br>
+ * <br>
+ * Currently, encoded region name and server name (host name, ports and startcode) are captured as
+ * region details. <br>
+ * <br>
+ * Instance of this class serves as key in the Map returned by
+ * {@link ServerSideScanMetrics#getMetricsMapByRegion()} or
+ * {@link ServerSideScanMetrics#getMetricsMapByRegion(boolean)}.
+ */
[email protected]
+public class ScanMetricsRegionInfo {
+  /**
+   * Users should only compare against this constant by reference and should not make any
+   * assumptions regarding content of the constant.
+   */
+  public static final ScanMetricsRegionInfo EMPTY_SCAN_METRICS_REGION_INFO =
+    new ScanMetricsRegionInfo(null, null);
+
+  private final Pair<String, ServerName> regionAndServerName;
+
+  ScanMetricsRegionInfo(String encodedRegionName, ServerName serverName) {
+    this.regionAndServerName = new Pair<>(encodedRegionName, serverName);
+  }
+
+  @Override
+  public boolean equals(Object obj) {

Review Comment:
   Commons Lang has `EqualsBuilder` and `HashCodeBuilder` that you can use out of the box.
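   
   e.g. something like this from `commons-lang3` (sketch; shown with the two plain fields I suggest elsewhere on this class instead of the `Pair`):
```java
import org.apache.commons.lang3.builder.EqualsBuilder;
import org.apache.commons.lang3.builder.HashCodeBuilder;

@Override
public boolean equals(Object obj) {
  if (!(obj instanceof ScanMetricsRegionInfo)) {
    return false;
  }
  ScanMetricsRegionInfo other = (ScanMetricsRegionInfo) obj;
  return new EqualsBuilder().append(encodedRegionName, other.encodedRegionName)
    .append(serverName, other.serverName).isEquals();
}

@Override
public int hashCode() {
  return new HashCodeBuilder(17, 37).append(encodedRegionName).append(serverName).toHashCode();
}
```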



##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ScanMetricsRegionInfo.java:
##########
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.metrics;
+
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * POJO for capturing region level details when region level scan metrics are enabled. <br>
+ * <br>
+ * Currently, encoded region name and server name (host name, ports and startcode) are captured as
+ * region details. <br>
+ * <br>
+ * Instance of this class serves as key in the Map returned by
+ * {@link ServerSideScanMetrics#getMetricsMapByRegion()} or
+ * {@link ServerSideScanMetrics#getMetricsMapByRegion(boolean)}.
+ */
[email protected]
+public class ScanMetricsRegionInfo {
+  /**
+   * Users should only compare against this constant by reference and should not make any
+   * assumptions regarding content of the constant.
+   */
+  public static final ScanMetricsRegionInfo EMPTY_SCAN_METRICS_REGION_INFO =
+    new ScanMetricsRegionInfo(null, null);
+
+  private final Pair<String, ServerName> regionAndServerName;
+
+  ScanMetricsRegionInfo(String encodedRegionName, ServerName serverName) {
+    this.regionAndServerName = new Pair<>(encodedRegionName, serverName);
+  }
+
+  @Override
+  public boolean equals(Object obj) {
+    if (!(obj instanceof ScanMetricsRegionInfo)) {
+      return false;
+    }
+    return regionAndServerName.equals(((ScanMetricsRegionInfo) obj).regionAndServerName);
+  }
+
+  @Override
+  public int hashCode() {
+    return regionAndServerName.hashCode();
+  }
+
+  @Override
+  public String toString() {
+    return regionAndServerName.toString();

Review Comment:
   Pair's toString method isn't what you want either, right?
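   
   A sketch of what I'd expect instead (the format is just a suggestion; field access assumes the `Pair` is kept):
```java
@Override
public String toString() {
  // An explicit, labelled format is friendlier in logs than Pair's "{first,second}".
  return "ScanMetricsRegionInfo[encodedRegionName=" + regionAndServerName.getFirst()
    + ", serverName=" + regionAndServerName.getSecond() + "]";
}
```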



##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ScanMetrics.java:
##########
@@ -51,53 +51,118 @@ public class ScanMetrics extends ServerSideScanMetrics {
   /**
    * number of RPC calls
    */
-  public final AtomicLong countOfRPCcalls = createCounter(RPC_CALLS_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")
+  public AtomicLong countOfRPCcalls;
 
   /**
    * number of remote RPC calls
    */
-  public final AtomicLong countOfRemoteRPCcalls = createCounter(REMOTE_RPC_CALLS_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")
+  public AtomicLong countOfRemoteRPCcalls;
 
   /**
    * sum of milliseconds between sequential next calls
    */
-  public final AtomicLong sumOfMillisSecBetweenNexts =
-    createCounter(MILLIS_BETWEEN_NEXTS_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")
+  public AtomicLong sumOfMillisSecBetweenNexts;
 
   /**
    * number of NotServingRegionException caught
    */
-  public final AtomicLong countOfNSRE = createCounter(NOT_SERVING_REGION_EXCEPTION_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")
+  public AtomicLong countOfNSRE;
 
   /**
    * number of bytes in Result objects from region servers
    */
-  public final AtomicLong countOfBytesInResults = createCounter(BYTES_IN_RESULTS_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")
+  public AtomicLong countOfBytesInResults;
 
   /**
    * number of bytes in Result objects from remote region servers
    */
-  public final AtomicLong countOfBytesInRemoteResults =
-    createCounter(BYTES_IN_REMOTE_RESULTS_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")
+  public AtomicLong countOfBytesInRemoteResults;
 
   /**
    * number of regions
    */
-  public final AtomicLong countOfRegions = createCounter(REGIONS_SCANNED_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")
+  public AtomicLong countOfRegions;
 
   /**
    * number of RPC retries
    */
-  public final AtomicLong countOfRPCRetries = createCounter(RPC_RETRIES_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")
+  public AtomicLong countOfRPCRetries;
 
   /**
    * number of remote RPC retries
    */
-  public final AtomicLong countOfRemoteRPCRetries = createCounter(REMOTE_RPC_RETRIES_METRIC_NAME);
+  @SuppressWarnings("checkstyle:VisibilityModifier")
+  public AtomicLong countOfRemoteRPCRetries;
 
   /**
    * constructor
    */
   public ScanMetrics() {
+    createScanMetricsHolder();
+  }
+
+  @Override
+  public void moveToNextRegion() {
+    if (isFirstRegion()) {
+      return;
+    }
+    super.moveToNextRegion();
+    createScanMetricsHolder();
+  }
+
+  private void createScanMetricsHolder() {
+    countOfRPCcalls = createCounter(RPC_CALLS_METRIC_NAME);
+    countOfRemoteRPCcalls = createCounter(REMOTE_RPC_CALLS_METRIC_NAME);
+    sumOfMillisSecBetweenNexts = createCounter(MILLIS_BETWEEN_NEXTS_METRIC_NAME);
+    countOfNSRE = createCounter(NOT_SERVING_REGION_EXCEPTION_METRIC_NAME);
+    countOfBytesInResults = createCounter(BYTES_IN_RESULTS_METRIC_NAME);
+    countOfBytesInRemoteResults = createCounter(BYTES_IN_REMOTE_RESULTS_METRIC_NAME);
+    countOfRegions = createCounter(REGIONS_SCANNED_METRIC_NAME);
+    countOfRPCRetries = createCounter(RPC_RETRIES_METRIC_NAME);
+    countOfRemoteRPCRetries = createCounter(REMOTE_RPC_RETRIES_METRIC_NAME);
+  }
+
+  public AtomicLong getCountOfRPCcalls() {

Review Comment:
   I'd prefer that these accessors return simple `long` values instead of exposing the `AtomicLong` implementation detail.
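   
   i.e. something along these lines (sketch):
```java
/** Returns the number of RPC calls made so far, without exposing the AtomicLong. */
public long getCountOfRPCcalls() {
  return countOfRPCcalls.get();
}
```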



##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ScanMetricsRegionInfo.java:
##########
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.metrics;
+
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.hadoop.hbase.util.Pair;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * POJO for capturing region level details when region level scan metrics are enabled. <br>
+ * <br>
+ * Currently, encoded region name and server name (host name, ports and startcode) are captured as
+ * region details. <br>
+ * <br>
+ * Instance of this class serves as key in the Map returned by
+ * {@link ServerSideScanMetrics#getMetricsMapByRegion()} or
+ * {@link ServerSideScanMetrics#getMetricsMapByRegion(boolean)}.
+ */
[email protected]
+public class ScanMetricsRegionInfo {
+  /**
+   * Users should only compare against this constant by reference and should not make any
+   * assumptions regarding content of the constant.
+   */
+  public static final ScanMetricsRegionInfo EMPTY_SCAN_METRICS_REGION_INFO =
+    new ScanMetricsRegionInfo(null, null);
+
+  private final Pair<String, ServerName> regionAndServerName;

Review Comment:
   Don't wrap a pair; that's just an extra allocation. What's wrong with two fields?
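   
   i.e. (sketch; equals/hashCode/toString elided, see the other comments on this class; the getters are optional):
```java
// Two plain final fields instead of allocating a Pair per instance.
private final String encodedRegionName;
private final ServerName serverName;

ScanMetricsRegionInfo(String encodedRegionName, ServerName serverName) {
  this.encodedRegionName = encodedRegionName;
  this.serverName = serverName;
}

public String getEncodedRegionName() {
  return encodedRegionName;
}

public ServerName getServerName() {
  return serverName;
}
```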



##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AbstractClientScanner.java:
##########
@@ -34,6 +35,9 @@ protected void initScanMetrics(Scan scan) {
     // check if application wants to collect scan metrics
     if (scan.isScanMetricsEnabled()) {
       scanMetrics = new ScanMetrics();
+      if (scan.isScanMetricsByRegionEnabled()) {
+        isScanMetricsByRegionEnabled = true;
+      }
     }

Review Comment:
   I think it makes sense to have a flag per feature (you have to retain the old flag for backwards compatibility). I think it also makes sense that `isScanMetricsByRegionEnabled=true` implies `isScanMetricsEnabled=true` -- enabling the new feature implicitly enables the old one.
   
   How you capture and accurately propagate this logic through the `Scan` state management will be a (perhaps annoying) implementation detail.
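   
   Roughly this kind of wiring, for example (sketch; where exactly the implication lives, in the setter, the attribute decoding, or both, is the annoying detail):
```java
// Sketch: enabling the new feature implicitly enables plain scan metrics, so
// existing checks of the old flag keep working unchanged.
public Scan setEnableScanMetricsByRegion(final boolean enable) {
  if (enable) {
    setScanMetricsEnabled(true);
  }
  setAttribute(Scan.SCAN_ATTRIBUTES_METRICS_BY_REGION_ENABLE, Bytes.toBytes(enable));
  return this;
}
```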



##########
hbase-client/src/main/java/org/apache/hadoop/hbase/client/metrics/ScanMetricsHolder.java:
##########
@@ -0,0 +1,109 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hbase.client.metrics;
+
+import java.util.HashMap;
+import java.util.Map;
+import java.util.concurrent.atomic.AtomicLong;
+import org.apache.hadoop.hbase.ServerName;
+import org.apache.yetus.audience.InterfaceAudience;
+
+/**
+ * Generic holder class for capturing Scan Metrics as a map of metric name ({@link String}) -> Value
+ * ({@link AtomicLong}).
+ */
[email protected]
+public class ScanMetricsHolder {
+  private final Map<String, AtomicLong> counters = new HashMap<>();
+  private ScanMetricsRegionInfo scanMetricsRegionInfo =
+    ScanMetricsRegionInfo.EMPTY_SCAN_METRICS_REGION_INFO;
+
+  /**
+   * Create a new counter with the specified name
+   * @return {@link AtomicLong} instance for the counter with counterName
+   */
+  AtomicLong createCounter(String counterName) {
+    AtomicLong c = new AtomicLong(0);
+    counters.put(counterName, c);
+    return c;
+  }
+
+  /** Sets counter with counterName to passed in value, does nothing if counter does not exist. */
+  void setCounter(String counterName, long value) {
+    AtomicLong c = this.counters.get(counterName);
+    if (c != null) {
+      c.set(value);
+    }
+  }
+
+  /** Returns true if a counter exists with the counterName */
+  boolean hasCounter(String counterName) {
+    return this.counters.containsKey(counterName);
+  }
+
+  /** Returns {@link AtomicLong} instance for this counter name, null if counter does not exist. */
+  AtomicLong getCounter(String counterName) {
+    return this.counters.get(counterName);
+  }
+
+  /** Increments the counter with counterName by delta, does nothing if counter does not exist. */
+  void addToCounter(String counterName, long delta) {
+    AtomicLong c = this.counters.get(counterName);
+    if (c != null) {
+      c.addAndGet(delta);
+    }
+  }
+
+  /**
+   * Get all of the values. If reset is true, we will reset the all AtomicLongs back to 0.
+   * @param reset whether to reset the AtomicLongs to 0.
+   * @return A Map of String -> Long for metrics
+   */
+  Map<String, Long> getMetricsMap(boolean reset) {

Review Comment:
   Please don't have a getter that modifies state!
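   
   At minimum split the read from the reset, or give the mutating variant a name that says what it does (sketch; names are suggestions):
```java
/** Read-only snapshot of the current values. */
Map<String, Long> getMetricsMap() {
  Map<String, Long> snapshot = new HashMap<>();
  counters.forEach((name, counter) -> snapshot.put(name, counter.get()));
  return snapshot;
}

/** Snapshot the current values and reset every counter back to 0. */
Map<String, Long> collectMetricsAndReset() {
  Map<String, Long> snapshot = new HashMap<>();
  counters.forEach((name, counter) -> snapshot.put(name, counter.getAndSet(0)));
  return snapshot;
}
```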


