[ 
https://issues.apache.org/jira/browse/HDFS-16065?focusedWorklogId=610483&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-610483
 ]

ASF GitHub Bot logged work on HDFS-16065:
-----------------------------------------

                Author: ASF GitHub Bot
            Created on: 14/Jun/21 07:56
            Start Date: 14/Jun/21 07:56
    Worklog Time Spent: 10m 
      Work Description: goiri commented on a change in pull request #3100:
URL: https://github.com/apache/hadoop/pull/3100#discussion_r650444427



##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
##########
@@ -471,6 +471,9 @@ private Object invokeMethod(
         if (this.rpcMonitor != null) {
           this.rpcMonitor.proxyOpComplete(true);
         }
+        if (this.router.getRouterMetrics() != null) {
+          this.router.getRouterMetrics().incInvokedMethod(method);

Review comment:
       Can you point me to the equivalent for the Namenode?
   I thought that the RouterRpcServer was already tracking most of these 
metrics.
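
   For reference, the Namenode's "NameNodeActivity" counters are, as far as I
   recall, plain metrics2 counters on an annotated source class. A minimal
   sketch of that pattern (class, record, and field names here are illustrative,
   not the actual NameNodeMetrics code):

       import org.apache.hadoop.metrics2.annotation.Metric;
       import org.apache.hadoop.metrics2.annotation.Metrics;
       import org.apache.hadoop.metrics2.lib.DefaultMetricsSystem;
       import org.apache.hadoop.metrics2.lib.MutableCounterLong;

       // Illustrative sketch of a per-operation "*Activity" metrics source.
       @Metrics(name = "ExampleActivity", about = "Per-operation counters", context = "dfs")
       class ExampleActivityMetrics {

         @Metric("Number of getListing operations")
         MutableCounterLong getListingOps;

         static ExampleActivityMetrics create() {
           // Registering the source exposes the annotated fields over JMX.
           return DefaultMetricsSystem.instance().register(
               "ExampleActivity", "Per-operation counters", new ExampleActivityMetrics());
         }

         void incrGetListingOps() {
           getListingOps.incr();
         }
       }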

##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestRouterMetrics.java
##########
@@ -0,0 +1,121 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *     http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hadoop.hdfs.server.federation.metrics;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hdfs.DFSConfigKeys;
+import org.apache.hadoop.hdfs.HdfsConfiguration;
+import org.apache.hadoop.hdfs.server.federation.MiniRouterDFSCluster;
+import org.apache.hadoop.hdfs.server.federation.RouterConfigBuilder;
+import org.junit.AfterClass;
+import org.junit.Before;
+import org.junit.BeforeClass;
+import org.junit.Test;
+
+import java.io.IOException;
+
+import static org.apache.hadoop.test.MetricsAsserts.assertCounter;
+import static org.apache.hadoop.test.MetricsAsserts.getMetrics;
+
+/**
+ * Test cases for the per-operation metrics of the Router.
+ */
+public class TestRouterMetrics {
+  private static final Configuration CONF = new HdfsConfiguration();
+  private static final String ROUTER_METRICS = "RouterActivity";
+  static {
+    CONF.setLong(DFSConfigKeys.DFS_BLOCK_SIZE_KEY, 100);
+    CONF.setInt(DFSConfigKeys.DFS_BYTES_PER_CHECKSUM_KEY, 1);
+    CONF.setLong(DFSConfigKeys.DFS_HEARTBEAT_INTERVAL_KEY, 1L);
+    CONF.setInt(DFSConfigKeys.DFS_NAMENODE_REDUNDANCY_INTERVAL_SECONDS_KEY, 1);
+  }
+
+  private static final int NUM_SUBCLUSTERS = 2;
+  private static final int NUM_DNS = 3;
+
+
+  /** Federated HDFS cluster. */
+  private static MiniRouterDFSCluster cluster;
+
+  /** Random Router for this federated cluster. */
+  private MiniRouterDFSCluster.RouterContext router;
+
+  /** Filesystem interface to the Router. */
+  private FileSystem routerFS;
+  /** Filesystem interface to the Namenode. */
+  private FileSystem nnFS;
+
+  @BeforeClass
+  public static void globalSetUp() throws Exception {
+    cluster = new MiniRouterDFSCluster(false, NUM_SUBCLUSTERS);
+    cluster.setNumDatanodesPerNameservice(NUM_DNS);
+    cluster.startCluster();
+
+    Configuration routerConf = new RouterConfigBuilder()
+        .metrics()
+        .rpc()
+        .build();
+    cluster.addRouterOverrides(routerConf);
+    cluster.startRouters();
+
+    // Register and verify all NNs with all routers
+    cluster.registerNamenodes();
+    cluster.waitNamenodeRegistration();
+
+  }
+
+  @Before
+  public void testSetup() throws Exception {
+    // Create mock locations
+    cluster.installMockLocations();
+
+    // Delete all files via the NNs and verify
+    cluster.deleteAllFiles();
+
+    // Create test fixtures on NN
+    cluster.createTestDirectoriesNamenode();
+
+    // Wait to ensure NN has fully created its test directories
+    Thread.sleep(100);
+
+    router = cluster.getRouters().get(0);
+    this.routerFS = router.getFileSystem();
+
+  }
+
+  @AfterClass
+  public static void tearDown() throws Exception {
+    cluster.shutdown();
+  }
+
+  @Test
+  public void testGetListing() throws IOException {

Review comment:
       No point in being too exhaustive, but it wouldn't hurt to have a handful.

##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/test/java/org/apache/hadoop/hdfs/server/federation/metrics/TestRouterMetrics.java
##########
@@ -0,0 +1,121 @@
+  @Test
+  public void testGetListing() throws IOException {

Review comment:
       Add tests for a couple more methods.
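
   For example, one more test along these lines would do (the "MkdirsOps"
   counter name and the expected value are assumptions about this patch, not
   verified against it):

       @Test
       public void testMkdirs() throws IOException {
         // Issue one mkdirs through the Router and check the per-operation counter.
         routerFS.mkdirs(new Path("/testMkdirsMetric"));
         assertCounter("MkdirsOps", 1L, getMetrics(ROUTER_METRICS));
       }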

##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterMetrics.java
##########
@@ -70,4 +222,451 @@ public void shutdown() {
   public void setSafeModeTime(long elapsed) {
     safeModeTime.set((int) elapsed);
   }
+
+
+  public void incInvokedMethod(Method method) {

Review comment:
       Add javadoc.
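
   Something along these lines, for instance (wording is only a suggestion):

       /**
        * Increment the counter for an RPC method that the Router has
        * proxied to a Namenode.
        *
        * @param method Remote method that was invoked.
        */
       public void incInvokedMethod(Method method) {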

##########
File path: 
hadoop-hdfs-project/hadoop-hdfs-rbf/src/main/java/org/apache/hadoop/hdfs/server/federation/router/RouterRpcClient.java
##########
@@ -471,6 +471,9 @@ private Object invokeMethod(
         if (this.rpcMonitor != null) {
           this.rpcMonitor.proxyOpComplete(true);
         }
+        if (this.router.getRouterMetrics() != null) {
+          this.router.getRouterMetrics().incInvokedMethod(method);

Review comment:
       That makes sense.
   I have never been able to find the code that the RpcServer uses to track 
each method; does it align with this?
   We may also want to put these methods under a more intuitive namespace that 
refers to the client.
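
   For comparison, the usual metrics2 approach to per-method tracking is a
   lazily registered counter per method name, roughly like the sketch below
   (class and metric names are illustrative; this is not the actual
   RpcDetailedMetrics or RouterMetrics code):

       import java.lang.reflect.Method;
       import java.util.concurrent.ConcurrentHashMap;
       import java.util.concurrent.ConcurrentMap;

       import org.apache.hadoop.metrics2.lib.MetricsRegistry;
       import org.apache.hadoop.metrics2.lib.MutableCounterLong;

       // Illustrative sketch: one counter per proxied method, created on demand.
       class PerMethodCounters {
         private final MetricsRegistry registry =
             new MetricsRegistry("ExampleClientActivity");
         private final ConcurrentMap<String, MutableCounterLong> counters =
             new ConcurrentHashMap<>();

         void incInvokedMethod(Method method) {
           // e.g. "getListing" becomes a counter named "getListingOps".
           counters.computeIfAbsent(method.getName(), name ->
               registry.newCounter(name + "Ops", "Calls to " + name, 0L))
               .incr();
         }
       }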




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 610483)
    Time Spent: 40m  (was: 0.5h)

> RBF: Add metrics to record Router's operations
> ----------------------------------------------
>
>                 Key: HDFS-16065
>                 URL: https://issues.apache.org/jira/browse/HDFS-16065
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: rbf
>            Reporter: Janus Chow
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 40m
>  Remaining Estimate: 0h
>
> Currently, the Router's operations are not well recorded. It would be good to 
> have metrics similar to "Hadoop:service=NameNode,name=NameNodeActivity" on the 
> NameNode, which show the count of each operation.
> Besides, some operations are invoked concurrently in Routers; knowing the counts 
> for concurrent operations would help us better understand the cluster's state.
> This ticket is to add normal operation metrics and concurrent operation 
> metrics to the Router.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-h...@hadoop.apache.org
