[ 
https://issues.apache.org/jira/browse/HADOOP-16290?focusedWorklogId=624875&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-624875
 ]

ASF GitHub Bot logged work on HADOOP-16290:
-------------------------------------------

                Author: ASF GitHub Bot
            Created on: 20/Jul/21 09:35
            Start Date: 20/Jul/21 09:35
    Worklog Time Spent: 10m 
      Work Description: aajisaka commented on a change in pull request #3198:
URL: https://github.com/apache/hadoop/pull/3198#discussion_r672242315



##########
File path: hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ipc/TestRPC.java
##########
@@ -1603,6 +1602,70 @@ public void testSetProtocolEngine() {
     assertTrue(rpcEngine instanceof StoppedRpcEngine);
   }
 
+  @Test
+  public void testRpcMetricsInNanos() throws Exception {
+    final Server server;
+    TestRpcService proxy = null;
+
+    final int interval = 1;
+    conf.setBoolean(CommonConfigurationKeys.
+        RPC_METRICS_QUANTILE_ENABLE, true);
+    conf.set(CommonConfigurationKeys.
+        RPC_METRICS_PERCENTILES_INTERVALS_KEY, "" + interval);
+    conf.set(CommonConfigurationKeys.RPC_METRICS_TIME_UNIT, "NANOSECONDS");
+
+    server = setupTestServer(conf, 5);
+    String testUser = "testUserInNanos";
+    UserGroupInformation anotherUser =
+        UserGroupInformation.createRemoteUser(testUser);
+    TestRpcService proxy2 =
+        anotherUser.doAs((PrivilegedAction<TestRpcService>) () -> {
+          try {
+            return RPC.getProxy(TestRpcService.class, 0,
+                server.getListenerAddress(), conf);
+          } catch (IOException e) {
+            LOG.error("Something went wrong.", e);
+          }
+          return null;
+        });
+    try {
+      proxy = getClient(addr, conf);
+      for (int i = 0; i < 1000; i++) {

Review comment:
       [minor] Creating 3 * 1000 requests seems costly. Maybe 300 or 30?

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
##########
@@ -266,6 +268,24 @@ public DecayRpcScheduler(int numLevels, String ns, Configuration conf) {
         DecayRpcSchedulerDetailedMetrics.create(ns);
     decayRpcSchedulerDetailedMetrics.init(numLevels);
 
+    String timeunit = conf.get(CommonConfigurationKeys.RPC_METRICS_TIME_UNIT);
+    TimeUnit tmpTimeUnit;
+    if (StringUtils.isNotEmpty(timeunit)) {
+      try {
+        tmpTimeUnit = TimeUnit.valueOf(timeunit);
+      } catch (IllegalArgumentException e) {
+        LOG.info("Config key {} 's value {} does not correspond to enum values"
+                + " of java.util.concurrent.TimeUnit. Hence default unit"
+                + " {} will be used",
+            CommonConfigurationKeys.RPC_METRICS_TIME_UNIT, timeunit,
+            RpcMetrics.DEFAULT_METRIC_TIME_UNIT);
+        tmpTimeUnit = RpcMetrics.DEFAULT_METRIC_TIME_UNIT;
+      }
+    } else {
+      tmpTimeUnit = RpcMetrics.DEFAULT_METRIC_TIME_UNIT;
+    }

Review comment:
       These lines can be simplified as follows:
   ```suggestion
       TimeUnit tmpTimeUnit = RpcMetrics.DEFAULT_METRIC_TIME_UNIT;
       if (StringUtils.isNotEmpty(timeunit)) {
         try {
           tmpTimeUnit = TimeUnit.valueOf(timeunit);
         } catch (IllegalArgumentException e) {
        LOG.info("Config key {} 's value {} does not correspond to enum values"
                   + " of java.util.concurrent.TimeUnit. Hence default unit"
                   + " {} will be used",
               CommonConfigurationKeys.RPC_METRICS_TIME_UNIT, timeunit,
               RpcMetrics.DEFAULT_METRIC_TIME_UNIT);
         }
       }
   ```
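
For context on why the fallback above is needed: `TimeUnit.valueOf` throws `IllegalArgumentException` for any string that is not an exact (case-sensitive) enum constant name. A minimal standalone sketch of the parse-with-default pattern, independent of Hadoop's `Configuration` class (the class and method names here are illustrative only):

```java
import java.util.concurrent.TimeUnit;

public class TimeUnitParse {
  /** Parse a time-unit name, falling back to a default on bad input. */
  static TimeUnit parseOrDefault(String name, TimeUnit dflt) {
    if (name == null || name.isEmpty()) {
      return dflt;
    }
    try {
      return TimeUnit.valueOf(name);
    } catch (IllegalArgumentException e) {
      // e.g. "millis" or "Milliseconds" are rejected; only exact enum
      // constant names such as "MILLISECONDS" are accepted.
      return dflt;
    }
  }

  public static void main(String[] args) {
    System.out.println(parseOrDefault("NANOSECONDS", TimeUnit.MILLISECONDS));
    System.out.println(parseOrDefault("millis", TimeUnit.MILLISECONDS));
    System.out.println(parseOrDefault(null, TimeUnit.MILLISECONDS));
  }
}
```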

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/DecayRpcScheduler.java
##########
@@ -266,6 +268,24 @@ public DecayRpcScheduler(int numLevels, String ns, Configuration conf) {
         DecayRpcSchedulerDetailedMetrics.create(ns);
     decayRpcSchedulerDetailedMetrics.init(numLevels);
 
+    String timeunit = conf.get(CommonConfigurationKeys.RPC_METRICS_TIME_UNIT);
+    TimeUnit tmpTimeUnit;
+    if (StringUtils.isNotEmpty(timeunit)) {
+      try {
+        tmpTimeUnit = TimeUnit.valueOf(timeunit);
+      } catch (IllegalArgumentException e) {
+        LOG.info("Config key {} 's value {} does not correspond to enum values"
+                + " of java.util.concurrent.TimeUnit. Hence default unit"
+                + " {} will be used",
+            CommonConfigurationKeys.RPC_METRICS_TIME_UNIT, timeunit,
+            RpcMetrics.DEFAULT_METRIC_TIME_UNIT);
+        tmpTimeUnit = RpcMetrics.DEFAULT_METRIC_TIME_UNIT;
+      }
+    } else {
+      tmpTimeUnit = RpcMetrics.DEFAULT_METRIC_TIME_UNIT;
+    }

Review comment:
       Maybe this functionality can be moved to a static method in RpcMetrics
and used from both DecayRpcScheduler and RpcMetrics.
   ```
     metricsTimeUnit = RpcMetrics.getMetricsTimeUnitFromConf(conf);
   ```
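
A sketch of the suggested shared helper. The method name `getMetricsTimeUnitFromConf` comes from the comment above; everything else is assumption: Hadoop's `Configuration` is stood in for by `java.util.Properties` so the sketch is self-contained, and the key string and default unit are hard-coded stand-ins for `CommonConfigurationKeys.RPC_METRICS_TIME_UNIT` and `RpcMetrics.DEFAULT_METRIC_TIME_UNIT`:

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;

public class MetricsTimeUnitHelper {
  // Hypothetical stand-ins for the real Hadoop constants.
  static final String RPC_METRICS_TIME_UNIT = "rpc.metrics.timeunit";
  static final TimeUnit DEFAULT_METRIC_TIME_UNIT = TimeUnit.MILLISECONDS;

  /** Shared parse logic usable from both RpcMetrics and DecayRpcScheduler. */
  public static TimeUnit getMetricsTimeUnitFromConf(Properties conf) {
    String timeunit = conf.getProperty(RPC_METRICS_TIME_UNIT);
    if (timeunit == null || timeunit.isEmpty()) {
      return DEFAULT_METRIC_TIME_UNIT;
    }
    try {
      return TimeUnit.valueOf(timeunit);
    } catch (IllegalArgumentException e) {
      // Invalid value: fall back to the default rather than fail startup.
      return DEFAULT_METRIC_TIME_UNIT;
    }
  }

  public static void main(String[] args) {
    Properties conf = new Properties();
    conf.setProperty(RPC_METRICS_TIME_UNIT, "NANOSECONDS");
    System.out.println(getMetricsTimeUnitFromConf(conf));
  }
}
```

With a helper like this, both call sites collapse to a single line, which is the deduplication the reviewer is asking for.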

##########
File path: hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/metrics/RpcMetrics.java
##########
@@ -49,7 +51,7 @@
   final String name;
   final boolean rpcQuantileEnable;
   /** The time unit used when storing/accessing time durations. */
-  public final static TimeUnit TIMEUNIT = TimeUnit.MILLISECONDS;
+  private static TimeUnit metricsTimeUnit = TimeUnit.MILLISECONDS;

Review comment:
       Thanks!




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


Issue Time Tracking
-------------------

    Worklog Id:     (was: 624875)
    Time Spent: 6h 40m  (was: 6.5h)

> Enable RpcMetrics units to be configurable
> ------------------------------------------
>
>                 Key: HADOOP-16290
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16290
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: ipc, metrics
>            Reporter: Erik Krogen
>            Assignee: Viraj Jasani
>            Priority: Major
>              Labels: pull-request-available
>             Fix For: 3.4.0, 3.3.2
>
>          Time Spent: 6h 40m
>  Remaining Estimate: 0h
>
> One resulting discussion from HADOOP-16266 was that it would be better for 
> the RPC metrics (processing time, queue time) to be in micro- or nanoseconds, 
> since milliseconds does not accurately capture the processing time of many 
> RPC operations.  HADOOP-16266 made some small changes in this direction, but 
> to keep the size of the patch down, we did not make it fully configurable. We 
> can complete that work here.
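
To illustrate the motivation in the description: a sub-millisecond RPC truncates to zero when its duration is recorded in milliseconds, while micro- or nanosecond units keep the signal. A small standalone demonstration using `TimeUnit.convert` (the class and method names are illustrative, not from the patch):

```java
import java.util.concurrent.TimeUnit;

public class RpcDurationPrecision {
  /** Convert a nanosecond duration into the given recording unit. */
  static long record(long elapsedNanos, TimeUnit unit) {
    return unit.convert(elapsedNanos, TimeUnit.NANOSECONDS);
  }

  public static void main(String[] args) {
    long elapsedNanos = 250_000; // a fast RPC taking 0.25 ms
    // Recorded in milliseconds, the duration truncates to 0...
    System.out.println(record(elapsedNanos, TimeUnit.MILLISECONDS)); // prints 0
    // ...while microseconds preserve it.
    System.out.println(record(elapsedNanos, TimeUnit.MICROSECONDS)); // prints 250
  }
}
```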



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
