dongjoon-hyun commented on a change in pull request #33116:
URL: https://github.com/apache/spark/pull/33116#discussion_r665727785



##########
File path: 
common/network-shuffle/src/main/java/org/apache/spark/network/shuffle/ExternalBlockHandler.java
##########
@@ -323,10 +323,13 @@ private void checkAuth(TransportClient client, String appId) {
 
     public ShuffleMetrics() {
       allMetrics = new HashMap<>();
-      allMetrics.put("openBlockRequestLatencyMillis", openBlockRequestLatencyMillis);
-      allMetrics.put("registerExecutorRequestLatencyMillis", registerExecutorRequestLatencyMillis);
-      allMetrics.put("fetchMergedBlocksMetaLatencyMillis", fetchMergedBlocksMetaLatencyMillis);
-      allMetrics.put("finalizeShuffleMergeLatencyMillis", finalizeShuffleMergeLatencyMillis);
+      // Note that for the latency metrics, the default unit is actually nanos, not millis.
+      // The variables have been renamed, but to preserve backwards compatibility, the metric
+      // names remain unchanged. See SPARK-35259 for more details.
+      allMetrics.put("openBlockRequestLatencyMillis", openBlockRequestLatency);

Review comment:
       @Ngone51 . Don't get me wrong. What I suggested was a simple conversion that keeps the existing metric names while fixing the wrong metric values. Do we need more from the Spark side?
   ```
   // Nanoseconds to milliseconds: divide by 1,000,000 (not 1,000).
   allMetrics.put("openBlockRequestLatencyMillis", openBlockRequestLatency / 1_000_000);
   ```




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


