Github user xuanyuanking commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23207#discussion_r239067552
  
    --- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/execution/metric/SQLMetrics.scala 
---
    @@ -163,6 +171,8 @@ object SQLMetrics {
             Utils.bytesToString
           } else if (metricsType == TIMING_METRIC) {
             Utils.msDurationToString
    +      } else if (metricsType == NS_TIMING_METRIC) {
    +        duration => Utils.msDurationToString(duration / 1000 / 1000)
    --- End diff --
    
    Maybe it's ok; I tested this locally with the UT in SQLMetricsSuite, result 
below:
    ```
    shuffle records written: 2
    shuffle write time total (min, med, max): 37 ms (37 ms, 37 ms, 37 ms)
    shuffle bytes written total (min, med, max): 66.0 B (66.0 B, 66.0 B, 66.0 B)
    ```
    In a real scenario the shuffle bytes written will be much larger, so 
keeping the timing at ms granularity is probably enough, WDYT?
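    For illustration, the new `NS_TIMING_METRIC` branch in the diff above can be sketched in isolation. This is a minimal stand-alone sketch, not Spark's actual code: `msDurationToString` here is a hypothetical simplified stand-in for `Utils.msDurationToString`, which in Spark formats a millisecond duration into a human-readable string.

    ```scala
    object NsTimingFormat {
      // Hypothetical simplified formatter; Spark's real
      // Utils.msDurationToString handles seconds/minutes/hours as well.
      def msDurationToString(ms: Long): String = s"$ms ms"

      // NS_TIMING_METRIC values arrive in nanoseconds; integer-divide
      // down to milliseconds before formatting, as the patched branch does.
      val nsToString: Long => String = ns => msDurationToString(ns / 1000 / 1000)
    }

    object Demo extends App {
      // 37,000,000 ns is 37 ms, matching the UT output quoted above.
      println(NsTimingFormat.nsToString(37000000L))
    }
    ```

    Note the integer division truncates sub-millisecond remainders, which is the precision trade-off being discussed here.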


---
