Github user rxin commented on a diff in the pull request:

    https://github.com/apache/spark/pull/23207#discussion_r238836448
  
    --- Diff: core/src/main/scala/org/apache/spark/shuffle/metrics.scala ---
    @@ -50,3 +50,57 @@ private[spark] trait ShuffleWriteMetricsReporter {
       private[spark] def decBytesWritten(v: Long): Unit
       private[spark] def decRecordsWritten(v: Long): Unit
     }
    +
    +
    +/**
    + * A proxy class of ShuffleWriteMetricsReporter that forwards all metric
    + * updates to the given reporters.
    + */
    +private[spark] class GroupedShuffleWriteMetricsReporter(
    --- End diff --
    
    I'd not create a general API here. Just put one in SQL, similar to the read
side, that also calls the default one.
    
    It can be expensive to go through a Seq for every record and byte update.
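    A minimal sketch of the trade-off being discussed. The first class mirrors the general `GroupedShuffleWriteMetricsReporter` from the diff, which traverses a Seq on every update; the second shows the suggested shape: a SQL-specific reporter that updates its own counters and forwards to the single default reporter, with no collection traversal on the hot path. The trait is simplified and the name `SQLShuffleWriteMetricsReporter` is an illustrative assumption, not the PR's final code.

    ```scala
    // Simplified version of the reporter trait from metrics.scala.
    trait ShuffleWriteMetricsReporter {
      def incBytesWritten(v: Long): Unit
      def incRecordsWritten(v: Long): Unit
    }

    // General approach the reviewer advises against: every per-record and
    // per-byte update walks a Seq of reporters.
    class GroupedShuffleWriteMetricsReporter(
        reporters: Seq[ShuffleWriteMetricsReporter])
      extends ShuffleWriteMetricsReporter {
      override def incBytesWritten(v: Long): Unit =
        reporters.foreach(_.incBytesWritten(v))
      override def incRecordsWritten(v: Long): Unit =
        reporters.foreach(_.incRecordsWritten(v))
    }

    // Suggested shape (hypothetical name): a SQL-side reporter that tracks its
    // own metrics and also calls the default (task-level) reporter directly.
    class SQLShuffleWriteMetricsReporter(default: ShuffleWriteMetricsReporter)
      extends ShuffleWriteMetricsReporter {
      private var bytesWritten = 0L
      private var recordsWritten = 0L
      override def incBytesWritten(v: Long): Unit = {
        bytesWritten += v
        default.incBytesWritten(v)
      }
      override def incRecordsWritten(v: Long): Unit = {
        recordsWritten += v
        default.incRecordsWritten(v)
      }
      def totals: (Long, Long) = (bytesWritten, recordsWritten)
    }
    ```

    The second form trades generality for a direct call to one known delegate, avoiding per-record iterator allocation and indirection through a collection.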


---
