JkSelf commented on a change in pull request #28109: 
[SPARK-31253][SQL][followup] Add metric to the split task number for skew optimization
URL: https://github.com/apache/spark/pull/28109#discussion_r410086084
 
 

 ##########
 File path: sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/CustomShuffleReaderExec.scala
 ##########
 @@ -107,14 +106,27 @@ case class CustomShuffleReaderExec private(
       (numPartitionsMetric.id, partitionSpecs.length.toLong)
 
     if (hasSkewedPartition) {
-      val skewedMetric = metrics("numSkewedPartitions")
-      val numSkewedPartitions = partitionSpecs.collect {
-        case p: PartialReducerPartitionSpec => p.reducerIndex
-      }.distinct.length
-      skewedMetric.set(numSkewedPartitions)
-      driverAccumUpdates = driverAccumUpdates :+ (skewedMetric.id, numSkewedPartitions.toLong)
-    }
+      val skewedPartitions = metrics("numSkewedPartitions")
+      val skewedSplits = metrics("numSkewedSplits")
+
+      val skewedMetrics = new mutable.HashMap[Int, Long]()
 
 Review comment:
   Here we need to calculate the number of splits for every skewed partition, not only the total number, so we may need this `HashMap`.
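   As a rough sketch of what I have in mind (assuming `partitionSpecs` and `PartialReducerPartitionSpec#reducerIndex` as shown in the diff above; the metric names follow the ones introduced there, and this is not the exact implementation):

   ```scala
   // Hypothetical sketch: derive per-partition split counts from partitionSpecs.
   // Each PartialReducerPartitionSpec represents one split task of a skewed
   // reducer, so grouping on reducerIndex yields splits per skewed partition.
   val splitsPerSkewedPartition: Map[Int, Long] = partitionSpecs.collect {
     case p: PartialReducerPartitionSpec => p.reducerIndex
   }.groupBy(identity).mapValues(_.size.toLong).toMap

   // Distinct skewed reducers and the total number of split tasks.
   val numSkewedPartitions = splitsPerSkewedPartition.size
   val numSkewedSplits = splitsPerSkewedPartition.values.sum
   ```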

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services
