ulysses-you commented on a change in pull request #32872:
URL: https://github.com/apache/spark/pull/32872#discussion_r669332135



##########
File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/CustomShuffleReaderExec.scala
##########
@@ -87,8 +87,15 @@ case class CustomShuffleReaderExec private(
     Iterator(desc)
   }
 
-  def hasCoalescedPartition: Boolean =
-    partitionSpecs.exists(_.isInstanceOf[CoalescedPartitionSpec])
+  /**
+   * Returns true iff some non-empty partitions were combined
+   */
+  def hasCoalescedPartition: Boolean = {
+    partitionSpecs.exists {
+      case s: CoalescedPartitionSpec => s.endReducerIndex - s.startReducerIndex > 1

Review comment:
       Although `CoalescedPartitionSpec(0, 0, 0)` looks a little hacky, it is safe and effective.
   
   We don't know `numReducers` if the shuffle comes from an `EmptyRDD`, so we can only assume it is the same as `spark.sql.shuffle.partitions`. That assumption is not always correct, e.g. when several `EmptyRDD`s have different pre-shuffle partition numbers.
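
   For illustration, here is a minimal, self-contained sketch of the check being discussed. The case classes below are simplified, hypothetical stand-ins for Spark's `ShufflePartitionSpec` hierarchy, not the real classes; the point is only that a spec counts as coalesced when it spans more than one reducer, so a placeholder like `CoalescedPartitionSpec(0, 0, 0)` built for an empty shuffle is not counted.

```scala
// A minimal sketch: these simplified case classes stand in for Spark's
// ShufflePartitionSpec hierarchy; they are NOT the real Spark classes.
sealed trait ShufflePartitionSpecSketch

// Simplified stand-in for CoalescedPartitionSpec (the real class carries more).
case class CoalescedPartitionSpecSketch(
    startReducerIndex: Int,
    endReducerIndex: Int,
    dataSize: Long) extends ShufflePartitionSpecSketch

// Simplified stand-in for any non-coalesced spec type.
case class OtherPartitionSpecSketch(reducerIndex: Int) extends ShufflePartitionSpecSketch

object HasCoalescedPartitionSketch {

  // True iff some spec actually combines more than one reducer partition.
  // A placeholder like CoalescedPartitionSpecSketch(0, 0, 0) covers an empty
  // reducer range, so it is not counted as a coalesced partition.
  def hasCoalescedPartition(specs: Seq[ShufflePartitionSpecSketch]): Boolean =
    specs.exists {
      case s: CoalescedPartitionSpecSketch =>
        s.endReducerIndex - s.startReducerIndex > 1
      case _ => false
    }

  def main(args: Array[String]): Unit = {
    // Reducers [0, 3) were combined into one partition: genuinely coalesced.
    println(hasCoalescedPartition(Seq(CoalescedPartitionSpecSketch(0, 3, 1024L)))) // true
    // Placeholder for an empty shuffle: nothing was actually combined.
    println(hasCoalescedPartition(Seq(CoalescedPartitionSpecSketch(0, 0, 0L))))    // false
    // A spec that maps to a single reducer is not coalesced either.
    println(hasCoalescedPartition(Seq(CoalescedPartitionSpecSketch(2, 3, 512L))))  // false
  }
}
```

   This matches the updated doc comment above: the predicate asks whether non-empty partitions were actually combined, not merely whether a `CoalescedPartitionSpec` is present.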




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


