ulysses-you commented on a change in pull request #33541:
URL: https://github.com/apache/spark/pull/33541#discussion_r679656337
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AQEShuffleReadExec.scala
##########
@@ -112,6 +128,16 @@ case class AQEShuffleReadExec private(
partitionSpecs.exists(_.isInstanceOf[PartialMapperPartitionSpec]) ||
partitionSpecs.exists(_.isInstanceOf[CoalescedMapperPartitionSpec])
+ def isCoalescedRead: Boolean = {
+ partitionSpecs.sliding(2).forall {
+ // A single partition spec which is `CoalescedPartitionSpec` also means coalesced read.
+ case Seq(_: CoalescedPartitionSpec) => true
+ case Seq(l: CoalescedPartitionSpec, r: CoalescedPartitionSpec) =>
+ l.endReducerIndex <= r.startReducerIndex
Review comment:
A corner case: if the last partition's size is 0, we may discard that
partition directly instead of combining it into the previous partition. So this
check is incorrect when the last partition size is 0 and the other partitions
are large enough. There is an existing test for this in `ShufflePartitionsUtilSuite`:
```scala
{
  // There are a few large shuffle partitions.
  val bytesByPartitionId = Array[Long](110, 10, 100, 110, 0)
  val expectedPartitionSpecs = Seq(
    CoalescedPartitionSpec(0, 1, 110),
    CoalescedPartitionSpec(1, 2, 10),
    CoalescedPartitionSpec(2, 3, 100),
    CoalescedPartitionSpec(3, 4, 110))
  checkEstimation(Array(bytesByPartitionId), expectedPartitionSpecs :: Nil, targetSize)
}
```
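To make the corner case concrete, here is a minimal standalone sketch of the sliding-window check from the diff, using a simplified two-field `Spec` stand-in (hypothetical, not Spark's real `CoalescedPartitionSpec`). With the specs from the test above, the empty last reducer (index 4) is dropped rather than merged, so no adjacent pair can detect it and the pairwise check still passes:

```scala
// Hypothetical simplified stand-in for CoalescedPartitionSpec (illustration only).
case class Spec(startReducerIndex: Int, endReducerIndex: Int)

// The sliding-window check from the diff above, applied to the simplified specs.
def isCoalescedRead(specs: Seq[Spec]): Boolean =
  specs.sliding(2).forall {
    // A single partition spec also counts as a coalesced read.
    case Seq(_)       => true
    case Seq(l, r)    => l.endReducerIndex <= r.startReducerIndex
  }

// Specs produced for bytesByPartitionId = [110, 10, 100, 110, 0]:
// the empty reducer at index 4 is discarded, not combined into Spec(3, 4).
val specs = Seq(Spec(0, 1), Spec(1, 2), Spec(2, 3), Spec(3, 4))

// Every adjacent pair is contiguous, so the check returns true even though
// reducer 4 is not covered by any spec -- the corner case flagged above.
println(isCoalescedRead(specs))
```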
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]