viirya commented on a change in pull request #31468:
URL: https://github.com/apache/spark/pull/31468#discussion_r578914581
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala
##########
@@ -52,16 +53,25 @@ case class CollectLimitExec(limit: Int, child: SparkPlan) extends LimitExec {
   private lazy val readMetrics =
     SQLShuffleReadMetricsReporter.createShuffleReadMetrics(sparkContext)
   override lazy val metrics = readMetrics ++ writeMetrics
   protected override def doExecute(): RDD[InternalRow] = {
-    val locallyLimited = child.execute().mapPartitionsInternal(_.take(limit))
-    val shuffled = new ShuffledRowRDD(
-      ShuffleExchangeExec.prepareShuffleDependency(
-        locallyLimited,
-        child.output,
-        SinglePartition,
-        serializer,
-        writeMetrics),
-      readMetrics)
-    shuffled.mapPartitionsInternal(_.take(limit))
+    val childRDD = child.execute()
+    if (childRDD.getNumPartitions == 0) {
+      new ParallelCollectionRDD(sparkContext, Seq.empty[InternalRow], 1, Map.empty)
Review comment:
Oh, I see. I'm not sure whether `CollectLimitExec` must produce a single partition,
but this looks minor. I'm okay with `ParallelCollectionRDD`. Thanks.
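
For readers following along, here is a minimal sketch of the zero-partition edge case this branch guards against. It uses the public RDD API, so `sc.parallelize` stands in for constructing `ParallelCollectionRDD` directly; the object name and values are illustrative, not from the PR:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical demo object; names are illustrative only.
object ZeroPartitionDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .master("local[2]")
      .appName("ZeroPartitionDemo")
      .getOrCreate()
    val sc = spark.sparkContext

    // An RDD with zero partitions, e.g. what an empty relation can produce.
    val zeroPartitions = sc.emptyRDD[Int]
    println(zeroPartitions.getNumPartitions) // 0

    // The new branch instead returns a one-partition empty RDD;
    // sc.parallelize constructs a ParallelCollectionRDD internally.
    val oneEmptyPartition = sc.parallelize(Seq.empty[Int], numSlices = 1)
    println(oneEmptyPartition.getNumPartitions) // 1
    println(oneEmptyPartition.take(10).length)  // 0, take() still works

    spark.stop()
  }
}
```

Since `sc.parallelize(Seq.empty, 1)` builds exactly this kind of one-partition `ParallelCollectionRDD` under the hood, the new branch keeps the operator's single-partition output shape even when the child RDD has no partitions at all.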