jchen5 commented on code in PR #35975:
URL: https://github.com/apache/spark/pull/35975#discussion_r849993911


##########
sql/core/src/main/scala/org/apache/spark/sql/execution/limit.scala:
##########
@@ -182,6 +190,48 @@ case class GlobalLimitExec(limit: Int, child: SparkPlan) 
extends BaseLimitExec {
     copy(child = newChild)
 }
 
+/**
+ * Skip the first `offset` elements then take the first `limit` of the following elements in
+ * the child's single output partition.
Review Comment:
   It looks like, if the child has multiple partitions, zipWithIndex will assign indices starting with all the rows in the first partition, then the next partition, and so on. This is fine assuming the child data isn't sorted. Could you add a comment about this?
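   To illustrate the point: zipWithIndex numbers rows partition by partition, so the resulting global order follows partition order rather than any sort order within the data. Below is a minimal, Spark-free Python sketch of that indexing scheme (this is not Spark's actual implementation; the function names and the offset/limit helper are illustrative only):

```python
def zip_with_index(partitions):
    """Index rows partition by partition, mimicking how RDD.zipWithIndex
    numbers elements: partition 0 gets the first indices, then partition 1,
    and so on."""
    out, next_index = [], 0
    for part in partitions:
        out.append([(row, next_index + i) for i, row in enumerate(part)])
        next_index += len(part)
    return out


def offset_then_limit(partitions, offset, limit):
    """Hypothetical helper: skip the first `offset` indexed rows, then take
    up to `limit` rows, using the partition-order indices above."""
    flat = [pair for part in zip_with_index(partitions) for pair in part]
    return [row for row, idx in flat if offset <= idx < offset + limit]


parts = [["a", "b"], ["c"], ["d", "e"]]
print(zip_with_index(parts))
# [[('a', 0), ('b', 1)], [('c', 2)], [('d', 3), ('e', 4)]]
print(offset_then_limit(parts, 1, 3))
# ['b', 'c', 'd']
```

   Because the indices simply follow partition order, the rows kept by offset/limit are only meaningful when the child either has a single partition or its row order is not expected to be sorted, which is the caveat the comment should document.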



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

