HyukjinKwon commented on code in PR #38130:
URL: https://github.com/apache/spark/pull/38130#discussion_r995630979
##########
sql/core/src/main/scala/org/apache/spark/sql/execution/python/AttachDistributedSequenceExec.scala:
##########
@@ -40,15 +42,45 @@ case class AttachDistributedSequenceExec(
override def outputPartitioning: Partitioning = child.outputPartitioning
+ @transient private var cached: RDD[InternalRow] = _
+
override protected def doExecute(): RDD[InternalRow] = {
- val childRDD = child.execute().map(_.copy())
- val checkpointed = if (childRDD.getNumPartitions > 1) {
- // to avoid execute multiple jobs. zipWithIndex launches a Spark job.
- childRDD.localCheckpoint()
+ val childRDD = child.execute()
Review Comment:
Hm, maybe let's just keep the legacy behaviour as-is for now if AQE is
disabled. Keeping the behaviour as-is is fine, but changing it to a new
behaviour is a different story.
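
For context on the `localCheckpoint()` being discussed: `RDD.zipWithIndex` has to learn each partition's row count before it can assign global indices, so it triggers an extra Spark job, and without a checkpoint the child RDD would be recomputed for that job. A minimal pure-Python illustration of the two-pass scheme (no Spark; partitions are modeled as plain lists, and `zip_with_index` is a hypothetical stand-in, not Spark's implementation):

```python
def zip_with_index(partitions):
    # Pass 1: count the elements in each partition.
    # In Spark this count is the extra job zipWithIndex launches.
    counts = [len(p) for p in partitions]
    # Prefix sums give each partition's starting global index.
    offsets = [0]
    for c in counts[:-1]:
        offsets.append(offsets[-1] + c)
    # Pass 2: pair every element with its global index, partition-locally.
    return [[(x, off + i) for i, x in enumerate(p)]
            for p, off in zip(partitions, offsets)]

parts = [["a", "b"], ["c"], ["d", "e", "f"]]
print(zip_with_index(parts))
# [[('a', 0), ('b', 1)], [('c', 2)], [('d', 3), ('e', 4), ('f', 5)]]
```

This is also why the single-partition case in the diff skips the checkpoint: with one partition the offsets are trivially known and no extra job is needed.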
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]