HyukjinKwon commented on a change in pull request #33624:
URL: https://github.com/apache/spark/pull/33624#discussion_r683051488



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala
##########
@@ -348,34 +344,24 @@ case class AdaptiveSparkPlanExec(
     withFinalPlanUpdate(_.execute())
   }
 
-  /**
-   * Determine if the final query stage supports columnar execution. Calling this method
-   * will trigger query execution of child query stages if they have not already executed.
-   *
-   * If this method returns true then it is safe to call doExecuteColumnar to execute the
-   * final stage.
-   */
-  def finalPlanSupportsColumnar(): Boolean = {
-    getFinalPhysicalPlan().supportsColumnar
-  }
-
   override def doExecuteColumnar(): RDD[ColumnarBatch] = {
     withFinalPlanUpdate(_.executeColumnar())
   }
 
+  override def doExecuteBroadcast[T](): broadcast.Broadcast[T] = {

Review comment:
       can we move this method back?

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala
##########
@@ -65,7 +65,8 @@ case class AdaptiveSparkPlanExec(
     inputPlan: SparkPlan,
     @transient context: AdaptiveExecutionContext,
     @transient preprocessingRules: Seq[Rule[SparkPlan]],
-    @transient isSubquery: Boolean)
+    @transient isSubquery: Boolean,
+    @transient override val supportsColumnar: Boolean = false)

Review comment:
       can we keep the docs of https://github.com/apache/spark/pull/33624/files#diff-ec42cd27662f3f528832c298a60fffa1d341feb04aa1d8c80044b70cbe0ebbfcL199-L203?

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/adaptive/AdaptiveSparkPlanExec.scala
##########
@@ -536,7 +523,7 @@ case class AdaptiveSparkPlanExec(
       case s: ShuffleExchangeLike =>
         val newShuffle = applyPhysicalRules(
           s.withNewChildren(Seq(optimizedPlan)),
-          postStageCreationRules,
+          postStageCreationRules(outputColumnar = false),

Review comment:
       If that works without the post-stage creation extension, I would prefer to revert it since Spark 3.2 is not out yet.

##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/Columnar.scala
##########
@@ -519,9 +497,13 @@ case class RowToColumnarExec(child: SparkPlan) extends RowToColumnarTransition {
 /**
  * Apply any user defined [[ColumnarRule]]s and find the correct place to insert transitions
  * to/from columnar formatted data.
+ *
+ * @param columnarRules custom columnar rules
 + * @param outputColumnar whether or not the produced plan should output columnar format.

Review comment:
       maybe `outputsColumnar` to match with `supportsColumnar` or `isOutputColumnar`?
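
For illustration only, a minimal, hypothetical sketch of the suggested spelling: a rule-like case class whose flag is named `outputsColumnar` to mirror `SparkPlan.supportsColumnar`. The class name and the no-op body are placeholders rather than the PR's actual implementation, and it assumes the spark-sql classes imported below are on the classpath.

```scala
import org.apache.spark.sql.catalyst.rules.Rule
import org.apache.spark.sql.execution.{ColumnarRule, SparkPlan}

// Hypothetical sketch of the naming suggestion: a boolean flag spelled
// `outputsColumnar`, mirroring `SparkPlan.supportsColumnar`.
// The rule body is a placeholder and does not insert any transitions.
case class ColumnarTransitionRuleSketch(
    columnarRules: Seq[ColumnarRule],
    outputsColumnar: Boolean)
  extends Rule[SparkPlan] {

  // Placeholder: the real rule would insert row/columnar transitions here.
  override def apply(plan: SparkPlan): SparkPlan = plan
}
```

Spelling the flag as a predicate (`outputsColumnar`) keeps it consistent with the existing `supportsColumnar` accessor on `SparkPlan`.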



