dbaliafroozeh commented on a change in pull request #28885:
URL: https://github.com/apache/spark/pull/28885#discussion_r446673965



##########
File path: 
sql/core/src/main/scala/org/apache/spark/sql/execution/QueryExecution.scala
##########
@@ -326,7 +327,8 @@ object QueryExecution {
    */
   private[execution] def preparations(
       sparkSession: SparkSession,
-      adaptiveExecutionRule: Option[InsertAdaptiveSparkPlan] = None): 
Seq[Rule[SparkPlan]] = {
+      adaptiveExecutionRule: Option[InsertAdaptiveSparkPlan] = None,
+      subquery: Boolean): Seq[Rule[SparkPlan]] = {

Review comment:
       Yes, from a performance perspective it makes sense to exclude them. I'm not fond of adding another parameter and selecting rules based on it, so I was thinking that if the performance difference isn't significant we could skip it; but running those rules on subqueries can be expensive, with canonicalization, etc. Since we apparently have no other way of detecting, locally inside the new rule, whether a physical plan is a subquery, it's fine to do it like this. In the future we may want a more explicit name for `QueryExecution.prepareExecutedPlan`, such as `PrepareSubqueryForExecution`, to make it clear that this method is only called for subqueries.
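To illustrate the pattern being discussed (not Spark's actual implementation): a boolean flag on the rule-building method can gate which preparation rules are included for subquery plans. The `Rule` trait and the rule names below are simplified stand-ins for Spark's `Rule[SparkPlan]` instances, chosen only to show the shape of the change:

```scala
// Minimal sketch of flag-based rule selection, assuming stand-in rule types.
object PreparationsSketch {
  trait Rule { def name: String }
  final case class NamedRule(name: String) extends Rule

  // `subquery = true` excludes rules that are redundant or expensive
  // (e.g. via canonicalization) when preparing a subquery plan.
  def preparations(subquery: Boolean): Seq[Rule] = {
    val common = Seq(
      NamedRule("EnsureRequirements"),
      NamedRule("CollapseCodegenStages"))
    val topLevelOnly =
      if (subquery) Nil
      else Seq(NamedRule("InsertAdaptiveSparkPlan"))
    topLevelOnly ++ common
  }
}
```

The trade-off the comment raises is visible here: the flag keeps subquery preparation cheap, at the cost of threading an extra parameter through the call site instead of detecting the subquery case inside the rule itself.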




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]


