viirya commented on a change in pull request #34642:
URL: https://github.com/apache/spark/pull/34642#discussion_r766939068



##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlan.scala
##########
@@ -71,6 +71,11 @@ abstract class SparkPlan extends QueryPlan[SparkPlan] with Logging with Serializ
 
   val id: Int = SparkPlan.newPlanId()
 
+  /**
+   * Return true if this stage of the plan supports row-based execution.

Review comment:
       BTW, IMHO, if we add columnar support for more operators in the future, doesn't that already implicitly indicate we "prefer" it over the current execution paths (whole-stage codegen or the interpreted one)? Just like whole-stage codegen, it seems we would simply prefer it once we have verified that it generally gives better performance. I think this is similar to the situation with third-party extensions/libraries.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


