cloud-fan commented on a change in pull request #34779:
URL: https://github.com/apache/spark/pull/34779#discussion_r764156569



##########
File path: 
sql/core/src/main/scala/org/apache/spark/sql/SparkSessionExtensions.scala
##########
@@ -42,6 +42,7 @@ import org.apache.spark.sql.execution.{ColumnarRule, SparkPlan}
  * <li>Check Analysis Rules.</li>
  * <li>Optimizer Rules.</li>
  * <li>Pre CBO Rules.</li>
+ * <li>Early Scan Push-Down</li>

Review comment:
       After the many discussions in https://github.com/apache/spark/pull/30808, 
I'm genuinely worried about the naming of this new extension point.
   
   In general, this new extension point lets people inject custom data source 
operator pushdown rules, which run after the built-in ones. But that makes the 
existing `Pre CBO Rules` a confusing name, because the pushdown rules are also 
pre-CBO.
   
   We may need more time to think about the naming, or to consider whether we 
really need custom data source pushdown rules at all.
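   For readers following along, the existing injection pattern looks roughly 
like this (a sketch against the documented `SparkSessionExtensions` API; the 
rule class and extension class names here are hypothetical, and the new 
push-down point would presumably be injected the same way under whatever name 
we settle on):
   
   ```scala
   import org.apache.spark.sql.{SparkSession, SparkSessionExtensions}
   import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan
   import org.apache.spark.sql.catalyst.rules.Rule
   
   // Hypothetical rule: a no-op stand-in for a custom rule body.
   case class MyPreCBORule(session: SparkSession) extends Rule[LogicalPlan] {
     override def apply(plan: LogicalPlan): LogicalPlan = plan
   }
   
   // Registered via the spark.sql.extensions configuration.
   class MyExtensions extends (SparkSessionExtensions => Unit) {
     override def apply(extensions: SparkSessionExtensions): Unit = {
       // The existing extension point this thread compares against:
       extensions.injectPreCBORule(MyPreCBORule)
       // The point under review would add a sibling inject*Rule method here.
     }
   }
   ```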




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]


