cloud-fan commented on a change in pull request #30097:
URL: https://github.com/apache/spark/pull/30097#discussion_r521931459
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/DataSourceStrategy.scala
##########
@@ -243,16 +243,16 @@ case class DataSourceAnalysis(conf: SQLConf) extends Rule[LogicalPlan] with Cast
 * TODO: we should remove the special handling for hive tables after completely making hive as a
* data source.
*/
-class FindDataSourceTable(sparkSession: SparkSession) extends Rule[LogicalPlan] {
Review comment:
I think this rule should still keep the session parameter. A rule only needs to be dynamic when reading configs; the session here is used to get the catalog and pass it to DS V1.
In practice this doesn't matter, as we always run rules with the desired active session set, but it's better to be consistent: other rules also take the catalog as a parameter, e.g. `RelationConversions`.
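To illustrate the design point, here is a minimal, self-contained Scala sketch (not actual Spark code; `Session`, `Catalog`, and `FindTableRule` are hypothetical stand-ins) showing a rule that keeps the session as a constructor parameter so it can hand the catalog to a V1 data source, while configs would instead be read dynamically at apply time:

```scala
object RuleStyles {
  // Hypothetical stand-ins for SparkSession and its catalog.
  case class Catalog(name: String)
  case class Session(catalog: Catalog, conf: Map[String, String])

  // The style the comment argues for: the session is a constructor
  // parameter, so the rule can fetch the catalog from it and pass it on.
  class FindTableRule(session: Session) {
    def apply(plan: String): String =
      s"resolved '$plan' via catalog ${session.catalog.name}"
  }

  def main(args: Array[String]): Unit = {
    val session = Session(Catalog("hive"), Map("spark.sql.someFlag" -> "true"))
    val rule = new FindTableRule(session)
    println(rule.apply("SELECT 1"))
  }
}
```

The alternative (resolving the active session inside `apply`) would behave the same here because rules run with the desired active session set, but passing the session explicitly keeps this rule consistent with others that take the catalog as a parameter.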
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]