yaooqinn commented on a change in pull request #30045:
URL: https://github.com/apache/spark/pull/30045#discussion_r506352466
##########
File path: sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala
##########
@@ -55,10 +55,11 @@ private[sql] class SharedState(
   SharedState.setFsUrlStreamHandlerFactory(sparkContext.conf, sparkContext.hadoopConfiguration)
-  private val (conf, hadoopConf) = {
+  private[sql] val (conf, hadoopConf) = {
     // Load hive-site.xml into hadoopConf and determine the warehouse path which will be set into
     // both spark conf and hadoop conf avoiding be affected by any SparkSession level options
-    SharedState.loadHiveConfFile(sparkContext.conf, sparkContext.hadoopConfiguration)
+    SharedState.loadHiveConfFile(
+      sparkContext.conf, sparkContext.hadoopConfiguration, initialConfigs)
Review comment:
This is kind of a hidden bug here. If we create a SparkSession like this:
```scala
SparkSession.builder.sparkContext(sc).config("spark.sql.warehouse.dir", "abc").getOrCreate()
```
The `"spark.sql.warehouse.dir", "abc"` is used by the SessionCatalog
correctly, but we do not set it in the cloned conf in SharedState. It causes
the default database may use a wrong path here
https://github.com/apache/spark/blob/f253fad00c14376e950804849481fa6252cd8154/sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala#L135
And if we want to use it for RESET, we also need this config to be kept here.
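
To make the intended propagation concrete, here is a minimal sketch. The helper `cloneWithInitialConfigs` is hypothetical and is not Spark's actual `loadHiveConfFile`; it only illustrates the idea of overlaying the SparkSession-level `initialConfigs` (from the diff above) onto the cloned confs so a builder-supplied warehouse dir is not lost:
```scala
import org.apache.hadoop.conf.Configuration
import org.apache.spark.SparkConf

object WarehousePropagationSketch {
  // Hypothetical helper: clone the confs, then overlay the SparkSession-level
  // initialConfigs so options such as "spark.sql.warehouse.dir" passed via
  // builder.config(...) remain visible in the confs SharedState keeps.
  def cloneWithInitialConfigs(
      sparkConf: SparkConf,
      hadoopConf: Configuration,
      initialConfigs: Map[String, String]): (SparkConf, Configuration) = {
    val clonedSparkConf = sparkConf.clone()
    val clonedHadoopConf = new Configuration(hadoopConf)
    initialConfigs.foreach { case (k, v) =>
      clonedSparkConf.set(k, v)
      clonedHadoopConf.set(k, v)
    }
    (clonedSparkConf, clonedHadoopConf)
  }

  def main(args: Array[String]): Unit = {
    val base = new SparkConf().set("spark.sql.warehouse.dir", "/default/warehouse")
    val (conf, _) = cloneWithInitialConfigs(
      base, new Configuration(), Map("spark.sql.warehouse.dir" -> "abc"))
    // Without the overlay, the cloned conf would still point at /default/warehouse.
    println(conf.get("spark.sql.warehouse.dir")) // prints: abc
  }
}
```
With an overlay like this, the `"abc"` value from the builder would be visible both to the warehouse-path resolution in the cloned confs and to the default database location linked above.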