yaooqinn commented on a change in pull request #31868:
URL: https://github.com/apache/spark/pull/31868#discussion_r596170119
##########
File path:
sql/core/src/main/scala/org/apache/spark/sql/internal/SharedState.scala
##########
@@ -53,24 +53,33 @@ private[sql] class SharedState(
SharedState.setFsUrlStreamHandlerFactory(sparkContext.conf,
sparkContext.hadoopConfiguration)
private[sql] val (conf, hadoopConf) = {
-    // Load hive-site.xml into hadoopConf and determine the warehouse path which will be set into
-    // both spark conf and hadoop conf avoiding be affected by any SparkSession level options
-    val initialConfigsWithoutWarehouse = SharedState.resolveWarehousePath(
+    val warehousePath = SharedState.resolveWarehousePath(
       sparkContext.conf, sparkContext.hadoopConfiguration, initialConfigs)
val confClone = sparkContext.conf.clone()
val hadoopConfClone = new Configuration(sparkContext.hadoopConfiguration)
+ // Extract entries from `SparkConf` and put them in the Hadoop conf.
+ confClone.getAll.foreach { case (k, v) =>
Review comment:
I don't quite get this change. IIUC, only the `spark.hadoop`- and
`spark.hive`-prefixed entries get resolved and extracted into the Hadoop
configuration. After this change, it seems that a key set directly as
`hadoop.xyz` would take priority over one set via `spark.hadoop.hadoop.xyz`.
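
To make the priority concern concrete, here is a small Python simulation
(not Spark's actual implementation, which lives in Scala) of the usual
`spark.hadoop.` prefix-stripping: a SparkConf entry `spark.hadoop.hadoop.xyz`
becomes the Hadoop key `hadoop.xyz`, so whichever write happens last wins.
The function name and the sample keys are illustrative only.

```python
# Hypothetical sketch of copying `spark.hadoop.`-prefixed SparkConf
# entries into a Hadoop configuration (modeled here as plain dicts).
def apply_spark_hadoop_entries(spark_conf: dict, hadoop_conf: dict) -> dict:
    merged = dict(hadoop_conf)
    prefix = "spark.hadoop."
    for key, value in spark_conf.items():
        if key.startswith(prefix):
            # Strip the prefix: spark.hadoop.hadoop.xyz -> hadoop.xyz,
            # overwriting any pre-existing Hadoop entry with that key.
            merged[key[len(prefix):]] = value
    return merged

hadoop_conf = {"hadoop.xyz": "from-hadoop-conf"}
spark_conf = {"spark.hadoop.hadoop.xyz": "from-spark-conf"}

merged = apply_spark_hadoop_entries(spark_conf, hadoop_conf)
print(merged["hadoop.xyz"])  # -> from-spark-conf
```

In this ordering the stripped SparkConf entry overwrites the pre-existing
Hadoop key; the review question is whether the patched loop preserves that
precedence or inverts it.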
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]