Github user srowen commented on a diff in the pull request:
    --- Diff: 
sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
    @@ -741,7 +741,7 @@ private[sql] class SQLConf extends Serializable with 
CatalystConf with Logging {
       def variableSubstituteDepth: Int = getConf(VARIABLE_SUBSTITUTE_DEPTH)
    -  def warehousePath: String = new Path(getConf(WAREHOUSE_PATH)).toString
    +  def warehousePath: String = 
    --- End diff ---
    Well, before SPARK-15899 it was always interpreted as a local file path. 
After that change (i.e. right now) it would be interpreted as an HDFS path if 
that's what `fs.defaultFS` says. My proposal is:
    - It should default to `spark-warehouse` in the local filesystem working 
dir, because that's what the docs say (therefore, we have a bug at the moment 
after SPARK-15899)
    - It should be interpreted as a local path if no scheme is given, because 
other stuff in Spark works that way via `Utils.resolveURI`
    - It should be possible to set this to a non-local path, because you can do 
this with the Hive warehouse dir, which this option sort of parallels
    I think the current change meets those requirements. We do not want 
`/user/blah/warehouse` to be interpreted as an HDFS path by default, because 
that's not how it worked historically; I believe the current behavior is an 
error.
    Is that convincing?
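    To make the proposed rule concrete, here is a minimal sketch (not Spark's actual implementation; `resolveWarehousePath` is a hypothetical name) of the behavior described above: a path with no URI scheme resolves against the local filesystem working directory, much like `Utils.resolveURI`, while a path with an explicit scheme such as `hdfs://` is kept as given:

    ```scala
    import java.net.URI
    import java.nio.file.Paths

    // Hypothetical sketch of the proposed rule, not Spark's real code.
    def resolveWarehousePath(raw: String): URI = {
      val uri = new URI(raw)
      if (uri.getScheme == null) {
        // No scheme: treat as a local path, resolved against the working dir.
        Paths.get(raw).toAbsolutePath.toUri
      } else {
        // Explicit scheme (hdfs://, s3a://, ...): respect it as-is.
        uri
      }
    }

    // Default "spark-warehouse" and "/user/blah/warehouse" both stay local;
    // only an explicit hdfs:// URI points at HDFS.
    resolveWarehousePath("spark-warehouse")                       // file:/.../spark-warehouse
    resolveWarehousePath("/user/blah/warehouse")                  // file:/user/blah/warehouse
    resolveWarehousePath("hdfs://nn:8020/user/hive/warehouse")    // hdfs://nn:8020/user/hive/warehouse
    ```

    Under this rule the documented default (`spark-warehouse` in the local working directory) holds regardless of `fs.defaultFS`, while users who want an HDFS warehouse can still say so explicitly.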
