Github user avulanov commented on a diff in the pull request:
https://github.com/apache/spark/pull/15382#discussion_r82869165
--- Diff: sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala ---
@@ -757,7 +758,10 @@ private[sql] class SQLConf extends Serializable with CatalystConf with Logging {
   def variableSubstituteDepth: Int = getConf(VARIABLE_SUBSTITUTE_DEPTH)
-  def warehousePath: String = new Path(getConf(WAREHOUSE_PATH)).toString
+  def warehousePath: String = {
+    val path = new Path(getConf(WAREHOUSE_PATH))
+    FileSystem.get(path.toUri, new Configuration()).makeQualified(path).toString
--- End diff ---
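For context, a minimal standalone sketch of what the patched `warehousePath` evaluates to (the `QualifySketch` object and `qualify` helper below are illustrative names, not part of the patch):

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object QualifySketch {
  // Mirrors the patched warehousePath: pick the filesystem from the path's URI,
  // falling back to fs.defaultFS when the path has no scheme, then qualify it.
  def qualify(raw: String, conf: Configuration = new Configuration()): String = {
    val path = new Path(raw)
    FileSystem.get(path.toUri, conf).makeQualified(path).toString
  }

  def main(args: Array[String]): Unit = {
    // No scheme: the result depends on fs.defaultFS (file:/... locally,
    // hdfs://namenode/... on a cluster), which is the concern quoted below.
    println(qualify("/user/hive/warehouse"))
    // Explicit scheme: qualified against that filesystem instead.
    println(qualify("file:/tmp/spark-warehouse"))
  }
}
```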
You [mentioned](https://github.com/apache/spark/pull/13868#discussion_r82154809) that the original issue is as follows: _"...the usages of the new makeQualifiedPath are a bit wrong in that they explicitly resolve the path against the Hadoop file system, which can be HDFS."_ Should we instead look into the code that calls `makeQualifiedPath` on `warehousePath` with the Hadoop filesystem configuration? The fix there would be a special case for paths that do not have a scheme. Actually, could you give a link to that code? I could not find it right away.
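A rough sketch of that special case as I understand it (the object and method names below are mine, for illustration only, and this is not Spark's existing code): qualify the path only when it has no scheme, and against the local filesystem rather than `fs.defaultFS`, so a bare path never silently resolves to HDFS.

```scala
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

object SchemeAwareWarehousePath {
  // Illustrative only: special-case paths that carry no scheme.
  def resolve(raw: String): String = {
    val path = new Path(raw)
    if (path.toUri.getScheme == null) {
      // No scheme: qualify against the local filesystem instead of fs.defaultFS,
      // so "/tmp/warehouse" becomes "file:/tmp/warehouse" rather than an HDFS path.
      FileSystem.getLocal(new Configuration()).makeQualified(path).toString
    } else {
      // Scheme present (file:, hdfs:, s3a:, ...): keep the path as written.
      path.toString
    }
  }
}
```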