Cheng Lian created SPARK-8458:
---------------------------------
Summary: ORC data source can only write to the file system defined in Hadoop configuration
Key: SPARK-8458
URL: https://issues.apache.org/jira/browse/SPARK-8458
Project: Spark
Issue Type: Bug
Components: SQL
Affects Versions: 1.4.0
Reporter: Cheng Lian
Assignee: Cheng Lian
Priority: Blocker
To reproduce this issue, we first define {{fs.default.name}} in Hadoop's
{{core-site.xml}}:
{noformat}
<configuration>
...
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
...
</configuration>
{noformat}
Then execute the following Spark shell snippet:
{code}
sqlContext.range(0, 10).coalesce(1).write.mode("overwrite").format("orc").save("file:///tmp/foo")
{code}
The write job succeeds, but only the {{_SUCCESS}} marker shows up under
{{file:///tmp/foo}}. The data files are actually written to the HDFS directory
{{/tmp/foo/_temporary}} and left uncommitted.
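To confirm where the files actually landed, one can list both file systems from the same shell. This is just a sanity check against the snippet above, assuming {{sc}} is the shell's {{SparkContext}}:
{code}
import org.apache.hadoop.fs.{FileSystem, Path}

// Local file system: only the _SUCCESS marker is present.
val localFs = FileSystem.getLocal(sc.hadoopConfiguration)
localFs.listStatus(new Path("file:///tmp/foo")).foreach(s => println(s.getPath))

// Default file system (HDFS, per core-site.xml): the uncommitted data files.
val hdfs = FileSystem.get(sc.hadoopConfiguration)
hdfs.listStatus(new Path("/tmp/foo/_temporary")).foreach(s => println(s.getPath))
{code}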
The reason is that [this
line|https://github.com/apache/spark/blob/9b2002722273f98e193ad6cd54c9626292ab27d1/sql/hive/src/main/scala/org/apache/spark/sql/hive/orc/OrcRelation.scala#L113]
uses {{Path.toUri.getPath}} rather than {{Path.toString}} to build the path
string passed to the {{OrcRecordWriter}} constructor, which strips the scheme
from the path. Without the scheme, the writer resolves the path against the
default file system defined by {{fs.default.name}} instead of the file system
requested in {{save()}}.
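The scheme-stripping behavior is easy to verify in the shell. A minimal sketch (the file name below is made up for illustration):
{code}
import org.apache.hadoop.fs.Path

val path = new Path("file:///tmp/foo/part-r-00001.orc")

path.toString       // "file:/tmp/foo/part-r-00001.orc" -- scheme preserved
path.toUri.getPath  // "/tmp/foo/part-r-00001.orc"      -- scheme stripped, so
                    // the writer falls back to fs.default.name (HDFS here)
{code}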