[ https://issues.apache.org/jira/browse/SPARK-4687?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14242055#comment-14242055 ]
Sandy Ryza commented on SPARK-4687:
-----------------------------------
I think [~xuefuz] can probably motivate this better, but from what I
understand, the main use case is Hive's map joins and map bucket joins, in
which a smaller table needs to be distributed to every node. The smaller table
typically resides in HDFS, and is the output of a separate job. For map joins,
the smaller table is composed of a bunch of files in a single folder. For map
bucket joins, the smaller table is composed of a single folder with a bunch of
bucket folders underneath, each containing a set of data files. At the very
least, doing the prefixing would require a bunch of extra FS operations to
rename all the subfiles. That might also make them difficult to read from
other Hive implementations?
Another, totally separate situation where this kind of thing would have been
useful came up a while ago, when I was calling http://ctakes.apache.org/ in a
distributed fashion. Calling into it requires letting it load a bunch of files
from a particular directory structure. We ultimately had to go with a
workaround that required installing the directory on every node.
Beyond the issues I outlined in my patch, are there particular edge cases
you're worried about where we wouldn't be able to copy the behavior from
addFile?
> SparkContext#addFile doesn't keep file folder information
> ---------------------------------------------------------
>
> Key: SPARK-4687
> URL: https://issues.apache.org/jira/browse/SPARK-4687
> Project: Spark
> Issue Type: Bug
> Affects Versions: 1.2.0
> Reporter: Jimmy Xiang
>
> Files added with SparkContext#addFile are loaded with Utils#fetchFile before
> a task starts. However, Utils#fetchFile puts all files directly under the
> Spark root on the worker node. We should have an option to keep the folder
> information.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)