[ https://issues.apache.org/jira/browse/HADOOP-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12708251#action_12708251 ]
Doug Cutting commented on HADOOP-5795:
--------------------------------------
> Doug, can you please confirm?
Yes, I had assumed that any directories in the request would be expanded. The
goal is to have something, callable from FileInputFormat, that takes a list of
patterns. When the patterns contain no wildcards, we should be able to create
splits with a single RPC to the NameNode. So the semantics should match those
of FileInputFormat in this case.
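To make the shape concrete, here is a rough sketch of what such a call might
look like; the interface name, method signature and return type below are
assumptions for discussion, not an existing FileSystem API:
{code:java}
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.Path;

/**
 * Hypothetical bulk call, sketched for discussion only.  Each input pattern
 * is expanded: globs are matched and directories are expanded to the files
 * they contain, then the block locations of every resulting file are
 * returned.  When no pattern contains a wildcard, an HDFS implementation
 * could answer the whole request with a single NameNode RPC.
 */
public interface BulkBlockLocations {
  Map<FileStatus, BlockLocation[]> getFileBlockLocations(Path[] patterns)
      throws IOException;
}
{code}
With something of this shape, FileInputFormat.getSplits() could hand over its
whole input-path list in one call instead of looping over files.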
> Add a bulk FileSystem.getFileBlockLocations
> -------------------------------------------
>
> Key: HADOOP-5795
> URL: https://issues.apache.org/jira/browse/HADOOP-5795
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs
> Affects Versions: 0.20.0
> Reporter: Arun C Murthy
> Assignee: Jakob Homan
> Fix For: 0.21.0
>
>
> Currently map-reduce applications (specifically file-based input-formats) use
> FileSystem.getFileBlockLocations to compute splits. However, they are forced
> to call it once per file (see the sketch after this list).
> The downsides are multiple:
> # Even with only a few thousand input files, the number of RPCs quickly
> becomes noticeable.
> # The current implementation of getFileBlockLocations is slow because each
> call results in a 'search' in the namesystem. With a few thousand input
> files, that means as many RPCs and as many 'searches'.
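> For context, a simplified sketch of today's per-file pattern (illustrative
> only, not the exact FileInputFormat code):
> {code:java}
> import java.io.IOException;
>
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.BlockLocation;
> import org.apache.hadoop.fs.FileStatus;
> import org.apache.hadoop.fs.FileSystem;
> import org.apache.hadoop.fs.Path;
>
> public class PerFileLocations {
>   // One listStatus() call, then one getFileBlockLocations() RPC per file.
>   public static void computeSplits(Configuration conf, Path inputDir)
>       throws IOException {
>     FileSystem fs = inputDir.getFileSystem(conf);
>     for (FileStatus file : fs.listStatus(inputDir)) {
>       BlockLocation[] locations =
>           fs.getFileBlockLocations(file, 0, file.getLen());
>       // ... build splits from 'locations' ...
>     }
>   }
> }
> {code}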
> It would be nice to have a FileSystem.getFileBlockLocations that can take a
> directory and return the block locations for all files in that directory.
> That would eliminate both the per-file RPCs and the per-file 'search',
> replacing them with a single 'scan'.
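> As a sketch only (this overload does not exist; its shape is an assumption),
> reusing 'fs' and 'inputDir' from the sketch above:
> {code:java}
> // Hypothetical directory-level bulk call: one RPC, and the namesystem can
> // answer it with a single 'scan' instead of one 'search' per file.
> // (java.util.Map is assumed as the return type here.)
> Map<FileStatus, BlockLocation[]> all = fs.getFileBlockLocations(inputDir);
> {code}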
> When I tested this for terasort, a moderate job with 8000 input files, the
> runtime halved from the current 8s to 4s. Clearly this is even more important
> for latency-sensitive applications...