[
https://issues.apache.org/jira/browse/HADOOP-5795?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12720318#action_12720318
]
dhruba borthakur commented on HADOOP-5795:
------------------------------------------
I think the extended version of the API would help with incremental distcp
once hdfs-append is supported. We currently use "distcp -update" to make an
incremental copy of files whose length has changed; this proposed extended
API (and more) would let distcp copy only the changed portions of a file.
> Add a bulk FileSystem.getFileBlockLocations
> -------------------------------------------
>
> Key: HADOOP-5795
> URL: https://issues.apache.org/jira/browse/HADOOP-5795
> Project: Hadoop Core
> Issue Type: New Feature
> Components: dfs
> Affects Versions: 0.20.0
> Reporter: Arun C Murthy
> Assignee: Jakob Homan
> Fix For: 0.21.0
>
>
> Currently map-reduce applications (specifically file-based input-formats) use
> FileSystem.getFileBlockLocations to compute splits. However, they are forced
> to call it once per file.
> The downsides are multiple:
> # Even with a few thousand files to process, the number of RPCs quickly
> becomes noticeable
> # The current implementation of getFileBlockLocations is too slow, since
> each call results in a 'search' in the namesystem. With a few thousand
> input files, that means as many RPCs and 'searches'.
> It would be nice to have a FileSystem.getFileBlockLocations which can take
> a directory and return the block locations for all files in that directory.
> This would eliminate the per-file RPC and replace the per-file 'search' with a single 'scan'.
> When I tested this for terasort, a moderate job with 8000 input files, the
> runtime halved from the current 8s to 4s. Clearly this is much more important
> for latency-sensitive applications...
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.