[ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15067007#comment-15067007 ]
Mithun Radhakrishnan commented on HADOOP-11794:
-----------------------------------------------
[~yzhangal],
bq. Appreciate your excellent work!
You're too kind. :]
bq. But I'm making it more flexible here, so that we can support a variable
number of blocks per split.
I agree with the principle of what you're suggesting. Combining multiple splits
into a larger split (based on size) is a problem that
{{CombineFileInputFormat}} provides a solution for. Do you think we can use
{{CombineFileInputFormat}} to combine block-level splits into a larger split?
bq. We need some new client-namenode API protocol to get back the locatedBlocks
for the specified block range...
Hmm... Do we? DistCp copies whole files (even if at a split level). Since we
can retrieve located blocks for all blocks in the file, shouldn't that be
enough? We could group locatedBlocks by block-id. Perhaps I'm missing something.
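To illustrate the grouping idea, here is a plain-Java sketch. Note that {{BlockLoc}} and {{groupByBlock}} are hypothetical stand-ins, not the real {{LocatedBlock}}/{{BlockLocation}} types; the point is only that per-replica locations returned for a whole file can be bucketed per block on the client side, with no new client-namenode API:

```java
import java.util.*;
import java.util.stream.*;

// Hypothetical sketch: BlockLoc stands in for an HDFS LocatedBlock entry.
// Grouping replica hosts by block index yields one block-level split
// candidate per block, all derived from a single whole-file location lookup.
public class BlockGrouping {

    // One (block index, replica host) pair, as reported per replica.
    public record BlockLoc(long blockIndex, String host) {}

    // Group replica hosts by block index, keeping blocks in file order.
    public static Map<Long, List<String>> groupByBlock(List<BlockLoc> locs) {
        return locs.stream().collect(Collectors.groupingBy(
                BlockLoc::blockIndex,
                TreeMap::new,
                Collectors.mapping(BlockLoc::host, Collectors.toList())));
    }

    public static void main(String[] args) {
        List<BlockLoc> locs = List.of(
                new BlockLoc(0, "dn1"), new BlockLoc(0, "dn2"),
                new BlockLoc(1, "dn3"), new BlockLoc(1, "dn1"));
        // Each map entry describes one block and its replica hosts.
        System.out.println(groupByBlock(locs));
    }
}
```

From there, {{CombineFileInputFormat}} (or similar size-based grouping) could merge adjacent block entries into larger splits.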
> distcp can copy blocks in parallel
> ----------------------------------
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/distcp
> Affects Versions: 0.21.0
> Reporter: dhruba borthakur
> Assignee: Yongjun Zhang
> Attachments: MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these
> files, the tasks either take a very long time or eventually fail. A better
> approach for distcp would be to copy all the source blocks in parallel, and
> then stitch the blocks back into files at the destination via the HDFS Concat
> API (HDFS-222)
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)