[
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15048017#comment-15048017
]
Yongjun Zhang commented on HADOOP-11794:
----------------------------------------
Thanks [~mithun] and [~dhruba]!
There will be some complexity with regard to block size, since we now support
variable-size blocks (introduced by the append feature). We might need to ask
the NN for the sizes of all the blocks a file has, and avoid having a split
boundary fall in the middle of a block. Another possibility is to split the
block in two when that happens (since we now support blocks of multiple sizes);
I have not looked deeper at this yet.
I'm also thinking we could have one FileSplit cover multiple file blocks; we
could make that an input option to distcp.
Thanks again.
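As a rough illustration of the block-aligned grouping idea above, here is a minimal plain-Java sketch (this is not distcp code; the class and method names are hypothetical). It groups consecutive variable-size block lengths into splits, closing a split only on a block boundary, so no split ever starts or ends in the middle of a block:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: group a file's block lengths (which may vary,
// e.g. after an append) into FileSplit-sized chunks whose boundaries
// always coincide with block boundaries.
public class BlockAlignedSplitter {

    // Returns the byte length of each split. targetSplitSize is a soft
    // limit: a split is closed as soon as it reaches or exceeds the
    // target, so boundaries stay on block edges rather than bisecting
    // a block.
    static List<Long> splitLengths(long[] blockLengths, long targetSplitSize) {
        List<Long> splits = new ArrayList<>();
        long current = 0;
        for (long block : blockLengths) {
            current += block;
            if (current >= targetSplitSize) {
                splits.add(current);
                current = 0;
            }
        }
        if (current > 0) {
            splits.add(current); // trailing partial split
        }
        return splits;
    }

    public static void main(String[] args) {
        // Variable-size blocks, e.g. 128MB, 128MB, a 7MB appended block, 128MB
        long[] blocks = {128, 128, 7, 128};
        System.out.println(splitLengths(blocks, 200)); // prints [256, 135]
    }
}
```

With a 200-unit target, the first split absorbs two whole blocks (256) rather than stopping mid-block at 200, and the remainder (7 + 128 = 135) forms the final split.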
> distcp can copy blocks in parallel
> ----------------------------------
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/distcp
> Affects Versions: 0.21.0
> Reporter: dhruba borthakur
> Assignee: Yongjun Zhang
> Attachments: MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these
> files, the tasks either take a very long time or eventually fail. A better
> approach for distcp would be to copy all the source blocks in parallel, and
> then stitch the blocks back into files at the destination via the HDFS Concat
> API (HDFS-222).
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)