[
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937230#comment-15937230
]
Yongjun Zhang commented on HADOOP-11794:
----------------------------------------
Hi [~steve_l], [~chris.douglas],
Thanks for the feedback.
I think if we know the job will fail, we want it to fail sooner rather than
later. That's why I put the DistributedFileSystem check at the beginning of
distcp.
Imagine if we run the job halfway and then find it doesn't work: we have not
only wasted computing power, but possibly also left the cluster in an
inconsistent state.
So I think we should check whether the file system supports getBlockLocations
and concat at the very beginning of distcp. That, in my opinion, can be done as
a follow-up jira, because supporting a different filesystem involves not only
the check here, but also new unit tests for the corresponding file systems, and
system-level testing. Does this make sense to you?
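The fail-fast idea above can be sketched roughly as below. Note this uses toy stand-in types, not the actual Hadoop classes, and the exact check (testing for DistributedFileSystem rather than a generic capability API) is an assumption about how the patch gates the feature:

```java
// Toy sketch of the pre-flight check described above. FileSystem and
// DistributedFileSystem here are stand-ins, not the Hadoop classes;
// the point is only to reject unsupported filesystems before any
// map tasks are launched, rather than failing partway through.
interface FileSystem { }

// Stand-in for a filesystem that supports getBlockLocations and concat.
class DistributedFileSystem implements FileSystem { }

// Stand-in for a filesystem without those capabilities.
class PlainFileSystem implements FileSystem { }

public class DistCpPreflight {
    // Throws early, before any work is done, if the filesystem cannot
    // support block-level parallel copy followed by concat.
    static void checkSupported(FileSystem fs) {
        if (!(fs instanceof DistributedFileSystem)) {
            throw new IllegalArgumentException(
                "block-parallel copy requires a filesystem with "
                + "getBlockLocations and concat support");
        }
    }

    public static void main(String[] args) {
        checkSupported(new DistributedFileSystem()); // passes silently
        try {
            checkSupported(new PlainFileSystem());
        } catch (IllegalArgumentException e) {
            System.out.println("failed fast: " + e.getMessage());
        }
    }
}
```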
Thanks.
> distcp can copy blocks in parallel
> ----------------------------------
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/distcp
> Affects Versions: 0.21.0
> Reporter: dhruba borthakur
> Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch,
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch,
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch,
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these
> files, the tasks either take a very long time or eventually fail. A better
> approach for distcp would be to copy all the source blocks in parallel, and
> then stitch the blocks back into files at the destination via the HDFS
> concat API (HDFS-222).