[
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15956675#comment-15956675
]
Omkar Aradhya K S commented on HADOOP-11794:
--------------------------------------------
{quote}
BTW, Steve still has an item for you to follow-up here
https://issues.apache.org/jira/browse/HADOOP-11794?focusedCommentId=15938217&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15938217
{quote}
[~yzhangal] Sorry for the late reply. Thanks for pointing this out. I almost
missed this!
{quote}
Omkar: if ADL doesn't implement the distcp contract test, you might want to
follow up this patch with a distcp test that forces the use of the concat
operation.
{quote}
[~steve_l] I will look into this.
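For reference, a minimal sketch of what such a test might look like: copy a multi-block file with a very small blocks-per-chunk value so distcp must split the file and finish with a concat on the destination. This assumes the {{-blocksperchunk}} switch introduced by this patch; the paths, chunk size, and verification scaffolding are illustrative only, not the actual contract test.
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.tools.DistCp;
import org.apache.hadoop.util.ToolRunner;

public class TestDistCpForcedConcat {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Hypothetical source and destination; a real test would use a
    // MiniDFSCluster or the ADL contract-test configuration instead.
    Path src = new Path("/tmp/distcp-src/largefile");
    Path dst = new Path("/tmp/distcp-dst");

    // ... create a file at 'src' that spans several blocks ...

    // -blocksperchunk 1 splits the file into per-block chunks, so the copy
    // has to be finished by stitching the chunks together with concat.
    int rc = ToolRunner.run(conf, new DistCp(conf, null),
        new String[] {"-blocksperchunk", "1", src.toString(), dst.toString()});
    if (rc != 0) {
      throw new AssertionError("distcp exited with " + rc);
    }

    FileSystem dstFs = dst.getFileSystem(conf);
    // ... assert that the stitched destination file matches the source
    //     in length and checksum ...
  }
}
{code}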
> Enable distcp to copy blocks in parallel
> ----------------------------------------
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/distcp
> Affects Versions: 0.21.0
> Reporter: dhruba borthakur
> Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch,
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch,
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch,
> HADOOP-11794.009.patch, HADOOP-11794.010.patch, MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these
> files, the tasks either take a very long time or eventually fail. A better
> approach for distcp would be to copy all the source blocks in parallel, and
> then stitch the blocks back into files at the destination via the HDFS concat
> API (HDFS-222).
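A minimal sketch of the stitch step the description refers to, using {{FileSystem#concat}} (the API exposed by HDFS-222). The chunk paths, their ordering, and the helper class are illustrative only; the actual patch builds the chunk list from the copy listing.
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcatChunks {
  /**
   * Merges the already-copied chunk files into the target in order.
   * concat moves the blocks of each chunk onto 'target' and removes the
   * chunk files; all paths must live on the same (HDFS) filesystem.
   */
  public static void stitch(Configuration conf, Path target, Path[] chunks)
      throws IOException {
    FileSystem fs = target.getFileSystem(conf);
    fs.concat(target, chunks);
  }

  public static void main(String[] args) throws IOException {
    Configuration conf = new Configuration();
    // Illustrative layout: the first chunk becomes the final file,
    // the remaining chunks are appended onto it.
    Path target = new Path("hdfs://nn:8020/dst/largefile.chunk0");
    Path[] rest = {
        new Path("hdfs://nn:8020/dst/largefile.chunk1"),
        new Path("hdfs://nn:8020/dst/largefile.chunk2")
    };
    stitch(conf, target, rest);
  }
}
{code}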