[
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15929782#comment-15929782
]
Steve Loughran commented on HADOOP-11794:
-----------------------------------------
# This is an opportunity to switch distcp over to the SLF4J logger class; existing
logging can be left alone, but all new log statements can use SLF4J's inline
parameterized logging (a sketch follows this list)
# What does "YJD ls before distcp" in tests mean?
# {{TestDistCpSystem}} does its cleanup in {{testDistcpLargeFile}} as the last
operation of a successful test run. Does it still clean up on a failure? If not,
what state does a failed run leave behind, and does it matter? (A teardown-based
sketch follows this list.)
# In the s3a tests we now have a {{-Pscale}} profile for scale tests, with
configurable file sizes. It might be nice to have here too, but it's a complex
piece of work: not really justifiable except as part of a bigger set of scale tests
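
For point 1, a minimal sketch of what SLF4J-style logging could look like in new distcp code; the class and message below are illustrative only, not taken from any patch:

{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CopyMapperLoggingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(CopyMapperLoggingSketch.class);

  void logChunkCopy(String source, long offset, long length) {
    // SLF4J placeholders avoid string concatenation and the
    // if (LOG.isDebugEnabled()) guards needed with commons-logging.
    LOG.debug("Copying chunk of {} at offset {} length {}",
        source, offset, length);
  }
}
{code}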
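For point 3, one way to make the cleanup unconditional would be a JUnit {{@After}} teardown, sketched below with a hypothetical {{deletePaths()}} helper; the actual test structure may well differ:

{code:java}
import org.junit.After;
import org.junit.Test;

public class TestDistCpSystemSketch {

  @After
  public void tearDown() throws Exception {
    // Runs after every test, pass or fail, so a failed run does not
    // leave partial source/target data behind.
    deletePaths();
  }

  @Test
  public void testDistcpLargeFile() throws Exception {
    // ... copy and verify; no trailing cleanup call needed here ...
  }

  // Hypothetical helper: remove the test source and destination directories.
  private void deletePaths() throws Exception {
  }
}
{code}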
> distcp can copy blocks in parallel
> ----------------------------------
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/distcp
> Affects Versions: 0.21.0
> Reporter: dhruba borthakur
> Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch,
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch,
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch,
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these
> files, the tasks either take a very long time or eventually fail. A better
> approach for distcp would be to copy all the source blocks in parallel, and
> then stitch the blocks back into files at the destination via the HDFS concat
> API (HDFS-222).
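
For reference, a minimal sketch of the proposed stitch step using {{FileSystem.concat}} (HDFS-222), assuming the destination is HDFS and the per-block temporary files already exist; the paths and namenode URI are illustrative only:

{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcatSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(URI.create("hdfs://nn:8020"), conf);

    // The first chunk becomes the target; the remaining chunks are
    // appended to it.
    Path target = new Path("/dest/bigfile.part0");
    Path[] rest = {
        new Path("/dest/bigfile.part1"),
        new Path("/dest/bigfile.part2")
    };

    // concat moves the blocks of the source files onto the target without
    // copying data, then removes the source files.
    fs.concat(target, rest);
  }
}
{code}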