[
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15830849#comment-15830849
]
Yongjun Zhang edited comment on HADOOP-11794 at 1/24/17 6:08 PM:
-----------------------------------------------------------------
Sorry for the long delay, attaching patch rev 001.
With this patch, we can pass -chunksize <x> to distcp to tell it to split
large files into chunks, each containing the number of blocks specified by
this new parameter; only the last chunk of a file may be smaller. CopyMapper
treats each chunk as a single file so that the chunks can be copied in
parallel, and CopyCommitter then concatenates the chunks into the single
target file.
When this switch is used, we enable preserving block size, disable the
randomization of entries in the sequence file, and disable the append
feature. Further optimizations can be done as follow-ups.
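As a rough sketch of the chunk-boundary math described above (class and
method names here are illustrative, not taken from the patch), each chunk
spans -chunksize blocks of the file, and the last chunk covers whatever
bytes remain:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch, not the patch code: compute (offset, length) pairs
// for the chunks of a file, where each chunk spans chunkSizeInBlocks blocks
// and only the last chunk may be shorter.
public class ChunkSplit {
    public static List<long[]> split(long fileLen, long blockSize,
                                     int chunkSizeInBlocks) {
        List<long[]> chunks = new ArrayList<>();
        long chunkBytes = blockSize * chunkSizeInBlocks;
        for (long offset = 0; offset < fileLen; offset += chunkBytes) {
            // The last chunk is clipped to the end of the file.
            long len = Math.min(chunkBytes, fileLen - offset);
            chunks.add(new long[] { offset, len });
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Example: a 1280 MB file with 128 MB blocks, -chunksize 4
        // yields chunks of 512 MB, 512 MB, and 256 MB.
        long mb = 1024L * 1024L;
        for (long[] c : split(1280 * mb, 128 * mb, 4)) {
            System.out.println((c[0] / mb) + " " + (c[1] / mb));
        }
    }
}
```

Each (offset, length) pair would then be handed to a separate map task, which
is what lets the copy of one large file proceed in parallel.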
Any review is very welcome!
Thanks a lot.
In addition, thanks [~jojochuang], [~xiaochen] for assisting in an initial
draft we did a while back; the three of us will be contributors on this
jira.
> distcp can copy blocks in parallel
> ----------------------------------
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/distcp
> Affects Versions: 0.21.0
> Reporter: dhruba borthakur
> Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch,
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these
> files, the tasks either take a very long time or eventually fail. A better
> approach for distcp would be to copy all the source blocks in parallel, and
> then stitch the blocks back into files at the destination via the HDFS
> Concat API (HDFS-222)
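> The stitching step could look roughly like the sketch below, which uses the
> real DistributedFileSystem#concat API from HDFS-222; the paths and the
> chunk-naming scheme are illustrative assumptions, not part of this jira, and
> running it requires an actual HDFS cluster:

```java
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Hedged sketch, not the patch code: stitch previously copied chunk files
// back into one destination file using DistributedFileSystem#concat.
public class ConcatSketch {
    public static void main(String[] args) throws IOException {
        FileSystem fs = FileSystem.get(new Configuration());
        if (!(fs instanceof DistributedFileSystem)) {
            throw new IllegalStateException("concat is HDFS-only");
        }
        DistributedFileSystem dfs = (DistributedFileSystem) fs;

        // Hypothetical chunk layout: the target holds the first chunk and
        // the remaining chunks are concatenated onto it in order. concat
        // removes the source files as a side effect.
        Path target = new Path("/dst/bigfile.chunk0");
        Path[] rest = {
            new Path("/dst/bigfile.chunk1"),
            new Path("/dst/bigfile.chunk2")
        };
        dfs.concat(target, rest);

        // Finally give the stitched file its real name.
        fs.rename(target, new Path("/dst/bigfile"));
    }
}
```

> Note that concat requires all source files but the last to end on a block
> boundary, which is why the parallel-copy scheme must preserve block size.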
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)