[
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15848116#comment-15848116
]
Yongjun Zhang edited comment on HADOOP-11794 at 2/1/17 9:49 PM:
----------------------------------------------------------------
Hi [~mithun],
Thank you so much for the review and all the good comments!
I just uploaded rev 004 to address all of them.
* To answer your question in 3: to avoid the extra RPC call that fetches
all blocks of a file, we ONLY make that call when the file size is bigger
than {{blockSize * blocksPerChunk}}, and then check whether the number of
blocks is bigger than {{blocksPerChunk}} (see the sketch below). So it's
possible that a file with many small blocks is not split. But I think that
should be OK, because the patch here is intended for really large files,
and since variable-size blocks are infrequent, this check should be
reasonably good. We could still improve it in the future if necessary.
* About 6: the logging is already done in the method {{mergeFileChunks}}
when debug logging is enabled.
In addition, I also added one more condition: the source FS must be
DistributedFileSystem; otherwise, the file won't be split either.
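Here is a minimal sketch of that split decision; the class, method, and
variable names are illustrative only, not the actual patch code:
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

class SplitDecisionSketch {
  // Decide whether a source file should be split into chunks; a sketch
  // of the heuristic described above, not the patch itself.
  static boolean shouldSplit(FileSystem sourceFs, FileStatus status,
      long blockSize, int blocksPerChunk) throws IOException {
    // Chunks are merged back with HDFS concat at the destination, so
    // only files on a DistributedFileSystem are ever split.
    if (!(sourceFs instanceof DistributedFileSystem)) {
      return false;
    }
    // Cheap check first: skip the extra RPC unless the file is large
    // enough to possibly hold more than blocksPerChunk blocks.
    if (status.getLen() <= blockSize * (long) blocksPerChunk) {
      return false;
    }
    // Only now pay for the RPC that fetches the block list.
    BlockLocation[] blocks =
        sourceFs.getFileBlockLocations(status, 0, status.getLen());
    return blocks.length > blocksPerChunk;
  }
}
{code}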
Wonder if you could take a look at the new patch.
Thanks a lot.
> distcp can copy blocks in parallel
> ----------------------------------
>
> Key: HADOOP-11794
> URL: https://issues.apache.org/jira/browse/HADOOP-11794
> Project: Hadoop Common
> Issue Type: Improvement
> Components: tools/distcp
> Affects Versions: 0.21.0
> Reporter: dhruba borthakur
> Assignee: Yongjun Zhang
> Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch,
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch,
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that
> are greater than 1 TB with a block size of 1 GB. If we use distcp to copy
> these files, the tasks either take a very long time or eventually fail. A
> better approach would be for distcp to copy all the source blocks in
> parallel, and then stitch the blocks back into files at the destination
> via the HDFS Concat API (HDFS-222).
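To illustrate the stitching step the description refers to, here is a
hedged sketch using the real {{DistributedFileSystem#concat}} API; the
helper name, chunk layout, and rename step are assumptions, not actual
distcp code:
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

class ConcatSketch {
  // Stitch already-copied chunk files back into one file. concat()
  // appends each source onto the target and deletes the sources, so
  // afterwards firstChunk holds the complete file.
  static void mergeChunks(DistributedFileSystem dfs, Path firstChunk,
      Path[] remainingChunks, Path finalPath) throws IOException {
    dfs.concat(firstChunk, remainingChunks);
    // Move the merged file to its final destination path.
    if (!dfs.rename(firstChunk, finalPath)) {
      throw new IOException("Failed to rename " + firstChunk
          + " to " + finalPath);
    }
  }
}
{code}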