[ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15044105#comment-15044105 ]

Mithun Radhakrishnan commented on HADOOP-11794:
-----------------------------------------------

[~yzhangal]: Thank you, sir. Please do. Hive has kept me too busy to 
devote time here. I'd be happy to review your work.

I had a patch a couple of years ago that split files on block boundaries, 
copied the chunks over in parallel, and then stitched them back together using 
{{DistributedFileSystem.concat()}} in a reduce step. If I can find the patch, 
I'll ping it to you, but it's not terribly hard to do this from scratch. The 
prototype had very promising performance.
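The split step above can be sketched roughly as follows (a minimal illustration, not the actual patch; the class and method names are hypothetical). Each {offset, length} pair would become a separate copy task, and the resulting part files would later be re-joined at the destination with {{DistributedFileSystem.concat()}}, which requires every source except the last to end on a full block:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: divide a file of 'fileLen' bytes into chunks aligned
// on 'blockSize' boundaries. Each chunk is copied by its own task; the part
// files are then stitched back together via DistributedFileSystem.concat().
public class BlockSplits {

    /** Returns {offset, length} pairs, one per block-aligned chunk. */
    static List<long[]> blockAlignedSplits(long fileLen, long blockSize) {
        List<long[]> splits = new ArrayList<>();
        for (long off = 0; off < fileLen; off += blockSize) {
            // Every chunk is exactly one block, except possibly the last one.
            splits.add(new long[] { off, Math.min(blockSize, fileLen - off) });
        }
        return splits;
    }

    public static void main(String[] args) {
        // e.g. a 2.5 GB file with a 1 GB block size yields three chunks
        for (long[] s : blockAlignedSplits(2_684_354_560L, 1_073_741_824L)) {
            System.out.println(s[0] + "," + s[1]);
        }
    }
}
```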

I look forward to your solution.

> distcp can copy blocks in parallel
> ----------------------------------
>
>                 Key: HADOOP-11794
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11794
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 0.21.0
>            Reporter: dhruba borthakur
>            Assignee: Mithun Radhakrishnan
>         Attachments: MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a very long time or eventually fail. A better 
> approach for distcp would be to copy all the source blocks in parallel, and 
> then stitch the blocks back into files at the destination via the HDFS 
> Concat API (HDFS-222).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
