[ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15937359#comment-15937359 ]

Chris Douglas commented on HADOOP-11794:
----------------------------------------

[~omkarksa], can you post your patch?

I see your point, [~yzhangal], but shouldn't the cleanup/rollback code handle 
the inconsistency? Moreover, doesn't distcp also use append to support sync, 
without first verifying that the destination FS supports it? Wasted cycles are 
unlikely: the failure isn't intermittent; it happens 100% of the time, for 
unambiguous reasons. Surely someone would test this option before trying it on 
a significant deployment.

To fail before submission, this could use {{concat}} during job setup when the 
option is enabled, e.g., by parallelizing the scan and concatenating the result 
into the input file [1]; a rough sketch follows. More generally, distcp could 
add a phase to job setup that verifies the options are consistent with the 
capabilities of the src/dst FileSystems, but _that_ would be an extension.
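
A minimal sketch of that early check, assuming the copy listing is built as 
several part files on the FileSystem whose concat support the job depends on. 
The class and method names are hypothetical; only {{FileSystem#concat}} and its 
UnsupportedOperationException default come from the real API:

{code:java}
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical job-setup helper; not part of any attached patch. */
public class ConcatProbe {
  /**
   * Concatenate the listing parts into the first part during job setup.
   * If the FileSystem does not implement concat, the job fails here,
   * before submission, rather than in every task.
   */
  static void concatListingParts(Configuration conf, Path[] parts)
      throws IOException {
    if (parts.length < 2) {
      return; // nothing to stitch, and nothing to probe
    }
    FileSystem fs = parts[0].getFileSystem(conf);
    Path target = parts[0];
    Path[] rest = new Path[parts.length - 1];
    System.arraycopy(parts, 1, rest, 0, rest.length);
    try {
      // FileSystem#concat throws UnsupportedOperationException by default.
      fs.concat(target, rest);
    } catch (UnsupportedOperationException e) {
      throw new IOException("Block-parallel copy requested, but " + fs.getUri()
          + " does not support concat; failing before job submission", e);
    }
  }
}
{code}

The concatenated listing would then double as the single input file for the job.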

The early check makes sense, but false positives are worse than false 
negatives, here.

[1] Unfortunately, SequenceFile (the format distcp uses for its copy listing) 
would need some modifications to make that straightforward. There's an option 
to omit the header when the file already exists, but not one that explicitly 
and independently suppresses it.
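
For example, {{SequenceFile.Writer.appendIfExists}} skips the header only when 
the writer appends to a file that already exists; a freshly created listing 
part still starts with a header, which a later concat would splice into the 
middle of the combined file. A sketch, with the path and key/value types chosen 
to resemble the distcp listing but purely illustrative:

{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.tools.CopyListingFileStatus;

public class ListingPartSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path part = new Path(args[0]); // e.g. one listing part in the staging dir

    // appendIfExists(true) omits the header only when 'part' already exists;
    // a brand-new part is still written with a header, which is the problem
    // for concatenating parts into a single listing.
    try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
        SequenceFile.Writer.file(part),
        SequenceFile.Writer.keyClass(Text.class),
        SequenceFile.Writer.valueClass(CopyListingFileStatus.class),
        SequenceFile.Writer.appendIfExists(true))) {
      writer.append(new Text("/src/example"), new CopyListingFileStatus());
    }
  }
}
{code}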

> distcp can copy blocks in parallel
> ----------------------------------
>
>                 Key: HADOOP-11794
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11794
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 0.21.0
>            Reporter: dhruba borthakur
>            Assignee: Yongjun Zhang
>         Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, 
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, 
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take an extremely long time or eventually fail. A 
> better approach for distcp would be to copy all the source blocks in 
> parallel, and then stitch the blocks back into files at the destination via 
> the HDFS Concat API (HDFS-222).
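
A rough sketch of the block-parallel copy plus concat stitching described 
above, not taken from the attached patches. Chunk size, part naming, and the 
thread pool are illustrative, and HDFS imposes extra constraints on concat 
sources (e.g., block alignment), so the chunk size would normally be a multiple 
of the destination block size:

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.*;
import org.apache.hadoop.io.IOUtils;

/** Illustrative only; not the approach taken by the attached patches. */
public class BlockParallelCopy {
  public static void copy(Configuration conf, Path src, Path dst, long chunkSize)
      throws Exception {
    FileSystem srcFs = src.getFileSystem(conf);
    FileSystem dstFs = dst.getFileSystem(conf);
    long len = srcFs.getFileStatus(src).getLen();
    int chunks = (int) ((len + chunkSize - 1) / chunkSize);
    if (chunks == 0) {          // empty source file
      dstFs.create(dst, true).close();
      return;
    }

    // Copy each byte range into its own temporary part file, in parallel.
    ExecutorService pool = Executors.newFixedThreadPool(Math.min(chunks, 8));
    List<Future<?>> futures = new ArrayList<>();
    Path[] parts = new Path[chunks];
    for (int i = 0; i < chunks; i++) {
      final long offset = i * chunkSize;
      final long count = Math.min(chunkSize, len - offset);
      final Path part = new Path(dst.getParent(), dst.getName() + ".part" + i);
      parts[i] = part;
      futures.add(pool.submit(() -> {
        try (FSDataInputStream in = srcFs.open(src);
             FSDataOutputStream out = dstFs.create(part, true)) {
          in.seek(offset);
          IOUtils.copyBytes(in, out, count, false);
        }
        return null;
      }));
    }
    for (Future<?> f : futures) {
      f.get();                  // surface any copy failure
    }
    pool.shutdown();

    // Stitch: the first part becomes the destination, the rest are concat'ed
    // onto it. Requires concat support at the destination (HDFS-222).
    dstFs.rename(parts[0], dst);
    if (chunks > 1) {
      Path[] rest = new Path[chunks - 1];
      System.arraycopy(parts, 1, rest, 0, rest.length);
      dstFs.concat(dst, rest);
    }
  }
}
{code}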


