[ 
https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15936105#comment-15936105
 ] 

Steve Loughran commented on HADOOP-11794:
-----------------------------------------

bq. Is there any reason not to use FileSystem.concat & 
FileSystem.getFileBlockLocations ?

{{FileSystem.getFileBlockLocations()}} is something filesystems have to 
implement, otherwise basic client code fails; if they don't have locality 
they tend just to report "localhost" and a single block.

Concat, though, is barely implemented. As the [FS spec 
says|http://hadoop.apache.org/docs/r3.0.0-alpha2/hadoop-project-dist/hadoop-common/filesystem/filesystem.html#void_concatPath_p_Path_sources],
 "This is a little-used operation currently implemented only by HDFS".

Looking for subclasses of {{FileSystem.concat()}}, it looks like only hdfs, webhdfs, 
httpfs and FilterFileSystem implement it.

Supporting webhdfs would be really good, as it's the one recommended for 
cross-Hadoop-version distcp and for long-haul copies.

For now, how about having it check for HDFS and webhdfs, and reject 
anything else?
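A minimal sketch of that guard, assuming a simple scheme allow-list (the method name {{supportsConcat}} and the exact list are hypothetical, not from any patch here; a real check would look at the filesystem's URI scheme before attempting the concat path):

```java
import java.net.URI;
import java.util.Set;

public class ConcatSupportCheck {

    // Hypothetical allow-list: schemes whose FileSystem implementations are
    // known to support concat(). httpfs is reached via the webhdfs:// scheme.
    private static final Set<String> CONCAT_SCHEMES = Set.of("hdfs", "webhdfs");

    /** Returns true if the target filesystem's scheme is on the allow-list. */
    static boolean supportsConcat(URI fsUri) {
        String scheme = fsUri.getScheme();
        return scheme != null && CONCAT_SCHEMES.contains(scheme.toLowerCase());
    }

    public static void main(String[] args) {
        // hdfs is on the list; s3a is not, so a block-parallel copy that
        // relies on concat() would be rejected up front for it.
        System.out.println(supportsConcat(URI.create("hdfs://nn:8020/user/x")));
        System.out.println(supportsConcat(URI.create("s3a://bucket/key")));
    }
}
```

Failing fast like this keeps the block-parallel path from producing un-stitchable block files on stores whose {{concat()}} just throws {{UnsupportedOperationException}}.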





> distcp can copy blocks in parallel
> ----------------------------------
>
>                 Key: HADOOP-11794
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11794
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 0.21.0
>            Reporter: dhruba borthakur
>            Assignee: Yongjun Zhang
>         Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, 
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, 
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a very long time or eventually fail. A better 
> way for distcp would be to copy all the source blocks in parallel, and then 
> stitch the blocks back into files at the destination via the HDFS concat API 
> (HDFS-222).


