[ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15936385#comment-15936385 ]

Steve Loughran commented on HADOOP-11794:
-----------------------------------------

One of the problems we have here is that there's no API for implementations to 
declare what they do; HADOOP-9565 has discussed this, but it has stalled. As it 
stands, there is no way to determine whether an FS implements a feature without 
probing it.

There is always the option of doing exactly that: send in an invalid concat() 
request and differentiate between UnsupportedOperationException and any other 
response, then assume that "any other response" means the operation is 
implemented but the arguments were invalid. concat("/", new Path[0]) should be 
enough.
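
A rough sketch of that probe, purely illustrative (the helper name is made up, 
and it assumes the base FileSystem.concat() signals "not implemented" with 
UnsupportedOperationException):

  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  // Illustrative only; not an existing Hadoop API.
  public final class ConcatProbe {
    static boolean probablySupportsConcat(FileSystem fs) {
      try {
        fs.concat(new Path("/"), new Path[0]);
        return true;                  // a no-op concat was accepted
      } catch (UnsupportedOperationException e) {
        return false;                 // the default FileSystem.concat(): not implemented
      } catch (Exception e) {
        return true;                  // arguments rejected, so the call got far enough to exist
      }
    }
  }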

Omkar, are you planning to do a new concat? It might be that for different 
filesystems there are better things to do.
For S3, we could attempt to do the multipart PUT operations in parallel, though 
that is complicated somewhat by the fact that you need to know the upload ID 
before any part of the operation begins. If you were doing the upload from a 
single machine, the block output stream already writes data in blocks anyway.
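
As an illustration (with the AWS SDK for Java; the bucket, key and part file 
below are placeholders): the upload ID from the initiate call has to exist 
before any part goes up, but the parts themselves can then be uploaded from 
parallel tasks, all quoting that one ID.

  import com.amazonaws.services.s3.AmazonS3;
  import com.amazonaws.services.s3.AmazonS3ClientBuilder;
  import com.amazonaws.services.s3.model.*;
  import java.io.File;
  import java.util.Collections;

  public class S3MultipartSketch {
    public static void main(String[] args) {
      AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

      // The upload ID must exist before any part can be uploaded.
      String uploadId = s3.initiateMultipartUpload(
          new InitiateMultipartUploadRequest("bucket", "key")).getUploadId();

      // Each part could then be uploaded from a separate task, quoting the same ID.
      File chunk = new File("/tmp/part-00001");
      UploadPartResult part1 = s3.uploadPart(new UploadPartRequest()
          .withBucketName("bucket").withKey("key")
          .withUploadId(uploadId).withPartNumber(1)
          .withFile(chunk).withPartSize(chunk.length()));

      // Completing the upload stitches the parts together in part-number order.
      s3.completeMultipartUpload(new CompleteMultipartUploadRequest(
          "bucket", "key", uploadId,
          Collections.singletonList(part1.getPartETag())));
    }
  }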

I don't know about other object stores, but we can and should think about how 
best to support them, even assuming their parallel upload mechanisms are 
similarly unique. It may be that the code needs to be reworked to support 
different partitioning & scheduling for different endpoints. FWIW, I've been 
contemplating what it would take to do one in Spark, because that might let me 
get away with starting the upload before the listing has even finished, and 
rescheduling work to wherever there is capacity, rather than deciding up front 
how to break things up. Supporting parallelised chunk upload wasn't something 
I'd considered though. An extra complication.

> distcp can copy blocks in parallel
> ----------------------------------
>
>                 Key: HADOOP-11794
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11794
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 0.21.0
>            Reporter: dhruba borthakur
>            Assignee: Yongjun Zhang
>         Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, 
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, 
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a very long time or eventually fail. A better 
> approach for distcp would be to copy all the source blocks in parallel, and 
> then stitch the blocks back into files at the destination via the HDFS Concat 
> API (HDFS-222).
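
For reference, the destination-side stitch via that API would look roughly like 
the following; the chunk paths are made up, and HDFS concat has preconditions 
of its own on the source files:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class ConcatStitchSketch {
    public static void main(String[] args) throws Exception {
      FileSystem fs = FileSystem.get(new Configuration());

      // Target already holds the first chunk under its final name; the rest are temp files.
      Path target = new Path("/dest/bigfile");
      Path[] chunks = {
          new Path("/dest/.distcp.tmp.bigfile.chunk1"),
          new Path("/dest/.distcp.tmp.bigfile.chunk2"),
      };

      // Moves the chunks' blocks onto the end of target and removes the chunk files.
      fs.concat(target, chunks);
    }
  }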


