[ https://issues.apache.org/jira/browse/HADOOP-11794?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15935473#comment-15935473 ]

Yongjun Zhang commented on HADOOP-11794:
----------------------------------------

Thanks much for reviewing and trying it out, [~ste...@apache.org] and [~omkarksa]!

{quote}
this is an opportunity to switch distcp over to using the slf4j logger class; 
existing logging can be left alone, but all new logs can switch to the inline 
logging
{quote}
Since this jira has been going on for a long time, I hope we can address the 
logger issue in a separate follow-up jira.
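For reference, here is a minimal sketch of the slf4j style being suggested (the 
class name is illustrative, not from the patch): parameterized logging removes 
the need for isDebugEnabled() guards, since the arguments are only formatted 
when the level is enabled.
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CopyListingSketch {
  private static final Logger LOG =
      LoggerFactory.getLogger(CopyListingSketch.class);

  void report(String source, long bytes) {
    // No LOG.isDebugEnabled() guard needed: the message is only
    // formatted if DEBUG logging is actually enabled.
    LOG.debug("Copied {} ({} bytes)", source, bytes);
  }
}
{code}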

{quote}
What does "YJD ls before distcp" in tests mean?
{quote}
Good catch, I forgot to drop some debugging stuff in the test code. Will fix in 
the next rev.

{quote}
Does it still cleanup on a failure? If not, what is the final state of the call 
& does it matter
{quote}
It does not really matter since the test has already failed, but cleaning up 
would be OK too.

{quote}
in the s3a tests we now have a -Pscale profile for scalable tests, and can set 
file sizes. It might be nice to have here, but it's a complex piece of work: 
not really justifiable except as a bigger set of scale tests
{quote}
Scale tests are a good thing to do; the unit tests in this patch mostly focus 
on functionality.

{quote}
5. Observed the following compatibility issues:
a. You are checking for instance of DistributedFileSystem in many places, and 
all other FileSystem implementations don't implement DistributedFileSystem.
i. Could this be changed to something more compatible with other 
implementations of FileSystem?
{quote}
The main reason for checking DistributedFileSystem is its support for 
getBlockLocations and the concat feature. I'm not sure whether we can assume 
that other FileSystem implementations support those.
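
As one possible direction only (a sketch, not what the current patch does): the 
base FileSystem class declares both getFileBlockLocations and concat, and 
concat throws UnsupportedOperationException when a filesystem does not support 
it, so a capability probe could replace the instanceof check:
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcatProbeSketch {
  /**
   * Try to stitch the chunk files onto the target. Returns false when the
   * filesystem does not support concat, so the caller can fall back to a
   * whole-file copy instead of checking instanceof DistributedFileSystem.
   */
  static boolean tryConcat(FileSystem fs, Path target, Path[] chunks)
      throws IOException {
    try {
      fs.concat(target, chunks);
      return true;
    } catch (UnsupportedOperationException e) {
      return false;
    }
  }
}
{code}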
  
{quote}
b. You are using the new DFSUtilClient, which makes DistCp incompatible with 
older versions of Hadoop.
i. Can this be changed to be backward compatible?
{quote}
The current patch is for trunk, where client and server code are separated. 
When we backport this change to other versions of Hadoop, we can make the 
change accordingly, for example, to use DFSUtil instead.
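
Purely as an illustration of the kind of backport shim that could work (none of 
this is in the patch; the class and method names here are hypothetical):
{code:java}
public class DfsUtilShimSketch {
  /**
   * Prefer trunk's DFSUtilClient; fall back to DFSUtil on older branches.
   * Which methods are actually needed depends on the patch; this only
   * shows how the class could be resolved at runtime.
   */
  static Class<?> resolveDfsUtilClass() {
    try {
      return Class.forName("org.apache.hadoop.hdfs.DFSUtilClient");
    } catch (ClassNotFoundException e) {
      try {
        return Class.forName("org.apache.hadoop.hdfs.DFSUtil");
      } catch (ClassNotFoundException e2) {
        throw new IllegalStateException("No DFS utility class found", e2);
      }
    }
  }
}
{code}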

{quote}
6. If the compatibility issues are addressed, the new DistCp with your feature 
would be available for other FileSystem implementations as well as backward 
compatible.
a. I was able to make little modifications to your patch and got it working 
with ADLS.
{quote}
Good work there! Glad to hear that it works for you with only small 
modifications. I think we can probably commit this patch first, and then do the 
other work as improvement jiras.

Thanks again!

> distcp can copy blocks in parallel
> ----------------------------------
>
>                 Key: HADOOP-11794
>                 URL: https://issues.apache.org/jira/browse/HADOOP-11794
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: tools/distcp
>    Affects Versions: 0.21.0
>            Reporter: dhruba borthakur
>            Assignee: Yongjun Zhang
>         Attachments: HADOOP-11794.001.patch, HADOOP-11794.002.patch, 
> HADOOP-11794.003.patch, HADOOP-11794.004.patch, HADOOP-11794.005.patch, 
> HADOOP-11794.006.patch, HADOOP-11794.007.patch, HADOOP-11794.008.patch, 
> MAPREDUCE-2257.patch
>
>
> The minimum unit of work for a distcp task is a file. We have files that are 
> greater than 1 TB with a block size of 1 GB. If we use distcp to copy these 
> files, the tasks either take a very long time or eventually fail. A better 
> way for distcp would be to copy all the source blocks in parallel, and then 
> stitch the blocks back into files at the destination via the HDFS Concat API 
> (HDFS-222).
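
To make the quoted description concrete, here is a rough sketch (illustrative 
only, not the attached patch) of deriving per-block chunk ranges, where each 
range could become an independent copy task and the resulting chunk files would 
later be stitched back together with concat:
{code:java}
import java.io.IOException;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockChunkSketch {
  /**
   * One {offset, length} pair per block of the source file; each pair is a
   * candidate unit of work for a parallel copy task.
   */
  static long[][] chunkRanges(FileSystem fs, Path file) throws IOException {
    FileStatus st = fs.getFileStatus(file);
    BlockLocation[] blocks = fs.getFileBlockLocations(st, 0, st.getLen());
    long[][] ranges = new long[blocks.length][2];
    for (int i = 0; i < blocks.length; i++) {
      ranges[i][0] = blocks[i].getOffset();
      ranges[i][1] = blocks[i].getLength();
    }
    return ranges;
  }
}
{code}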