[ https://issues.apache.org/jira/browse/HADOOP-13600?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16324725#comment-16324725 ]

Steve Loughran commented on HADOOP-13600:
-----------------------------------------

I haven't looked at this for a while; I'll try to take a look in detail, 
especially now that the committer is merged in.

One thing I've realised is that our copy operation isn't doing what we do 
elsewhere: shuffle the list of files so the workload is scattered across more 
shards in the bucket, which reduces the risk of throttling.

* from the list, grab the first batch to copy (say, the same number of files 
as we can delete in a single batch)
* pick out the few largest files in that batch and start copying them first
* shuffle the rest of the batch

This is what I've done in 
[cloudup|https://github.com/steveloughran/cloudup/blob/master/src/main/java/org/apache/hadoop/tools/cloudup/Cloudup.java],
and I believe it makes for a fast upload.
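
A rough sketch of that ordering in plain Java (not the actual S3AFileSystem 
code; the Entry class, orderBatch method and largestFirst parameter are 
illustrative names only):

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

/**
 * Illustrative ordering of a copy batch: start the largest files first so the
 * long copies begin early, then shuffle the remainder so requests are spread
 * across bucket shards rather than hitting them in listing order.
 */
class CopyOrdering {

  /** Hypothetical file entry: object key plus length. */
  static class Entry {
    final String key;
    final long length;
    Entry(String key, long length) { this.key = key; this.length = length; }
  }

  static List<Entry> orderBatch(List<Entry> batch, int largestFirst) {
    List<Entry> sorted = new ArrayList<>(batch);
    // biggest files to the front
    sorted.sort(Comparator.comparingLong((Entry e) -> e.length).reversed());
    int head = Math.min(largestFirst, sorted.size());
    List<Entry> ordered = new ArrayList<>(sorted.subList(0, head));
    List<Entry> rest = new ArrayList<>(sorted.subList(head, sorted.size()));
    // scatter the rest of the batch
    Collections.shuffle(rest);
    ordered.addAll(rest);
    return ordered;
  }
}
{code}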

> S3a rename() to copy files in a directory in parallel
> -----------------------------------------------------
>
>                 Key: HADOOP-13600
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13600
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/s3
>    Affects Versions: 2.7.3
>            Reporter: Steve Loughran
>            Assignee: Sahil Takiar
>         Attachments: HADOOP-13600.001.patch
>
>
> Currently a directory rename does a one-by-one copy, making the request 
> O(files * data). If the copy operations were launched in parallel, the 
> duration of the copy may be reducible to the duration of the longest copy. 
> For a directory with many files, this will be significant.
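
A minimal sketch of the parallel-copy idea from the description above (plain 
Java, not the S3AFileSystem implementation; CopyOp, copyAll and the 
directory-prefix handling are purely illustrative): submit each per-file copy 
to a bounded thread pool and wait for them all.

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

/**
 * Illustrative parallel copy: each per-file copy runs on a bounded thread
 * pool, so the overall duration approaches that of the single largest copy
 * instead of the sum of all copies.
 */
class ParallelCopySketch {

  /** Hypothetical per-object copy operation. */
  interface CopyOp {
    void copy(String srcKey, String dstKey) throws Exception;
  }

  static void copyAll(List<String> srcKeys, String srcDir, String dstDir,
                      CopyOp op, int threads) throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(threads);
    try {
      List<Future<?>> futures = new ArrayList<>();
      for (String key : srcKeys) {
        // map the source key under srcDir to the matching key under dstDir
        String dst = dstDir + key.substring(srcDir.length());
        futures.add(pool.submit(() -> { op.copy(key, dst); return null; }));
      }
      for (Future<?> f : futures) {
        f.get();   // block until done; propagates the first failure
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}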


