[ 
https://issues.apache.org/jira/browse/CRUNCH-580?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15045843#comment-15045843
 ] 

Gabriel Reid commented on CRUNCH-580:
-------------------------------------

This looks like a perfectly valid use of Guava, and I don't think it makes much 
sense to block something like this because of our kill-Guava project.

I'm still pretty worried about the whole Guava situation (particularly the 
headaches I'm going to go through at work if we upgrade to v18 in Crunch), but 
as I said, I don't think that should block a useful fix for S3 users like 
this.

> FileTargetImpl#handleOutputs Inefficiency on S3NativeFileSystem
> ---------------------------------------------------------------
>
>                 Key: CRUNCH-580
>                 URL: https://issues.apache.org/jira/browse/CRUNCH-580
>             Project: Crunch
>          Issue Type: Bug
>          Components: Core, IO
>    Affects Versions: 0.13.0
>         Environment: Amazon Elastic Map Reduce
>            Reporter: Jeffrey Quinn
>            Assignee: Josh Wills
>         Attachments: CRUNCH-580.patch
>
>
> We have run into a pretty frustrating inefficiency inside of 
> org.apache.crunch.io.impl.FileTargetImpl#handleOutputs.
> This method loops over all of the partial output files and moves them to 
> their ultimate destination directories, calling 
> org.apache.hadoop.fs.FileSystem#rename(org.apache.hadoop.fs.Path, 
> org.apache.hadoop.fs.Path) on each partial output in a loop.
> This is no problem when the org.apache.hadoop.fs.FileSystem in question is 
> HDFS, where #rename is a cheap metadata operation, but with an implementation 
> such as S3NativeFileSystem it is extremely inefficient: each iteration 
> through the loop makes a single blocking S3 API call, and the loop can be 
> extremely long when there are many thousands of partial output files.
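For illustration only (this is not code from Crunch or the attached patch): the per-file rename pattern described above can be sketched against the local filesystem with java.nio, assuming a hypothetical helper named `moveAll`. The point is that the loop issues one blocking move per part file, which is cheap on HDFS but, on S3NativeFileSystem, turns into one S3 API round trip (effectively a copy plus delete) per file.

```java
import java.io.IOException;
import java.nio.file.*;

public class RenameLoopSketch {
    // Mirrors the pattern the issue describes in FileTargetImpl#handleOutputs:
    // one blocking rename per partial output file. On HDFS a rename is a cheap
    // metadata operation; on an S3-backed FileSystem each call maps to a
    // blocking S3 API request, so the loop's cost grows linearly with the
    // number of part files.
    public static int moveAll(Path srcDir, Path dstDir) throws IOException {
        int moved = 0;
        try (DirectoryStream<Path> parts = Files.newDirectoryStream(srcDir, "part-*")) {
            for (Path part : parts) {
                // One blocking call per file -- the inefficiency on S3.
                Files.move(part, dstDir.resolve(part.getFileName()),
                        StandardCopyOption.REPLACE_EXISTING);
                moved++;
            }
        }
        return moved;
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("src");
        Path dst = Files.createTempDirectory("dst");
        for (int i = 0; i < 5; i++) {
            Files.createFile(src.resolve(String.format("part-%05d", i)));
        }
        System.out.println("moved=" + moveAll(src, dst));
    }
}
```

With thousands of part files, the fix direction the patch takes (reducing or parallelizing these per-file calls) avoids paying that round-trip latency serially.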



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
