[ https://issues.apache.org/jira/browse/HADOOP-18028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17580760#comment-17580760 ]

ASF GitHub Bot commented on HADOOP-18028:
-----------------------------------------

steveloughran opened a new pull request, #4752:
URL: https://github.com/apache/hadoop/pull/4752

   
   Wrap up the merge of the prefetch feature branch, rebased to trunk.
   
   -----
   
   This is the preview release of the HADOOP-18028 S3A performance input stream.
   It is still stabilizing, but it is ready to test.
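   
   Prefetching is off by default after HADOOP-18254 (listed below), so testers have to switch it on explicitly. Here is a minimal sketch of doing that from Java, assuming the `fs.s3a.prefetch.enabled` option added on the branch and a placeholder bucket path of your own:
   
   ```java
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.FileSystem;
   import org.apache.hadoop.fs.Path;
   
   public class PrefetchReadSketch {
     public static void main(String[] args) throws Exception {
       Configuration conf = new Configuration();
       // assumes the branch's enable switch; prefetching is off by default (HADOOP-18254)
       conf.setBoolean("fs.s3a.prefetch.enabled", true);
   
       // placeholder path: point this at a bucket you control
       Path path = new Path("s3a://your-bucket/dataset/file.parquet");
       try (FileSystem fs = path.getFileSystem(conf);
            FSDataInputStream in = fs.open(path)) {
         byte[] buffer = new byte[8192];
         int bytesRead = in.read(buffer);
         System.out.println("read " + bytesRead + " bytes");
       }
     }
   }
   ```
   
   The same option can equally be set in core-site.xml; nothing else in the client code should need to change.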
   
   Contains
   
   HADOOP-18028. High performance S3A input stream (#4109)
        Contributed by Bhalchandra Pandit.
   
   HADOOP-18180. Replace use of twitter util-core with java futures (#4115)
        Contributed by PJ Fanning.
   
   HADOOP-18177. Document prefetching architecture. (#4205)
        Contributed by Ahmar Suhail
   
   HADOOP-18175. fix test failures with prefetching s3a input stream (#4212)
        Contributed by Monthon Klongklaew
   
   HADOOP-18231. S3A prefetching: fix failing tests & drain stream async. (#4386)
   
        * adds in new test for prefetching input stream
        * creates streamStats before opening stream
        * updates numBlocks calculation method
        * fixes ITestS3AOpenCost.testOpenFileLongerLength
        * drains stream async
        * fixes failing unit test
   
        Contributed by Ahmar Suhail
   
   HADOOP-18254. Disable S3A prefetching by default. (#4469)
        Contributed by Ahmar Suhail
   
   HADOOP-18190. Collect IOStatistics during S3A prefetching (#4458)
   
        This adds iOStatisticsConnection to the S3PrefetchingInputStream class,
        with new statistic names in StreamStatistics.

        This stream is not (yet) IOStatisticsContext aware; a sketch of querying
        the new statistics follows the commit list below.
   
        Contributed by Ahmar Suhail
   
   HADOOP-18379 rebase feature/HADOOP-18028-s3a-prefetch to trunk
   HADOOP-18187. Convert s3a prefetching to use JavaDoc for fields and enums.
   HADOOP-18318. Update class names to be clear they belong to S3A prefetching
        Contributed by Steve Loughran
   
   Change-Id: I6511c51c3580c57eb72e8ea686c88e3917d12a06
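   
   For the HADOOP-18190 statistics above, the values should be reachable through the standard IOStatistics helpers rather than anything prefetch-specific. A minimal sketch of dumping whatever a stream publishes, assuming the stream was opened as in the earlier snippet and that the wrapped prefetch stream is picked up by `retrieveIOStatistics` (`PrefetchStatsSketch` is just an illustrative name):
   
   ```java
   import org.apache.hadoop.fs.FSDataInputStream;
   import org.apache.hadoop.fs.statistics.IOStatistics;
   
   import static org.apache.hadoop.fs.statistics.IOStatisticsLogging.ioStatisticsToPrettyString;
   import static org.apache.hadoop.fs.statistics.IOStatisticsSupport.retrieveIOStatistics;
   
   public final class PrefetchStatsSketch {
   
     private PrefetchStatsSketch() {
     }
   
     /** Print whatever IOStatistics the (possibly prefetching) stream exposes. */
     public static void dumpStreamStatistics(FSDataInputStream in) {
       // assumes the prefetching stream is exposed as an IOStatistics source;
       // retrieveIOStatistics() returns null if the stream publishes no statistics
       IOStatistics stats = retrieveIOStatistics(in);
       if (stats == null) {
         System.out.println("stream does not publish IOStatistics");
       } else {
         System.out.println(ioStatisticsToPrettyString(stats));
       }
     }
   }
   ```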
   
   
   
   ### How was this patch tested?
   
   S3 London with `-Dparallel-tests -DtestsThreadCount=12 -Dscale`
   
   I saw a failure of `ITestS3APrefetchingInputStream.testRandomReadLargeFile()` on a test run with threads = 8: the assertion that a gauge was empty failed.
   
   If this happens again it will need investigating.
   
   ### For code changes:
   
   - [X] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [X] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
   
   




> High performance S3A input stream with prefetching & caching
> ------------------------------------------------------------
>
>                 Key: HADOOP-18028
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18028
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>            Reporter: Bhalchandra Pandit
>            Assignee: Bhalchandra Pandit
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 14.5h
>  Remaining Estimate: 0h
>
> I work for Pinterest. I developed a technique for vastly improving read 
> throughput when reading from the S3 file system. It not only helps the 
> sequential read case (like reading a SequenceFile) but also significantly 
> improves read throughput in the random access case (like reading Parquet). 
> This technique has been very useful in significantly improving the efficiency 
> of data processing jobs at Pinterest.
>  
> I would like to contribute that feature to Apache Hadoop. More details on 
> this technique are available in a blog post I wrote recently:
> [https://medium.com/pinterest-engineering/improving-efficiency-and-reducing-runtime-using-s3-read-optimization-b31da4b60fa0]
>  


