[
https://issues.apache.org/jira/browse/HADOOP-18028?focusedWorklogId=748516&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-748516
]
ASF GitHub Bot logged work on HADOOP-18028:
-------------------------------------------
Author: ASF GitHub Bot
Created on: 28/Mar/22 10:05
Start Date: 28/Mar/22 10:05
Worklog Time Spent: 10m
Work Description: steveloughran merged pull request #4109:
URL: https://github.com/apache/hadoop/pull/4109
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]
Issue Time Tracking
-------------------
Worklog Id: (was: 748516)
Time Spent: 13h 20m (was: 13h 10m)
> High performance S3A input stream with prefetching & caching
> ------------------------------------------------------------
>
> Key: HADOOP-18028
> URL: https://issues.apache.org/jira/browse/HADOOP-18028
> Project: Hadoop Common
> Issue Type: Improvement
> Components: fs/s3
> Reporter: Bhalchandra Pandit
> Assignee: Bhalchandra Pandit
> Priority: Major
> Labels: pull-request-available
> Time Spent: 13h 20m
> Remaining Estimate: 0h
>
> I work for Pinterest. I developed a technique that vastly improves read
> throughput when reading from the S3 file system. It not only helps the
> sequential read case (such as reading a SequenceFile) but also significantly
> improves read throughput in the random-access case (such as reading Parquet).
> This technique has substantially improved the efficiency of data processing
> jobs at Pinterest.
>
> I would like to contribute this feature to Apache Hadoop. More details on
> the technique are available in a blog post I wrote recently:
> [https://medium.com/pinterest-engineering/improving-efficiency-and-reducing-runtime-using-s3-read-optimization-b31da4b60fa0]
>
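The merged pull request #4109 brings this prefetching input stream into the S3A connector. As a rough illustration of the idea described above (read the object in fixed-size blocks, prefetch the upcoming blocks asynchronously, and serve reads from an in-memory block cache), a minimal sketch might look like the code below. All class and method names here are hypothetical and are not the actual Hadoop/S3A API; the real implementation lives in hadoop-aws under HADOOP-18028.

{code:java}
import java.io.IOException;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

/**
 * Illustrative sketch of a block-prefetching, block-caching input stream.
 * Names and structure are hypothetical, not the Hadoop S3A classes.
 */
class PrefetchingBlockStream {
  private final long blockSize;          // e.g. 8 MB per block
  private final int prefetchCount;       // how many blocks to read ahead
  private final BlockReader reader;      // performs one ranged GET per block
  private final ExecutorService pool = Executors.newFixedThreadPool(4);
  // Cache of block number -> pending or completed block data.
  private final Map<Long, CompletableFuture<byte[]>> cache = new ConcurrentHashMap<>();

  PrefetchingBlockStream(BlockReader reader, long blockSize, int prefetchCount) {
    this.reader = reader;
    this.blockSize = blockSize;
    this.prefetchCount = prefetchCount;
  }

  /** Read one byte at the given absolute position (sequential or random). */
  int readAt(long position) throws IOException {
    long blockNumber = position / blockSize;
    // Kick off asynchronous prefetches for the blocks that follow.
    for (long b = blockNumber + 1; b <= blockNumber + prefetchCount; b++) {
      cache.computeIfAbsent(b, this::fetchAsync);
    }
    // Serve the current read from the cache, fetching the block if needed.
    byte[] block = cache.computeIfAbsent(blockNumber, this::fetchAsync).join();
    int offset = (int) (position % blockSize);
    return offset < block.length ? (block[offset] & 0xFF) : -1;
  }

  private CompletableFuture<byte[]> fetchAsync(long blockNumber) {
    return CompletableFuture.supplyAsync(
        () -> reader.readBlock(blockNumber * blockSize, blockSize), pool);
  }

  /** Abstraction over a ranged GET against the object store. */
  interface BlockReader {
    byte[] readBlock(long startOffset, long length);
  }
}
{code}

Sequential reads benefit because the next blocks are already in flight by the time they are needed; random reads (for example Parquet footers and column chunks) benefit because a block that has been touched stays cached, so nearby reads avoid extra remote GETs.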
--
This message was sent by Atlassian Jira
(v8.20.1#820001)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]