[ 
https://issues.apache.org/jira/browse/HADOOP-18028?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17556477#comment-17556477
 ] 

Steve Loughran commented on HADOOP-18028:
-----------------------------------------

assume someone with a 192 vcore system will be running something like spark or 
impala with 192 worker threads, each reading one file and writing output 
somewhere (local disk, remote disk, memory) as it does so.

also assume that the number of cores a server has will only increase over 
time; the cost of a many-core server keeps falling.

see also HADOOP-17195 and HADOOP-17937

there's no way we could afford 72 MB per file in any app with many worker 
threads (hive, spark, impala). [~mehakmeetSingh] - how much RAM was each abfs 
input stream using?
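
to make the arithmetic behind that concrete: with the scenario above, 192 
streams at 72 MB each is roughly 13.5 GB of heap for read buffers alone. a 
back-of-envelope sketch, numbers taken straight from this thread:

{code:java}
// Back-of-envelope heap cost of per-stream read buffers.
// Scenario from above: 192 worker threads, one open input stream
// each, 72 MB of buffer per stream.
public class BufferFootprint {
  public static void main(String[] args) {
    int workerThreads = 192;                  // one stream per vcore
    long perStreamBytes = 72L * 1024 * 1024;  // 72 MB per input stream
    long totalBytes = workerThreads * perStreamBytes;
    System.out.printf("aggregate buffer memory: %.1f GB%n",
        totalBytes / (1024.0 * 1024 * 1024));
    // prints ~13.5 GB -- the heap is gone on read buffers alone
  }
}
{code}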

if the default memory consumption values come down to a number everyone 
feels safe with *and* disk storage is disabled by default, then resource 
consumption is probably manageable.
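
one way to get there is a process-wide cap on prefetch buffer memory rather 
than a per-stream one, so the budget holds no matter how many streams are 
open. a minimal sketch of the idea; the pool class and its names here are 
hypothetical, not what's in the patch:

{code:java}
import java.nio.ByteBuffer;
import java.util.concurrent.Semaphore;

// Hypothetical shared buffer pool: every prefetching stream in the
// process draws from one global budget, so total buffer memory is
// bounded regardless of how many streams are open.
public class SharedBufferPool {
  private final int blockSize;
  private final Semaphore budget;  // permits = max blocks process-wide

  public SharedBufferPool(int blockSize, long maxBytes) {
    this.blockSize = blockSize;
    this.budget = new Semaphore((int) (maxBytes / blockSize));
  }

  /** Blocks until a buffer is available; under pressure, streams
   *  wait (or could fall back to on-demand reads) instead of
   *  blowing the heap. */
  public ByteBuffer acquire() throws InterruptedException {
    budget.acquire();
    return ByteBuffer.allocate(blockSize);
  }

  public void release(ByteBuffer buf) {
    budget.release();
  }
}
{code}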

have a look at the abfs AbfsInputStream and google's to see what they've had 
to deal with/tune:
https://github.com/hortonworks/bigdata-interop/blob/111d06c48fdab0280e0aa4a379cfde9e069943c3/gcs/src/main/java/com/google/cloud/hadoop/fs/gcs/GoogleHadoopFSInputStream.java
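
both connectors ended up with adaptive logic: the gcs connector's fadvise 
AUTO mode starts out sequential and flips to random-access reads on the first 
backward seek, while abfs tunes its read-ahead queue depth. a sketch of that 
backward-seek switch, illustrative only and not the connectors' actual code:

{code:java}
// Illustrative version of the "adaptive fadvise" trick: assume
// sequential I/O (big read-ahead) until the caller seeks backwards,
// then switch to random mode (small reads) for the life of the
// stream -- backward seeks are the signature of columnar formats.
public class AdaptiveReadPolicy {
  private boolean randomMode = false;
  private long lastPosition = 0;

  public void onSeek(long newPosition) {
    if (newPosition < lastPosition) {
      randomMode = true;  // backward seek => random access workload
    }
    lastPosition = newPosition;
  }

  /** How much to fetch per GET: a whole read-ahead block while
   *  sequential, only what was asked for once random. */
  public long bytesToFetch(long requested, long readAheadBlock) {
    return randomMode ? requested : Math.max(requested, readAheadBlock);
  }
}
{code}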

> High performance S3A input stream with prefetching & caching
> ------------------------------------------------------------
>
>                 Key: HADOOP-18028
>                 URL: https://issues.apache.org/jira/browse/HADOOP-18028
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: fs/s3
>            Reporter: Bhalchandra Pandit
>            Assignee: Bhalchandra Pandit
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 13h 50m
>  Remaining Estimate: 0h
>
> I work for Pinterest. I developed a technique for vastly improving read 
> throughput when reading from the S3 file system. It not only helps the 
> sequential read case (like reading a SequenceFile) but also significantly 
> improves read throughput of a random access case (like reading Parquet). This 
> technique has been very useful in significantly improving efficiency of the 
> data processing jobs at Pinterest. 
>  
> I would like to contribute that feature to Apache Hadoop. More details on 
> this technique are available in this blog I wrote recently:
> [https://medium.com/pinterest-engineering/improving-efficiency-and-reducing-runtime-using-s3-read-optimization-b31da4b60fa0]
>  


