Steve Loughran updated HADOOP-13525:
    Parent: HADOOP-15220  (was: HADOOP-14831)

> Optimize uses of FS operations in the ASF analysis frameworks and libraries 
> ----------------------------------------------------------------------------
>                 Key: HADOOP-13525
>                 URL: https://issues.apache.org/jira/browse/HADOOP-13525
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs, fs/s3
>    Affects Versions: 2.8.1
>            Reporter: Steve Loughran
>            Assignee: Steve Loughran
>            Priority: Major
> Review how applications use the Hadoop FS API to access filesystems; 
> identify suboptimal uses and tune them for better performance against HDFS 
> and object stores.
> * Assume arbitrary Hadoop 2.x releases: make no changes which are known to 
> make operations on older versions of Hadoop slower
> * Do propose changes which deliver speedups in later versions of Hadoop 
> while neither impacting older versions nor risking scalability problems.
> * Add more tests, especially scalable ones which also display metrics.
> * Use standard benchmarks and optimization tools to identify hotspots.
> * Use FS behaviour as verified in the FS contract tests as evidence that 
> filesystems correctly implement the Hadoop FS APIs. If an API call is used 
> in a way which hints at an expectation of different/untested behaviour, 
> leave it alone and add new tests to the Hadoop FS contract to determine 
> cross-FS semantics.
> * Focus on the startup, split calculation and directory scanning operations: 
> the ones which slow down entire queries.
> * Eliminate uses of {{isDirectory()}}, {{getLength()}} and {{exists()}} when 
> a follow-on operation ({{getFileStatus()}}, {{delete()}}, ...) makes them 
> redundant.
> * Assume that {{FileStatus}} entries are not cached; the cost of creating 
> them is one RPC call against HDFS and one or more HTTPS requests against 
> object stores.
> * Locate calls to the listing operations, identify speedups, especially on 
> recursive directory scans.
> * Identify suboptimal seek patterns (backwards as well as forwards) and 
> attempt to reduce/eliminate through reordering and result caching.
> * Try to reuse the results of previous operations (e.g. {{FileStatus}} 
> instances) in follow-on calls.
> * Commonly used file formats (e.g. ORC) will have transitive benefits.
> * Have frameworks use predicate pushdown where it delivers speedups.
> * Document the best practices identified and implemented.
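The point above about eliminating {{exists()}} probes can be sketched with a stand-in filesystem that counts remote calls. {{StubFileSystem}} and its methods below are illustrative stand-ins, not the real {{org.apache.hadoop.fs.FileSystem}} API; the real {{delete()}} likewise returns whether anything was removed, so a preceding {{exists()}} only adds a round trip.

```java
// Sketch: why an exists() check before delete() doubles the remote-call cost.
// StubFileSystem is a hypothetical stand-in where every method is one RPC.
import java.util.HashSet;
import java.util.Set;

public class RedundantExistsDemo {
    /** Minimal stand-in for a remote filesystem; every method costs one RPC. */
    static class StubFileSystem {
        final Set<String> paths = new HashSet<>();
        int rpcCalls = 0;

        boolean exists(String path) { rpcCalls++; return paths.contains(path); }
        boolean delete(String path) { rpcCalls++; return paths.remove(path); }
    }

    public static void main(String[] args) {
        StubFileSystem fs = new StubFileSystem();
        fs.paths.add("/tmp/out");

        // Suboptimal: probe first, then act -- two round trips.
        if (fs.exists("/tmp/out")) {
            fs.delete("/tmp/out");
        }
        System.out.println("with exists() check: " + fs.rpcCalls + " RPCs");

        // Better: delete() already reports whether anything was removed -- one round trip.
        fs.rpcCalls = 0;
        fs.paths.add("/tmp/out");
        boolean deleted = fs.delete("/tmp/out");
        System.out.println("delete() alone: " + fs.rpcCalls + " RPC, deleted=" + deleted);
    }
}
```

The same reasoning applies to {{isDirectory()}} before a recursive delete, or {{getLength()}} before an open: if the follow-on call reports the outcome anyway, the probe is pure overhead.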
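Similarly, the "reuse {{FileStatus}} instances" guidance can be sketched by comparing one bulk listing against a per-file metadata probe. All class and method names here are again simplified stand-ins for illustration, not the Hadoop API, though the pattern mirrors using the statuses returned by a listing rather than re-calling {{getFileStatus()}} per file.

```java
// Sketch: reuse the metadata a listing already returned instead of
// re-fetching each file's status. Hypothetical stand-in classes.
import java.util.*;

public class ReuseFileStatusDemo {
    static class FileStatus {
        final String path; final long len;
        FileStatus(String path, long len) { this.path = path; this.len = len; }
    }

    static class StubFileSystem {
        final Map<String, Long> files = new LinkedHashMap<>();
        int rpcCalls = 0;

        /** One bulk RPC returns every entry with its metadata attached. */
        List<FileStatus> listFiles(String dir) {
            rpcCalls++;
            List<FileStatus> out = new ArrayList<>();
            for (Map.Entry<String, Long> e : files.entrySet())
                out.add(new FileStatus(e.getKey(), e.getValue()));
            return out;
        }

        /** One RPC per file. */
        FileStatus getFileStatus(String path) {
            rpcCalls++;
            return new FileStatus(path, files.get(path));
        }
    }

    static long totalSizeNaive(StubFileSystem fs) {
        long total = 0;
        for (FileStatus st : fs.listFiles("/data"))       // 1 RPC
            total += fs.getFileStatus(st.path).len;       // +1 RPC per file
        return total;
    }

    static long totalSizeReusing(StubFileSystem fs) {
        long total = 0;
        for (FileStatus st : fs.listFiles("/data"))       // 1 RPC; statuses reused
            total += st.len;
        return total;
    }

    public static void main(String[] args) {
        StubFileSystem fs = new StubFileSystem();
        for (int i = 0; i < 100; i++) fs.files.put("/data/part-" + i, 128L);

        totalSizeNaive(fs);
        System.out.println("naive: " + fs.rpcCalls + " RPCs");
        fs.rpcCalls = 0;
        totalSizeReusing(fs);
        System.out.println("reusing: " + fs.rpcCalls + " RPC");
    }
}
```

For N files the naive loop costs N+1 remote calls against this stand-in versus one for the reusing variant; split calculation over large directory trees is exactly where this gap hurts.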
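For the seek-pattern bullet, one reordering tactic can be sketched as sorting positioned reads into ascending offset order, assuming a stream that pays a heavy penalty for backward seeks (object-store input streams often have to abort and reopen the HTTP connection). {{StubStream}} is a hypothetical stand-in that merely counts backward seeks.

```java
// Sketch: reorder a batch of positioned reads so a backward-seek-hostile
// stream never has to go backwards. Hypothetical stand-in stream.
import java.util.*;

public class SeekReorderDemo {
    static class StubStream {
        long pos = 0;
        int backwardSeeks = 0;

        void seek(long target) {
            if (target < pos) backwardSeeks++;  // would reopen the connection on an object store
            pos = target;
        }
    }

    /** Seek to each offset, optionally sorting the batch first; returns backward-seek count. */
    static int readAt(StubStream in, List<Long> offsets, boolean sortFirst) {
        in.pos = 0;
        in.backwardSeeks = 0;
        List<Long> order = new ArrayList<>(offsets);
        if (sortFirst) Collections.sort(order);
        for (long off : order) in.seek(off);
        return in.backwardSeeks;
    }

    public static void main(String[] args) {
        // Column chunks requested footer-first, as a columnar reader might issue them.
        List<Long> offsets = Arrays.asList(9000L, 100L, 4000L, 200L);
        StubStream in = new StubStream();
        System.out.println("as issued: " + readAt(in, offsets, false) + " backward seeks");
        System.out.println("reordered: " + readAt(in, offsets, true) + " backward seeks");
    }
}
```

Reordering is only safe when the reads are independent of one another; caching already-read ranges (e.g. a file footer) is the complementary tactic when order cannot be changed.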

This message was sent by Atlassian JIRA
