[ 
https://issues.apache.org/jira/browse/HIVE-26699?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17647671#comment-17647671
 ] 

Steve Loughran commented on HIVE-26699:
---------------------------------------

the API itself went into Hadoop earlier, in 3.3.0 (HADOOP-15229)

if you are only building on Hadoop 3.3+ you already have the API; it's just that the 
read policy is an s3a-only option, set with opt("fs.s3a.experimental.input.fadvise", "sequential")
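
For reference, a minimal sketch of that call, assuming a Hadoop 3.3.x client and an 
existing s3a path (class name and argument handling here are illustrative):

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class SequentialOpen {
    public static void main(String[] args) throws Exception {
      Path path = new Path(args[0]);              // e.g. s3a://bucket/metadata.json
      FileSystem fs = path.getFileSystem(new Configuration());

      // openFile() is the builder API from HADOOP-15229 (Hadoop 3.3.0).
      // opt() marks the key as optional: filesystems that don't recognise it ignore it.
      FSDataInputStream in = fs.openFile(path)
          .opt("fs.s3a.experimental.input.fadvise", "sequential")
          .build()                                // CompletableFuture<FSDataInputStream>
          .get();                                 // wait for the stream
      try {
        // ... read the stream sequentially ...
      } finally {
        in.close();
      }
    }
  }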

HADOOP-16202:
* defined some standard openFile() options for all filesystems to recognise and 
optionally support
* defined the idea that the read policy should be an ordered list of policies, so 
we can add new ones later and a filesystem uses the first one it understands
* added file length as an option, rather than just a FileStatus
* added split start/end (nothing uses them yet, but prefetchers should know not to 
prefetch past the split end)
* fixed every use in Hadoop itself to say "whole-file" when reading the whole 
file and "sequential" when doing sequential reads. That addresses a bug where, on a 
Hive cluster with s3a pinned to random, distcp and YARN localization both 
underperform. (See the sketch after this list.)
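
A minimal sketch of the standard options, assuming a Hadoop release that includes 
HADOOP-16202 (on older releases the unknown optional keys are simply ignored); class 
name and argument handling are illustrative:

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.fs.FSDataInputStream;
  import org.apache.hadoop.fs.FileStatus;
  import org.apache.hadoop.fs.FileSystem;
  import org.apache.hadoop.fs.Path;

  public class WholeFileOpen {
    public static void main(String[] args) throws Exception {
      Path path = new Path(args[0]);
      FileSystem fs = path.getFileSystem(new Configuration());
      FileStatus status = fs.getFileStatus(path);

      // Ordered read-policy list: the filesystem picks the first policy it
      // understands, so new policies can be added without breaking older clients.
      // Passing the known length lets s3a skip the HEAD request on open.
      FSDataInputStream in = fs.openFile(path)
          .opt("fs.option.openfile.read.policy", "whole-file, sequential")
          .opt("fs.option.openfile.length", Long.toString(status.getLen()))
          .build()
          .get();
      try {
        // ... parse the whole JSON file ...
      } finally {
        in.close();
      }
    }
  }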

is Hive Hadoop 3.3.x+ only yet?



> Iceberg: S3 fadvise can hurt JSON parsing significantly in DWX
> --------------------------------------------------------------
>
>                 Key: HIVE-26699
>                 URL: https://issues.apache.org/jira/browse/HIVE-26699
>             Project: Hive
>          Issue Type: Improvement
>            Reporter: Rajesh Balamohan
>            Priority: Major
>
> Hive reads JSON metadata information (TableMetadataParser::read()) multiple 
> times, e.g. during query compilation, AM split computation, stats computation, 
> and during commits.
>  
> With large JSON files (due to multiple inserts), this takes much longer on the 
> S3 filesystem with "fs.s3a.experimental.input.fadvise" set to "random" (e.g. on 
> the order of 10x). To be on the safer side, it would be good to set this to 
> "normal" mode in configs when reading Iceberg tables.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
