[ 
https://issues.apache.org/jira/browse/HADOOP-17347?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bilahari T H updated HADOOP-17347:
----------------------------------
    Description: 
Optimize read performance for the following scenarios (a minimal sketch of 
both checks follows the list)
 # Read small files completely
 Files smaller than the read buffer size can be considered small files. For 
such files it is better to read the full file into the AbfsInputStream 
buffer.
 # Read the last block if the read is for the footer
 If the read is for the last 8 bytes, read the full file completely.
 This optimizes reads for Parquet files. [Parquet file 
format|https://www.ellicium.com/parquet-file-format-structure/]
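
The decision logic shared by both scenarios can be expressed as a single 
predicate over the read position and length. The sketch below is illustrative 
only: the class and member names (AbfsReadOptimizationSketch, contentLength, 
bufferSize, FOOTER_SIZE, shouldReadFileCompletely) are assumptions for this 
example, not the actual hadoop-azure read path.

{code:java}
// Minimal, self-contained sketch of the two read optimizations described
// above. All names here are hypothetical; the real AbfsInputStream fields
// and read path may differ.
public class AbfsReadOptimizationSketch {

    // A Parquet file ends with a 4-byte footer length plus the 4-byte
    // "PAR1" magic, hence the 8-byte window mentioned in the description.
    private static final int FOOTER_SIZE = 8;

    private final long contentLength; // file size reported by the store
    private final int bufferSize;     // configured read buffer size

    public AbfsReadOptimizationSketch(long contentLength, int bufferSize) {
        this.contentLength = contentLength;
        this.bufferSize = bufferSize;
    }

    /** Scenario 1: a file smaller than the read buffer fits in one remote read. */
    boolean isSmallFile() {
        return contentLength <= bufferSize;
    }

    /** Scenario 2: a read that covers any of the last 8 bytes is a footer read. */
    boolean isFooterRead(long position, int length) {
        return position + length >= contentLength - FOOTER_SIZE;
    }

    /** Buffer the whole file when either scenario applies. */
    boolean shouldReadFileCompletely(long position, int length) {
        return isSmallFile() || isFooterRead(position, length);
    }

    public static void main(String[] args) {
        // 1 MiB file, 4 MiB buffer: small file, read completely.
        AbfsReadOptimizationSketch small =
                new AbfsReadOptimizationSketch(1L << 20, 4 << 20);
        System.out.println(small.shouldReadFileCompletely(0, 1024)); // true

        // 256 MiB file, 4 MiB buffer: only a footer read triggers the full read.
        AbfsReadOptimizationSketch large =
                new AbfsReadOptimizationSketch(256L << 20, 4 << 20);
        System.out.println(large.shouldReadFileCompletely(0, 1024));             // false
        System.out.println(large.shouldReadFileCompletely((256L << 20) - 8, 8)); // true
    }
}
{code}

Treating any read that overlaps the final 8 bytes as a footer read keeps the 
check cheap on the stream side: no Parquet metadata has to be parsed to 
decide whether to buffer the whole file.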

  was:Files smaller than the read buffer size can be considered small files. 
For such files it is better to read the full file into the AbfsInputStream 
buffer.


> ABFS: Read optimizations
> ------------------------
>
>                 Key: HADOOP-17347
>                 URL: https://issues.apache.org/jira/browse/HADOOP-17347
>             Project: Hadoop Common
>          Issue Type: Sub-task
>          Components: fs/azure
>    Affects Versions: 3.4.0
>            Reporter: Bilahari T H
>            Assignee: Bilahari T H
>            Priority: Major
>
> Optimize read performance for the following scenarios
>  # Read small files completely
>  Files smaller than the read buffer size can be considered small files. For 
> such files it is better to read the full file into the AbfsInputStream 
> buffer.
>  # Read the last block if the read is for the footer
>  If the read is for the last 8 bytes, read the full file completely.
>  This optimizes reads for Parquet files. [Parquet file 
> format|https://www.ellicium.com/parquet-file-format-structure/]


