[
https://issues.apache.org/jira/browse/PARQUET-2171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17578470#comment-17578470
]
Timothy Miller commented on PARQUET-2171:
-----------------------------------------
This might synergize well with the bulk I/O features I've been adding to
ParquetMR. Some of the initial work is already up in PRs, and the rest of the
plan can be found at
[https://docs.google.com/document/d/1fBGpF_LgtfaeHnPD5CFEIpA2Ga_lTITmFdFIcO9Af-g/edit?usp=sharing]
I determined what to optimize through profiling, and I have run experiments on
the new implementation. Glancing through your Hadoop commits, I noticed that you
use ByteBuffer heavily. In my experience ByteBuffer imposes a nontrivial amount
of overhead, so you might want to consider providing array-based methods as
well.
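
To make that concrete, here is a rough sketch of what exposing an array-based
read path next to a ByteBuffer-based one could look like. The interface and
method names are purely illustrative, not existing parquet-mr API:

{code:java}
import java.io.IOException;
import java.nio.ByteBuffer;

// Hypothetical sketch; names are illustrative, not existing parquet-mr API.
interface ChunkInput {

  // ByteBuffer-based variant, convenient for direct/off-heap buffers.
  void readFully(ByteBuffer dest) throws IOException;

  // Array-based variant; on the decode hot path this avoids per-access
  // ByteBuffer position/limit bookkeeping.
  void readFully(byte[] dest, int offset, int length) throws IOException;

  // Bridge: prefer the array path whenever the buffer is heap-backed,
  // otherwise fall back to the ByteBuffer path.
  default void readFullyPreferArray(ByteBuffer dest) throws IOException {
    if (dest.hasArray()) {
      int off = dest.arrayOffset() + dest.position();
      readFully(dest.array(), off, dest.remaining());
      dest.position(dest.limit());
    } else {
      readFully(dest);
    }
  }
}
{code}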
> Implement vectored IO in parquet file format
> --------------------------------------------
>
> Key: PARQUET-2171
> URL: https://issues.apache.org/jira/browse/PARQUET-2171
> Project: Parquet
> Issue Type: New Feature
> Components: parquet-mr
> Reporter: Mukund Thakur
> Priority: Major
>
> We recently added a new feature called vectored IO in Hadoop to improve read
> performance for seek-heavy readers. Spark jobs and other workloads that use
> Parquet will benefit greatly from this API. Details can be found here:
> [https://github.com/apache/hadoop/commit/e1842b2a749d79cbdc15c524515b9eda64c339d5]
> https://issues.apache.org/jira/browse/HADOOP-18103
> https://issues.apache.org/jira/browse/HADOOP-11867
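
For reference, a rough sketch of how a Parquet reader could invoke the Hadoop
vectored read API, assuming the FileRange/readVectored shape introduced by
HADOOP-18103 (signatures may differ across Hadoop versions, and the offsets and
lengths below are placeholders rather than a real column-chunk layout):

{code:java}
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.List;

import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileRange;

public class VectoredReadSketch {
  // Issues one vectored call for two byte ranges instead of two seek+read
  // round trips.
  static void readColumnChunks(FSDataInputStream in) throws Exception {
    List<FileRange> ranges = Arrays.asList(
        FileRange.createFileRange(4L, 1024),
        FileRange.createFileRange(1_048_576L, 2048));

    // The underlying filesystem may coalesce nearby ranges and read them in parallel.
    in.readVectored(ranges, ByteBuffer::allocate);

    for (FileRange range : ranges) {
      ByteBuffer data = range.getData().get(); // completes once that range has been read
      // ... hand the buffer to the page decoder ...
    }
  }
}
{code}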