mukund-thakur commented on code in PR #999:
URL: https://github.com/apache/parquet-mr/pull/999#discussion_r996074063
##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1093,10 +1099,38 @@ private ColumnChunkPageReadStore internalReadFilteredRowGroup(BlockMetaData bloc
         }
}
}
- // actually read all the chunks
+ // Vectored IO up.
+
+ List<FileRange> ranges = new ArrayList<>();
for (ConsecutivePartList consecutiveChunks : allParts) {
- consecutiveChunks.readAll(f, builder);
+      ranges.add(FileRange.createFileRange(consecutiveChunks.offset, (int) consecutiveChunks.length));
+    }
+ LOG.warn("Doing vectored IO for ranges {}", ranges);
+ f.readVectored(ranges, ByteBuffer::allocate);
Review Comment:
I haven't been able to run any benchmarks for Parquet yet. We can always make this
configurable, or change the default later, based on the outcome of the experiments
we perform.
So far we have only benchmarked Hive with ORC. Even for Hive, I want to run more
benchmarks with different values for the minseek and maxsize settings to figure
out the best defaults.
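For readers unfamiliar with why minseek/maxsize tuning matters: a vectored-read layer typically coalesces nearby ranges so that small gaps are read through rather than paying a separate seek, while capping how large a merged read can grow. The sketch below is a hypothetical, self-contained illustration of that coalescing policy, not the actual Hadoop `FileRange` merging code; the `Range` class and `merge` method are invented for this example.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Hypothetical sketch of range coalescing in a vectored-read layer:
 * gaps smaller than minSeek are merged into one read, and a merged
 * range never grows beyond maxSize. Not the Hadoop implementation.
 */
public class VectoredRangeMerge {

    /** A simple (offset, length) pair standing in for a file range. */
    static final class Range {
        final long offset;
        final long length;
        Range(long offset, long length) {
            this.offset = offset;
            this.length = length;
        }
        long end() { return offset + length; }
    }

    /** Merges sorted, non-overlapping ranges under minSeek/maxSize limits. */
    static List<Range> merge(List<Range> sorted, long minSeek, long maxSize) {
        List<Range> out = new ArrayList<>();
        Range current = null;
        for (Range r : sorted) {
            if (current == null) {
                current = r;
            } else if (r.offset - current.end() < minSeek
                    && r.end() - current.offset <= maxSize) {
                // Gap too small to be worth a separate seek: coalesce.
                current = new Range(current.offset, r.end() - current.offset);
            } else {
                out.add(current);
                current = r;
            }
        }
        if (current != null) {
            out.add(current);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Range> ranges = new ArrayList<>();
        ranges.add(new Range(0, 100));
        ranges.add(new Range(150, 100));    // 50-byte gap: merged when minSeek > 50
        ranges.add(new Range(10_000, 100)); // large gap: read separately
        for (Range r : merge(ranges, 64, 4096)) {
            System.out.println(r.offset + "+" + r.length);
        }
    }
}
```

With minSeek=64 and maxSize=4096 the first two ranges coalesce into one 250-byte read and the distant third range stays separate, which is exactly the trade-off the benchmark sweep over these two settings would explore.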
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]