parthchandra commented on code in PR #999:
URL: https://github.com/apache/parquet-mr/pull/999#discussion_r992787569
##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1093,10 +1099,38 @@ private ColumnChunkPageReadStore internalReadFilteredRowGroup(BlockMetaData bloc
         }
       }
     }
-    // actually read all the chunks
+    // Vectored IO up.
+
+    List<FileRange> ranges = new ArrayList<>();
     for (ConsecutivePartList consecutiveChunks : allParts) {
-      consecutiveChunks.readAll(f, builder);
+      ranges.add(FileRange.createFileRange(consecutiveChunks.offset, (int) consecutiveChunks.length));
+    }
+    LOG.warn("Doing vectored IO for ranges {}", ranges);
+    f.readVectored(ranges, ByteBuffer::allocate);

Review Comment:
   see comment below about using `options.getAllocator`

##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1093,10 +1099,38 @@ private ColumnChunkPageReadStore internalReadFilteredRowGroup(BlockMetaData bloc
         }
       }
     }
-    // actually read all the chunks
+    // Vectored IO up.
+
+    List<FileRange> ranges = new ArrayList<>();
     for (ConsecutivePartList consecutiveChunks : allParts) {
-      consecutiveChunks.readAll(f, builder);
+      ranges.add(FileRange.createFileRange(consecutiveChunks.offset, (int) consecutiveChunks.length));

Review Comment:
   Right. I was thinking that `readAllVectored` will take all `ConsecutiveParts` as input. Or at least move this block into a new function. You will need to do the same thing in `readNextRowGroup` as well.

##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1093,10 +1099,38 @@ private ColumnChunkPageReadStore internalReadFilteredRowGroup(BlockMetaData bloc
         }
       }
     }
-    // actually read all the chunks
+    // Vectored IO up.
+
+    List<FileRange> ranges = new ArrayList<>();
     for (ConsecutivePartList consecutiveChunks : allParts) {
-      consecutiveChunks.readAll(f, builder);
+      ranges.add(FileRange.createFileRange(consecutiveChunks.offset, (int) consecutiveChunks.length));
+    }
+    LOG.warn("Doing vectored IO for ranges {}", ranges);
+    f.readVectored(ranges, ByteBuffer::allocate);

Review Comment:
   Does `readVectored` allocate a single buffer per range? Or does it split each range into bite-sized pieces? If all the columns are being read, a single range can be the entire row group, potentially more than a GB.

--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@parquet.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
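[Editor's note] The last review comment raises the concern that a single `FileRange` covering a whole row group could force a single allocation of a GB or more. One caller-side mitigation, independent of what the filesystem's `readVectored` implementation does internally, is to split an oversized range into bounded pieces before submitting it. The sketch below is illustrative only: `RangeSplitter` and `SimpleRange` are hypothetical names standing in for Hadoop's `FileRange`, and this is not code from the PR.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustration only: splits one large read range (offset, length) into
 * pieces of bounded size, so no single buffer allocation exceeds a cap.
 * SimpleRange stands in for Hadoop's FileRange; none of these names
 * exist in parquet-mr or hadoop-common.
 */
public class RangeSplitter {

  /** A plain (offset, length) pair. */
  public static final class SimpleRange {
    public final long offset;
    public final int length;

    public SimpleRange(long offset, int length) {
      this.offset = offset;
      this.length = length;
    }
  }

  /** Split [offset, offset + length) into pieces of at most maxPieceSize bytes. */
  public static List<SimpleRange> split(long offset, long length, int maxPieceSize) {
    List<SimpleRange> pieces = new ArrayList<>();
    long pos = offset;
    long remaining = length;
    while (remaining > 0) {
      // Each piece is full-sized except possibly the last one.
      int pieceLen = (int) Math.min(remaining, maxPieceSize);
      pieces.add(new SimpleRange(pos, pieceLen));
      pos += pieceLen;
      remaining -= pieceLen;
    }
    return pieces;
  }

  public static void main(String[] args) {
    // A hypothetical 1 GiB row group capped at 128 MiB per piece -> 8 ranges.
    List<SimpleRange> pieces = split(0L, 1L << 30, 128 * 1024 * 1024);
    System.out.println(pieces.size()); // prints 8
  }
}
```

On the allocator point from the first comment: `readVectored` takes the buffer factory as its second argument (the `IntFunction<ByteBuffer>` in `f.readVectored(ranges, ByteBuffer::allocate)`), so wiring in the reader's configured allocator instead of bare `ByteBuffer::allocate` should only require changing that argument; the exact accessor (`options.getAllocator` per the review) is whatever the PR's options object exposes.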