mukund-thakur commented on code in PR #999:
URL: https://github.com/apache/parquet-mr/pull/999#discussion_r992678347


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1093,10 +1099,38 @@ private ColumnChunkPageReadStore internalReadFilteredRowGroup(BlockMetaData bloc
         }
       }
     }
-    // actually read all the chunks
+    // Vectored IO up.
+
+    List<FileRange> ranges = new ArrayList<>();
     for (ConsecutivePartList consecutiveChunks : allParts) {
-      consecutiveChunks.readAll(f, builder);
      ranges.add(FileRange.createFileRange(consecutiveChunks.offset, (int) consecutiveChunks.length));

Review Comment:
   I have changed both of the places where I thought the integration could be done. I am not really sure which will give better performance results, which is why I left the other portion commented out.
   1. One option is to make the change in readAllVectored, as you suggested and as I did before.
   2. The other change (the current one) is at the top layer.
   The reason I moved from the 1st to the 2nd is the name `ConsecutivePartList`. The name suggests it is a consecutive part, essentially a single range, for which we won't get the real vectored IO benefits like parallel IO and range coalescing.
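   To illustrate why handing the reader a list of ranges matters: a vectored IO implementation can merge ranges separated by small gaps into fewer, larger reads, which is exactly the coalescing a single `ConsecutivePartList` range can never benefit from. Below is a minimal, self-contained sketch of that coalescing idea; it is not Hadoop's implementation, and the class and threshold names are illustrative only.

```java
import java.util.ArrayList;
import java.util.List;

public class RangeCoalescer {
    // A simple (offset, length) pair standing in for a file range.
    static final class Range {
        final long offset;
        final long length;
        Range(long offset, long length) { this.offset = offset; this.length = length; }
        long end() { return offset + length; }
    }

    // Merge ranges (assumed sorted by offset) whose gap is at most minGap bytes,
    // so the reader can issue fewer, larger IOs.
    static List<Range> coalesce(List<Range> sorted, long minGap) {
        List<Range> out = new ArrayList<>();
        for (Range r : sorted) {
            if (!out.isEmpty()) {
                Range last = out.get(out.size() - 1);
                if (r.offset - last.end() <= minGap) {
                    // Close enough: extend the previous range to cover this one.
                    out.set(out.size() - 1, new Range(last.offset, r.end() - last.offset));
                    continue;
                }
            }
            out.add(r);
        }
        return out;
    }

    public static void main(String[] args) {
        List<Range> ranges = new ArrayList<>();
        ranges.add(new Range(0, 100));      // chunk 1
        ranges.add(new Range(150, 100));    // 50-byte gap: coalesced with chunk 1
        ranges.add(new Range(10_000, 100)); // far away: stays a separate read
        List<Range> merged = coalesce(ranges, 1024);
        System.out.println(merged.size());        // 2
        System.out.println(merged.get(0).length); // 250
    }
}
```

   With a single pre-merged consecutive part, the list above would collapse to one range before the IO layer ever sees it, so neither coalescing nor parallel dispatch of independent ranges can kick in.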



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
