parthchandra commented on code in PR #999:
URL: https://github.com/apache/parquet-mr/pull/999#discussion_r996038868


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1093,10 +1099,38 @@ private ColumnChunkPageReadStore internalReadFilteredRowGroup(BlockMetaData bloc
         }
       }
     }
-    // actually read all the chunks
+    // Vectored IO up.
+
+    List<FileRange> ranges = new ArrayList<>();
     for (ConsecutivePartList consecutiveChunks : allParts) {
-      consecutiveChunks.readAll(f, builder);
+      ranges.add(FileRange.createFileRange(consecutiveChunks.offset, (int) consecutiveChunks.length));
+    }
+    LOG.warn("Doing vectored IO for ranges {}", ranges);
+    f.readVectored(ranges, ByteBuffer::allocate);

Review Comment:
   Hmm. The largest range you will get after a merge is 1 MB (the default for S3). But if parquet provides an 8 MB range, vectored IO will not split it into 8 ranges of 1 MB. The _smallest_ read a parquet file reader is likely to do is 1 MB (the default page size), so in effect we are never going to merge ranges.
   Either way, we have two goals - reduce the number of seeks and increase the parallelism - which seem to be in conflict. If we increase the number of ranges (and consequently use smaller ranges) we get more seeks, and if we decrease the number of ranges we get less parallelism.
   Do you have any suggestions on what a good compromise would be, based on experiments with HDFS/S3 etc.?
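   For illustration only, a helper along these lines (hypothetical, not in this PR; the name and the cap value are made up) could split each large consecutive part into capped-size ranges before handing them to readVectored, trading a few extra seeks for more parallel fetches:

   ```java
   import java.util.ArrayList;
   import java.util.List;

   import org.apache.hadoop.fs.FileRange;

   public final class RangeSplitter {
     // Hypothetical helper, not part of this PR: split one consecutive part into
     // FileRanges no larger than maxRangeSize so that readVectored() can fetch
     // the pieces in parallel. Each additional range costs one more seek/GET.
     static List<FileRange> splitIntoRanges(long offset, long length, int maxRangeSize) {
       List<FileRange> ranges = new ArrayList<>();
       long position = offset;
       long remaining = length;
       while (remaining > 0) {
         int rangeLength = (int) Math.min(remaining, maxRangeSize);
         ranges.add(FileRange.createFileRange(position, rangeLength));
         position += rangeLength;
         remaining -= rangeLength;
       }
       return ranges;
     }
   }
   ```

   The loop above would then do ranges.addAll(splitIntoRanges(consecutiveChunks.offset, consecutiveChunks.length, maxRangeSize)) instead of adding a single range per ConsecutivePartList; the right value for the cap is exactly the HDFS/S3 question above.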
    


