parthchandra commented on code in PR #999:
URL: https://github.com/apache/parquet-mr/pull/999#discussion_r991511389


##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1811,6 +1845,44 @@ public void readAll(SeekableInputStream f, ChunkListBuilder builder) throws IOEx
       }
     }
 
+    public void readAllVectored(SeekableInputStream f, ChunkListBuilder builder)
+      throws IOException, ExecutionException, InterruptedException {
+      LOG.warn("Reading through the vectored API.[readAllVectored]");
+      List<Chunk> result = new ArrayList<Chunk>(chunks.size());
+
+      int fullAllocations = (int) (length / options.getMaxAllocationSize());
+      int lastAllocationSize = (int) (length % options.getMaxAllocationSize());
+
+      int numAllocations = fullAllocations + (lastAllocationSize > 0 ? 1 : 0);
+      List<FileRange> fileRanges = new ArrayList<>(numAllocations);
+
+      long currentOffset = offset;
+      for (int i = 0; i < fullAllocations; i += 1) {
+        //buffers.add(options.getAllocator().allocate(options.getMaxAllocationSize()));
+        fileRanges.add(FileRange.createFileRange(currentOffset, options.getMaxAllocationSize()));
+        currentOffset = currentOffset + options.getMaxAllocationSize();
+      }
+
+      if (lastAllocationSize > 0) {
+        //buffers.add(options.getAllocator().allocate(lastAllocationSize));
+        fileRanges.add(FileRange.createFileRange(currentOffset, lastAllocationSize));
+      }
+      LOG.warn("Doing vectored IO for ranges {}", fileRanges);
+      f.readVectored(fileRanges, ByteBuffer::allocate);

Review Comment:
   Use the allocator (`options.getAllocator`)? Keep in mind the allocated buffer might be a direct byte buffer.
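   A minimal sketch of the idea (the `Allocator` interface below is a stand-in for Parquet's `ByteBufferAllocator`, not the real type): route the buffer factory passed to `readVectored` through the configured allocator instead of hard-coding `ByteBuffer::allocate`, and remember that an allocator-supplied buffer may be direct, so downstream code cannot assume `buffer.array()` exists.

   ```java
   import java.nio.ByteBuffer;
   import java.util.function.IntFunction;

   public class AllocatorSketch {
     // Stand-in for Parquet's ByteBufferAllocator: may hand out heap or direct buffers.
     interface Allocator {
       ByteBuffer allocate(int size);
       boolean isDirect();
     }

     // Adapt the allocator to the IntFunction<ByteBuffer> shape that a
     // vectored-read API takes as its buffer factory.
     static IntFunction<ByteBuffer> asBufferFactory(Allocator allocator) {
       return allocator::allocate;
     }

     public static void main(String[] args) {
       Allocator direct = new Allocator() {
         public ByteBuffer allocate(int size) { return ByteBuffer.allocateDirect(size); }
         public boolean isDirect() { return true; }
       };
       ByteBuffer buf = asBufferFactory(direct).apply(1024);
       // A direct buffer has no backing array; consumers must read through the
       // ByteBuffer API rather than calling buf.array().
       System.out.println(buf.isDirect() + " " + buf.capacity());
     }
   }
   ```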



##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1093,10 +1099,38 @@ private ColumnChunkPageReadStore internalReadFilteredRowGroup(BlockMetaData bloc
         }
       }
     }
-    // actually read all the chunks
+    // Vectored IO up.
+
+    List<FileRange> ranges = new ArrayList<>();
     for (ConsecutivePartList consecutiveChunks : allParts) {
-      consecutiveChunks.readAll(f, builder);
+      ranges.add(FileRange.createFileRange(consecutiveChunks.offset, (int) consecutiveChunks.length));

Review Comment:
   I would do this the way you were planning to do it initially (or so it appears): move this into a readAllVectored method.
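   The core of such a method is the range-splitting step already present in the diff: carve `[offset, offset + length)` into ranges no larger than the max allocation size. A self-contained sketch of that arithmetic (the `Range` record is a stand-in for Hadoop's `FileRange`):

   ```java
   import java.util.ArrayList;
   import java.util.List;

   public class RangeSplitSketch {
     // Stand-in for Hadoop's FileRange: an (offset, length) pair.
     record Range(long offset, int length) {}

     static List<Range> split(long offset, long length, int maxAllocationSize) {
       int full = (int) (length / maxAllocationSize);       // ranges of maximum size
       int last = (int) (length % maxAllocationSize);       // trailing partial range
       List<Range> ranges = new ArrayList<>(full + (last > 0 ? 1 : 0));
       long current = offset;
       for (int i = 0; i < full; i++) {
         ranges.add(new Range(current, maxAllocationSize));
         current += maxAllocationSize;
       }
       if (last > 0) {
         ranges.add(new Range(current, last));
       }
       return ranges;
     }

     public static void main(String[] args) {
       // 2500 bytes at offset 100 with a 1024-byte cap: two full ranges plus a 452-byte tail.
       List<Range> ranges = split(100L, 2500L, 1024);
       System.out.println(ranges.size());
       System.out.println(ranges.get(2).length());
     }
   }
   ```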



##########
parquet-hadoop/src/main/java/org/apache/parquet/hadoop/ParquetFileReader.java:
##########
@@ -1093,10 +1099,38 @@ private ColumnChunkPageReadStore internalReadFilteredRowGroup(BlockMetaData bloc
         }
       }
     }
-    // actually read all the chunks
+    // Vectored IO up.
+
+    List<FileRange> ranges = new ArrayList<>();
     for (ConsecutivePartList consecutiveChunks : allParts) {
-      consecutiveChunks.readAll(f, builder);
+      ranges.add(FileRange.createFileRange(consecutiveChunks.offset, (int) consecutiveChunks.length));

Review Comment:
   And make it configurable so callers can choose between vectored and non-vectored I/O (see `HadoopReadOptions` and `ParquetReadOptions`).
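   A hedged sketch of what such a flag could look like (the name `useVectoredIo` and the builder shape are illustrative, not the actual `ParquetReadOptions` API):

   ```java
   public class ReadOptionsSketch {
     static final class Options {
       private final boolean useVectoredIo;
       private Options(boolean useVectoredIo) { this.useVectoredIo = useVectoredIo; }
       boolean useVectoredIo() { return useVectoredIo; }

       static final class Builder {
         private boolean useVectoredIo = false; // default: classic readAll path
         Builder withVectoredIo(boolean enabled) { this.useVectoredIo = enabled; return this; }
         Options build() { return new Options(useVectoredIo); }
       }
     }

     public static void main(String[] args) {
       Options opts = new Options.Builder().withVectoredIo(true).build();
       // The reader would then branch on the flag, e.g.:
       //   if (opts.useVectoredIo()) consecutiveChunks.readAllVectored(f, builder);
       //   else                      consecutiveChunks.readAll(f, builder);
       System.out.println(opts.useVectoredIo());
     }
   }
   ```

   Defaulting the flag to off keeps current behavior for filesystems whose `readVectored` support is untested.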



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
