lidavidm commented on a change in pull request #6744:
URL: https://github.com/apache/arrow/pull/6744#discussion_r413792469



##########
File path: cpp/src/parquet/file_reader.cc
##########
@@ -212,6 +237,21 @@ class SerializedFile : public ParquetFileReader::Contents {
     file_metadata_ = std::move(metadata);
   }
 
+  void PreBuffer(const std::vector<int>& row_groups,
+                 const std::vector<int>& column_indices,
+                 const ::arrow::io::CacheOptions& options) {
+    cached_source_ =
+        std::make_shared<arrow::io::internal::ReadRangeCache>(source_, options);

Review comment:
       That's fair. In our case, even if the dataset as a whole is large, the individual files are relatively small, so it's fine to buffer an entire file and then discard it. But I agree that for other use cases this is not ideal.
   
   A caller who is very concerned about memory could instead explicitly read (and pre-buffer) one row group at a time to bound memory usage. This is rather cumbersome, though.
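   A rough sketch of that per-row-group workaround, assuming PreBuffer ends up exposed through `parquet::arrow::FileReader::parquet_reader()` with the signature shown in this diff (the function name and error handling below are illustrative only, not the final API):
   
```cpp
// Illustrative only: read one row group at a time, pre-buffering just that
// group so peak memory stays around one row group's worth of column data.
#include <memory>
#include <vector>

#include "arrow/api.h"
#include "arrow/io/api.h"
#include "arrow/io/caching.h"  // arrow::io::CacheOptions
#include "parquet/arrow/reader.h"

arrow::Status ReadOneRowGroupAtATime(
    std::shared_ptr<arrow::io::RandomAccessFile> file,
    const std::vector<int>& column_indices) {
  std::unique_ptr<parquet::arrow::FileReader> reader;
  ARROW_RETURN_NOT_OK(
      parquet::arrow::OpenFile(file, arrow::default_memory_pool(), &reader));

  const auto cache_options = arrow::io::CacheOptions::Defaults();
  for (int rg = 0; rg < reader->num_row_groups(); ++rg) {
    // Cache only this row group's column chunks; each PreBuffer call replaces
    // the previous cache, so the whole file is never held in memory at once.
    // (Assumes the PreBuffer signature from this diff.)
    reader->parquet_reader()->PreBuffer({rg}, column_indices, cache_options);

    std::shared_ptr<arrow::Table> table;
    ARROW_RETURN_NOT_OK(reader->ReadRowGroup(rg, column_indices, &table));
    // ... process `table`, then let it go out of scope ...
  }
  return arrow::Status::OK();
}
```
   
   Of course, pre-buffering one row group at a time gives up any coalescing of reads across row groups, which is exactly the tradeoff discussed below.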
   
   We could create a separate cache per row group, but then we lose some performance because we can no longer coalesce reads across row groups. That might still be a worthwhile tradeoff for large files. Correct me if I'm wrong, but even that wouldn't help much without more refactoring, since reading is organized along columns.





