pitrou commented on a change in pull request #6744:
URL: https://github.com/apache/arrow/pull/6744#discussion_r413747770
##########
File path: cpp/src/parquet/file_reader.cc
##########
@@ -212,6 +237,21 @@ class SerializedFile : public ParquetFileReader::Contents {
file_metadata_ = std::move(metadata);
}
+  void PreBuffer(const std::vector<int>& row_groups,
+                 const std::vector<int>& column_indices,
+                 const ::arrow::io::CacheOptions& options) {
+    cached_source_ =
+        std::make_shared<arrow::io::internal::ReadRangeCache>(source_, options);
Review comment:
I guess my question is: if I'm reading one record batch at a time (in
streaming fashion), shouldn't the cache be per-record batch?
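To illustrate the concern, here is a minimal sketch of what a per-row-group cache could look like on the caller's side: pre-buffer only the ranges for the row group about to be read, so the cache footprint stays bounded while streaming. This assumes the `PreBuffer` signature from the diff above plus the existing `parquet::arrow::FileReader` API (`parquet_reader()`, `GetRecordBatchReader`); it is a sketch, not compiled against this PR.

```cpp
#include <arrow/io/caching.h>
#include <parquet/arrow/reader.h>

// Hypothetical helper: stream record batches, caching one row group at a time.
void StreamWithPerRowGroupCache(parquet::arrow::FileReader* reader,
                                const std::vector<int>& column_indices) {
  const auto options = ::arrow::io::CacheOptions::Defaults();
  for (int rg = 0; rg < reader->num_row_groups(); ++rg) {
    // Cache only this row group's byte ranges (PreBuffer as added in this PR),
    // instead of pre-buffering every requested row group up front.
    reader->parquet_reader()->PreBuffer({rg}, column_indices, options);

    std::shared_ptr<::arrow::RecordBatchReader> batch_reader;
    // Error handling elided for brevity.
    reader->GetRecordBatchReader({rg}, column_indices, &batch_reader);

    std::shared_ptr<::arrow::RecordBatch> batch;
    while (batch_reader->ReadNext(&batch).ok() && batch != nullptr) {
      // ... process batch ...
    }
  }
}
```

With the cache scoped like this, ranges for a row group can be dropped once its batches have been consumed, rather than holding the whole file's cached ranges for the duration of the stream.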