rdettai commented on a change in pull request #6935:
URL: https://github.com/apache/arrow/pull/6935#discussion_r414417688
##########
File path: rust/parquet/src/column/reader.rs
##########
@@ -190,15 +190,12 @@ impl<T: DataType> ColumnReaderImpl<T> {
(self.num_buffered_values - self.num_decoded_values) as usize,
);
- // Adjust batch size by taking into account how much space is left in
- // values slice or levels slices (if available)
- adjusted_size = min(adjusted_size, values.len() - values_read);
- if let Some(ref levels) = def_levels {
-     adjusted_size = min(adjusted_size, levels.len() - levels_read);
- }
- if let Some(ref levels) = rep_levels {
-     adjusted_size = min(adjusted_size, levels.len() - levels_read);
- }
+ // Adjust batch size by taking into account how much data there is
+ // to read. As batch_size is also smaller than the value and level
+ // slices (if available), this ensures that the available space is
+ // not exceeded.
+ adjusted_size = min(adjusted_size, batch_size - values_read);
Review comment:
As stated in my PR comment, the `read_batch` function can receive any
combination of `batch_size`, `def_levels.len()`, `rep_levels.len()` and
`values.len()`. If `batch_size` is the limiting factor, your
`iter_batch_size` might end up larger than the `batch_size`. This happened to
me when `read_batch` was called from `record_reader.rs` on a parquet file with
relatively small row groups (500k rows). I did not manage to reproduce the
phenomenon with a mock data file, though... :-/
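To make the failure mode concrete, here is a minimal, self-contained sketch of
the two clamping strategies. The helper names (`clamp_old`, `clamp_new`) and
the `*_space` parameters are hypothetical, introduced only for illustration;
this is not the actual `ColumnReaderImpl` code:

```rust
use std::cmp::min;

// Old logic: clamp by the remaining space in every caller-supplied slice.
fn clamp_old(
    batch_size: usize,
    remaining_in_page: usize,        // num_buffered_values - num_decoded_values
    values_space: usize,             // values.len() - values_read
    def_levels_space: Option<usize>, // def_levels.len() - levels_read
    rep_levels_space: Option<usize>, // rep_levels.len() - levels_read
) -> usize {
    let mut adjusted = min(batch_size, remaining_in_page);
    adjusted = min(adjusted, values_space);
    if let Some(space) = def_levels_space {
        adjusted = min(adjusted, space);
    }
    if let Some(space) = rep_levels_space {
        adjusted = min(adjusted, space);
    }
    adjusted
}

// New logic: clamp only by batch_size, assuming the slices are at least
// batch_size long.
fn clamp_new(batch_size: usize, remaining_in_page: usize, values_read: usize) -> usize {
    min(min(batch_size, remaining_in_page), batch_size - values_read)
}

fn main() {
    // A caller passes a 256-element values slice but asks for 1024 values.
    let (batch_size, remaining_in_page) = (1024, 500_000);
    assert_eq!(clamp_old(batch_size, remaining_in_page, 256, None, None), 256);
    // The new clamp never sees the slice length, so a subsequent decode of
    // 1024 values would write past the 256-element slice.
    assert_eq!(clamp_new(batch_size, remaining_in_page, 0), 1024);
}
```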
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]