jp0317 commented on code in PR #39818:
URL: https://github.com/apache/arrow/pull/39818#discussion_r1473480947
##########
cpp/src/parquet/column_reader.cc:
##########
@@ -1478,16 +1480,29 @@ class TypedRecordReader : public TypedColumnReaderImpl<DType>,
// We skipped the levels by incrementing 'levels_position_'. For values
// we do not have a buffer, so we need to read them and throw them away.
// First we need to figure out how many present/not-null values there are.
-      std::shared_ptr<::arrow::ResizableBuffer> valid_bits;
-      valid_bits = AllocateBuffer(this->pool_);
-
-      PARQUET_THROW_NOT_OK(valid_bits->Resize(bit_util::BytesForBits(skipped_records),
-                                              /*shrink_to_fit=*/true));
+      int64_t buffer_size = bit_util::BytesForBits(skipped_records);
+      if (valid_bits_for_skip_ == nullptr) {
+        valid_bits_for_skip_ = AllocateBuffer(this->pool_);
+      }
+      if (buffer_size > valid_bits_for_skip_->size()) {
Review Comment:
Good point. Theoretically it should also work. One question: how can
[skipped_records](https://github.com/apache/arrow/blob/main/cpp/src/parquet/column_reader.cc#L1473)
in `SkipRecordsInBufferNonRepeated` ever exceed
[kMinLevelBatchSize](https://github.com/apache/arrow/blob/main/cpp/src/parquet/column_reader.cc#L1369)
(i.e., 1024)? If it never does, adding chunk-based growth logic here is
unnecessary; resizing back to 0 feels more straightforward and less risky,
since it follows the old behavior.
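
To make the alternative concrete, here is a rough sketch (not code from the
PR) of the resize-per-call approach: reuse the member buffer, but shrink it
back after each skip so memory usage matches the old per-call allocation. The
names `valid_bits_for_skip_`, `skipped_records`, and `this->pool_` are taken
from the diff above; the surrounding reader context is assumed.

```cpp
// Hypothetical sketch: size the reused buffer exactly for this call.
int64_t buffer_size = bit_util::BytesForBits(skipped_records);
if (valid_bits_for_skip_ == nullptr) {
  valid_bits_for_skip_ = AllocateBuffer(this->pool_);
}
// skipped_records is assumed to be capped by kMinLevelBatchSize (1024),
// so this stays a small allocation and no chunked growth policy is needed.
PARQUET_THROW_NOT_OK(valid_bits_for_skip_->Resize(buffer_size,
                                                  /*shrink_to_fit=*/true));

// ... read and discard the skipped values using valid_bits_for_skip_ ...

// Shrink back to zero afterwards so no memory is retained between calls,
// following the old per-call allocation behavior.
PARQUET_THROW_NOT_OK(valid_bits_for_skip_->Resize(0, /*shrink_to_fit=*/true));
```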