wgtmac commented on code in PR #14353:
URL: https://github.com/apache/arrow/pull/14353#discussion_r1085376681
##########
cpp/src/parquet/encoding.h:
##########
@@ -317,6 +317,13 @@ class TypedDecoder : virtual public Decoder {
                            int64_t valid_bits_offset,
                            typename EncodingTraits<DType>::Accumulator* out) = 0;
+  virtual int DecodeArrow_opt(int num_values, int null_count, const uint8_t* valid_bits,
+                              int32_t* offset,
+                              std::shared_ptr<::arrow::ResizableBuffer>& values,
+                              int64_t valid_bits_offset, int32_t* bianry_length) {
+    return 0;
Review Comment:
> @wgtmac I'm not sure if you are objecting to my proposal above. A separate `BufferBuilder` for the string data allows to presize for the computed total length, so it should address your concern.
Sorry, I didn't make that clear. I'm not objecting to your proposal. I was just summarizing another optimization we have done before, which applies when the same `arrow::RecordBatch` can be reused across reads on the same reader. An internal experiment showed that repeated buffer allocation and resize operations are a non-negligible overhead, especially for wide tables (e.g. 1000+ columns). @pitrou
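
For reference, a minimal sketch (not part of this patch) of what the presized-builder idea could look like for the string payload. `ComputeTotalBinaryLength` and `DecodeBinaryColumn` are hypothetical helper names, and PLAIN-encoded `BYTE_ARRAY` data is assumed; the point is only that reserving the builder once avoids incremental reallocations:

```cpp
#include <cstdint>
#include <cstring>
#include <memory>

#include "arrow/buffer.h"
#include "arrow/buffer_builder.h"
#include "arrow/status.h"

// Hypothetical first pass: sum the 4-byte length prefixes of
// PLAIN-encoded BYTE_ARRAY values to get the total payload size.
int64_t ComputeTotalBinaryLength(const uint8_t* data, int num_values) {
  int64_t total = 0;
  const uint8_t* cursor = data;
  for (int i = 0; i < num_values; ++i) {
    uint32_t len;
    std::memcpy(&len, cursor, sizeof(len));
    total += len;
    cursor += sizeof(len) + len;
  }
  return total;
}

// Hypothetical decode path: a dedicated BufferBuilder for the string data,
// presized to the computed total length before any values are appended.
arrow::Status DecodeBinaryColumn(const uint8_t* plain_data, int num_values,
                                 std::shared_ptr<arrow::Buffer>* out) {
  const int64_t total_length = ComputeTotalBinaryLength(plain_data, num_values);

  arrow::BufferBuilder data_builder;
  // Reserve once for the whole batch; the appends below never reallocate.
  ARROW_RETURN_NOT_OK(data_builder.Reserve(total_length));

  const uint8_t* cursor = plain_data;
  for (int i = 0; i < num_values; ++i) {
    uint32_t len;
    std::memcpy(&len, cursor, sizeof(len));
    cursor += sizeof(len);
    data_builder.UnsafeAppend(cursor, len);  // capacity already reserved above
    cursor += len;
  }
  return data_builder.Finish(out);
}
```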
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]