albertlockett commented on issue #7545:
URL: https://github.com/apache/arrow-rs/issues/7545#issuecomment-2927239790
I've made some progress on this. It turns out that reading multiple record batches will reproduce the issue:
```rs
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
use arrow::util::pretty::print_batches;

let file = std::fs::File::open("/tmp/test.parquet").unwrap();
let builder = ParquetRecordBatchReaderBuilder::try_new(file).unwrap();
// `ParquetRecordBatchReader` is an iterator over `Result<RecordBatch>`
let mut reader = builder.build().unwrap();

println!("batch 1:");
let batch = reader.next().unwrap().unwrap();
print_batches(&[batch]).unwrap();

println!("batch 2:");
let batch = reader.next().unwrap().unwrap();
print_batches(&[batch]).unwrap();
```
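For completeness, here is one way a suitable `/tmp/test.parquet` could be produced (an assumption on my part; the issue doesn't say how the file was written). A string column with more rows than the reader's default batch size of 1024 should come back as at least two batches, and string columns are dictionary-encoded by default, which is the code path involved here:
```rs
use std::sync::Arc;
use arrow::array::{ArrayRef, DictionaryArray};
use arrow::datatypes::Int32Type;
use arrow::record_batch::RecordBatch;
use parquet::arrow::ArrowWriter;

fn main() {
    // 2048 rows > the default reader batch size of 1024, so reading the
    // file back yields at least two record batches.
    let values: DictionaryArray<Int32Type> =
        (0..2048).map(|i| if i % 2 == 0 { "a" } else { "b" }).collect();
    let batch =
        RecordBatch::try_from_iter([("col", Arc::new(values) as ArrayRef)]).unwrap();

    let file = std::fs::File::create("/tmp/test.parquet").unwrap();
    // Default writer properties dictionary-encode string columns.
    let mut writer = ArrowWriter::try_new(file, batch.schema(), None).unwrap();
    writer.write(&batch).unwrap();
    writer.close().unwrap();
}
```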
Calling `consume_record_data` here:
https://github.com/apache/arrow-rs/blob/3d88c11a47982de5dc175080497b516eba274c77/parquet/src/arrow/array_reader/byte_array_dictionary.rs#L168
calls `take` on the `dictionary_buffer`, which replaces it with its `Default` value:
https://github.com/apache/arrow-rs/blob/3d88c11a47982de5dc175080497b516eba274c77/parquet/src/arrow/buffer/dictionary_buffer.rs#L35-L40
That is why, on the second batch, we end up in this branch trying to decode an offset buffer:
https://github.com/apache/arrow-rs/blob/3d88c11a47982de5dc175080497b516eba274c77/parquet/src/arrow/buffer/dictionary_buffer.rs#L179-L191
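To illustrate the failure mode, here is a minimal, self-contained sketch of the pattern (the `Buffer` enum and its variants are simplified stand-ins I made up, not the actual `DictionaryBuffer` definition):
```rs
// Hypothetical stand-in for DictionaryBuffer: two variants, decoded differently.
#[derive(Debug)]
enum Buffer {
    Dict(Vec<u32>),   // stand-in for the dictionary-keys variant
    Values(Vec<u8>),  // stand-in for the plain values / offset-buffer variant
}

// Suppose the Default implementation produces the Values variant.
impl Default for Buffer {
    fn default() -> Self {
        Buffer::Values(Vec::new())
    }
}

fn consume(buf: &mut Buffer) -> Buffer {
    // `take` moves the buffer out and leaves `Default::default()` behind,
    // silently switching the variant seen by the *next* batch.
    std::mem::take(buf)
}

fn decode(buf: &Buffer) {
    match buf {
        Buffer::Dict(keys) => println!("dictionary branch: {keys:?}"),
        // On the second batch we unexpectedly land here.
        Buffer::Values(vals) => println!("offset-buffer branch: {vals:?}"),
    }
}

fn main() {
    let mut buf = Buffer::Dict(vec![1, 2, 3]);
    decode(&buf);                  // batch 1: dictionary branch
    let _data = consume(&mut buf); // consume_record_data for batch 1
    decode(&buf);                  // batch 2: offset-buffer branch
}
```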
I'll continue investigating a proper solution.