nevi-me opened a new issue #819:
URL: https://github.com/apache/arrow-rs/issues/819


   **Is your feature request related to a problem or challenge? Please describe 
what you are trying to do.**
   
   When writing Arrow binary columns to Parquet, we create thousands of small 
`ByteBuffer` objects, so a significant share of writer time is spent 
allocating and dropping these objects.
   
   **Describe the solution you'd like**
   
   A `ByteBuffer` is backed by a `ByteBufferPtr`, which is an alias for 
`Arc<Vec<u8>>` with length and offset abstractions on top. If we instead 
created a single `ByteBuffer` from the Arrow data and reused that buffer when 
writing binary values, we would reduce the allocations down to one.
   
   Local experiments have shown reasonable improvements in the writer.
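   The idea can be sketched with a minimal stand-in type (`SharedSlice` below 
is hypothetical, modeling how `ByteBufferPtr` wraps `Arc<Vec<u8>>` with an 
offset and length; it is not the actual parquet API). The point is that 
per-value views are cheap `(offset, len)` pairs over one shared allocation, so 
slicing never reallocates:

```rust
use std::sync::Arc;

// Hypothetical stand-in for `ByteBufferPtr`: a shared buffer plus an
// offset/length window. Cloning only bumps the Arc refcount.
#[derive(Clone)]
struct SharedSlice {
    data: Arc<Vec<u8>>,
    offset: usize,
    len: usize,
}

impl SharedSlice {
    fn new(data: Vec<u8>) -> Self {
        let len = data.len();
        Self { data: Arc::new(data), offset: 0, len }
    }

    // Return a sub-view sharing the same backing allocation.
    fn slice(&self, offset: usize, len: usize) -> Self {
        assert!(offset + len <= self.len);
        Self { data: Arc::clone(&self.data), offset: self.offset + offset, len }
    }

    fn as_bytes(&self) -> &[u8] {
        &self.data[self.offset..self.offset + self.len]
    }
}

fn main() {
    // One allocation holding all binary values back-to-back...
    let buf = SharedSlice::new(b"foobarbaz".to_vec());
    // ...and per-value views that are just (offset, len) pairs.
    let values = [buf.slice(0, 3), buf.slice(3, 3), buf.slice(6, 3)];
    assert_eq!(values[1].as_bytes(), b"bar");
    // `buf` plus the three views share a single backing allocation.
    assert_eq!(Arc::strong_count(&buf.data), 4);
}
```

This mirrors the proposal: the writer would allocate one buffer for the Arrow 
data and hand the encoder cheap views into it instead of thousands of owned 
`ByteBuffer`s.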
   
   **Describe alternatives you've considered**
   
   I considered slicing into the Arrow buffer directly, but the 
`parquet::encoding::Encoding` API is too inflexible for that approach.
   
   **Additional context**
   
   I noticed this while profiling code and trying to simplify how we write 
nested lists.
   

