tustvold opened a new issue, #5775:
URL: https://github.com/apache/arrow-rs/issues/5775

   **Is your feature request related to a problem or challenge? Please describe 
what you are trying to do.**
   <!--
   A clear and concise description of what the problem is. Ex. I'm always 
frustrated when [...] 
   (This section helps Arrow developers understand the context and *why* for 
this feature, in addition to  the *what*)
   -->
   
   Currently, when reading the parquet metadata, in particular in 
`decode_metadata`, we read into structures within `parquet::format` that are 
generated by the thrift compiler. Despite reading from a fixed slice of memory, 
these allocate new buffers for all variable-width data, including all 
statistics. This is especially unfortunate as these thrift data structures are 
temporary, and are quickly parsed into more optimal data structures.
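
   For illustration, the generated types look roughly like the following (a 
simplified sketch, not the exact `parquet::format` definitions): every 
variable-width field is an owned `String` or `Vec<u8>`, so decoding copies 
bytes out of the metadata slice into a fresh heap allocation.

```rust
// Simplified sketch of the shape of the thrift-generated metadata types; the
// exact `parquet::format` definitions differ, but the ownership pattern is
// the same: every variable-width field owns its own heap buffer.
pub struct Statistics {
    pub min_value: Option<Vec<u8>>, // freshly allocated per column chunk
    pub max_value: Option<Vec<u8>>, // freshly allocated per column chunk
    pub null_count: Option<i64>,
    pub distinct_count: Option<i64>,
}

pub struct SchemaElement {
    pub name: String, // another small allocation per column
    // ...
}
```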
   
   For example, when reading the schema of a 26-column parquet file we see 
almost 10,000 allocations, most of them a result of the thrift data structures.
   
   
![image](https://github.com/apache/arrow-rs/assets/1781103/bb79403e-0f75-4e12-a257-15e1e54d8528)
   
   Almost all of these allocations are very small:
   
   
![image](https://github.com/apache/arrow-rs/assets/1781103/27b5aad4-f577-4834-b18b-04c3ab50ff5c)
   
   **Describe the solution you'd like**
   <!--
   A clear and concise description of what you want to happen.
   -->
   
   The optimal solution would be for the parquet decoder to borrow data rather 
than allocating new data structures. This would avoid the vast majority of 
these allocations, and is possible because the thrift binary encoding is just a 
size-prefixed byte string - 
https://github.com/apache/thrift/blob/master/doc/specs/thrift-compact-protocol.md#binary-encoding
   
   Given that a lot of the allocations are small, deploying a small-string 
optimisation might also be valuable, but just borrowing string slices would be 
optimal.
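
   As a rough sketch of what borrowing could look like (names and structure 
here are hypothetical, not an existing parquet-rs API): the compact protocol 
encodes a binary/string field as a varint length followed by the bytes, so a 
decoder that holds the metadata slice can hand out sub-slices instead of 
copying into a `Vec<u8>`.

```rust
/// Hypothetical borrowing reader over the metadata buffer; illustrative only,
/// not an existing parquet-rs API.
struct SliceReader<'a> {
    buf: &'a [u8],
    pos: usize,
}

impl<'a> SliceReader<'a> {
    /// Read a ULEB128 varint, as used by the thrift compact protocol for
    /// lengths and field headers.
    fn read_varint(&mut self) -> Result<u64, &'static str> {
        let mut value = 0u64;
        let mut shift = 0;
        loop {
            let byte = *self.buf.get(self.pos).ok_or("unexpected end of buffer")?;
            self.pos += 1;
            value |= u64::from(byte & 0x7F) << shift;
            if byte & 0x80 == 0 {
                return Ok(value);
            }
            shift += 7;
            if shift >= 64 {
                return Err("varint too long");
            }
        }
    }

    /// Read a length-prefixed binary field as a slice borrowed from the input,
    /// avoiding the `Vec<u8>` allocation the generated code performs today.
    fn read_binary(&mut self) -> Result<&'a [u8], &'static str> {
        let len = self.read_varint()? as usize;
        let end = self.pos.checked_add(len).ok_or("length overflow")?;
        let slice = self.buf.get(self.pos..end).ok_or("unexpected end of buffer")?;
        self.pos = end;
        Ok(slice)
    }
}
```

   String fields could then be validated with `std::str::from_utf8` over the 
same borrowed slice, and the temporary thrift structures (or whatever replaces 
them) would carry a lifetime tied to the metadata buffer rather than owned 
buffers.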
   
   **Describe alternatives you've considered**
   <!--
   A clear and concise description of any alternative solutions or features 
you've considered.
   -->
   
   **Additional context**
   <!--
   Add any other context or screenshots about the feature request here.
   -->
   

