HippoBaro opened a new issue, #9446:
URL: https://github.com/apache/arrow-rs/issues/9446

   **Is your feature request related to a problem or challenge? Please describe 
what you are trying to do.**
   <!--
   A clear and concise description of what the problem is. Ex. I'm always 
frustrated when [...] 
   (This section helps Arrow developers understand the context and *why* for 
this feature, in addition to  the *what*)
   -->
   
   When writing a Parquet column with very sparse data, the column writer's 
memory usage grows without bound (linear in the row count) even though the 
actual encoded output can be tiny.
   
   Concretely, `GenericColumnWriter` appends raw `i16` definition and 
repetition levels into `Vec<i16>` sinks (`def_levels_sink` / `rep_levels_sink`) 
on every `write_batch` call, and only RLE-encodes them in bulk when a data page 
is flushed.
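   
   To illustrate the asymmetry, for a mostly-null column the raw level sink holds two bytes per row while the RLE representation collapses to a handful of runs. A minimal sketch using a hypothetical `rle_runs` helper (std-only, not the parquet crate's actual encoder):
   
   ```rust
   // Hypothetical helper: collapse raw levels into (value, run length) pairs,
   // approximating what RLE achieves on page flush.
   fn rle_runs(levels: &[i16]) -> Vec<(i16, usize)> {
       let mut runs: Vec<(i16, usize)> = Vec::new();
       for &lvl in levels {
           match runs.last_mut() {
               Some((v, n)) if *v == lvl => *n += 1,
               _ => runs.push((lvl, 1)),
           }
       }
       runs
   }
   
   fn main() {
       // 1 million rows, all null: definition level 0 everywhere.
       let levels = vec![0i16; 1_000_000];
       let raw_bytes = levels.len() * std::mem::size_of::<i16>();
       let runs = rle_runs(&levels);
       println!("raw sink: {raw_bytes} bytes, runs: {}", runs.len());
       // raw sink: 2000000 bytes, runs: 1
   }
   ```
   
   The 2 MB raw sink stays resident until the page flushes, while the encoded form is a single run.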
   
   **Describe the solution you'd like**
   <!--
   A clear and concise description of what you want to happen.
   -->
   
   Replace the two raw-level `Vec<i16>` sinks with streaming `RleEncoder` 
instances. Levels would be RLE-encoded incrementally as each `write_batch` call 
arrives, so only the compact encoded bytes accumulate in memory.
   
   This would make memory usage for definition/repetition levels proportional 
to the encoded size of the current page.
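   
   A rough sketch of the streaming shape, assuming an incremental encoder that keeps only the in-progress run plus the already-encoded bytes (names and byte layout are illustrative, not the parquet crate's `RleEncoder` API):
   
   ```rust
   // Illustrative streaming level encoder: `put` is called once per
   // write_batch; memory held is O(encoded bytes), not O(rows).
   struct StreamingLevelEncoder {
       current: Option<(i16, u64)>, // (level value, run length) in progress
       encoded: Vec<u8>,            // compact output accumulated so far
   }
   
   impl StreamingLevelEncoder {
       fn new() -> Self {
           Self { current: None, encoded: Vec::new() }
       }
   
       /// Consume one batch of levels, extending or flushing the current run.
       fn put(&mut self, levels: &[i16]) {
           for &lvl in levels {
               match self.current {
                   Some((v, n)) if v == lvl => self.current = Some((v, n + 1)),
                   _ => {
                       self.flush_run();
                       self.current = Some((lvl, 1));
                   }
               }
           }
       }
   
       /// Emit the in-progress run as (run length, value) little-endian bytes.
       /// (A toy layout; real RLE uses varint headers and bit-packing.)
       fn flush_run(&mut self) {
           if let Some((v, n)) = self.current.take() {
               self.encoded.extend_from_slice(&n.to_le_bytes());
               self.encoded.extend_from_slice(&v.to_le_bytes());
           }
       }
   
       /// Finish the page: flush the last run and return the encoded bytes.
       fn finish(mut self) -> Vec<u8> {
           self.flush_run();
           self.encoded
       }
   }
   ```
   
   On page flush, `finish` would yield the bytes that today are produced by bulk-encoding the `Vec<i16>` sink; between flushes, only run state and encoded output are retained.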
   
   
   **Describe alternatives you've considered**
   <!--
   A clear and concise description of any alternative solutions or features 
you've considered.
   -->
   
   **Additional context**
   <!--
   Add any other context or screenshots about the feature request here.
   -->
   
   N/A
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]