tustvold opened a new issue, #3871: URL: https://github.com/apache/arrow-rs/issues/3871
**Is your feature request related to a problem or challenge? Please describe what you are trying to do.**

Currently `ArrowWriter` buffers up `RecordBatch`es until it has enough rows to populate an entire row group, and then proceeds to write each column in turn to the output buffer.

**Describe the solution you'd like**

The encoded parquet data is often orders of magnitude smaller than the corresponding arrow data, and the read path goes to great lengths to allow incremental reading of data within a row group. It may therefore be desirable to instead encode arrow data eagerly, writing each ColumnChunk to its own temporary buffer, and then stitching these buffers back together. This would allow writing larger row groups, whilst potentially consuming less memory in the arrow writer.

This would likely involve extending, or possibly replacing, `SerializedRowGroupWriter` to allow writing to the same column multiple times.

**Describe alternatives you've considered**

We could choose not to do this: parquet is inherently a read-optimised format, and write performance may therefore be less of a priority for many workloads.
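Below is a minimal sketch of the per-column buffering idea described above. The names `ColumnChunkEncoder` and `stitch_row_group` are purely hypothetical, not existing parquet-rs APIs; the sketch only illustrates encoding each leaf column eagerly into its own in-memory buffer and then concatenating the finished chunks into a row group in column order.

```rust
use arrow_array::RecordBatch;

/// Hypothetical per-column encoder: eagerly encodes one leaf column into
/// its own in-memory buffer as batches arrive, instead of holding the
/// arrow data until a full row group has accumulated.
struct ColumnChunkEncoder {
    /// Encoded bytes (data pages) for this column so far.
    buffer: Vec<u8>,
}

impl ColumnChunkEncoder {
    fn new() -> Self {
        Self { buffer: Vec::new() }
    }

    /// Encode this column's values from `batch` and append the resulting
    /// bytes to the private buffer. The body is a placeholder; a real
    /// implementation would run the parquet column encoder here.
    fn write(&mut self, _batch: &RecordBatch, _column_idx: usize) {
        // Placeholder: push encoded pages into `self.buffer`.
    }
}

/// Stitch the per-column buffers back together into a single row group,
/// writing them to `out` in column order and returning (offset, length)
/// pairs that a real writer would need for the column chunk metadata.
fn stitch_row_group(out: &mut Vec<u8>, columns: Vec<ColumnChunkEncoder>) -> Vec<(u64, u64)> {
    let mut offsets = Vec::with_capacity(columns.len());
    for col in columns {
        let start = out.len() as u64;
        out.extend_from_slice(&col.buffer);
        offsets.push((start, col.buffer.len() as u64));
    }
    offsets
}
```

Because the buffered state would be encoded parquet bytes rather than arrow arrays, memory usage should track the (typically much smaller) encoded size, which is what would make writing larger row groups practical.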
