pacman82 commented on PR #1774: URL: https://github.com/apache/arrow-rs/pull/1774#issuecomment-1147269547
@tustvold For now, getting the `compressed_size` of each row group after I've written it did the trick for me. End to end, my use case is about creating files of roughly the same size while streaming data from a database. My current solution works like this (sketched below):

1. Accumulate the compressed size of each row group as it is written.
2. If the sum of compressed sizes goes over a threshold, reset it to zero and start writing the next row group into a new file.

Maybe this interface would help me simplify things? Or is there a way to be more "precise" about the resulting file size? Anyhow, I do not see one at the moment.

Cheers, Markus
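P.S. For anyone curious, here is a minimal sketch of that bookkeeping in plain Rust. The compressed sizes and the threshold are made-up numbers standing in for the real database stream and parquet writer; in the actual code each size is read back from the row group metadata after the row group has been written.

```rust
fn main() {
    // Pretend row groups with known compressed sizes (bytes) coming from a
    // stream; in the real code these come from the row group metadata after
    // each row group has been written out.
    let compressed_sizes: Vec<i64> = vec![40, 70, 30, 90, 20, 60];
    let threshold: i64 = 100;

    let mut file_index = 0;
    let mut written_in_file: i64 = 0;
    println!("open file {file_index}");

    for size in compressed_sizes {
        // Roll over to a new file once the row groups written so far have
        // pushed the running total over the threshold.
        if written_in_file >= threshold {
            println!("close file {file_index} ({written_in_file} compressed bytes)");
            file_index += 1;
            written_in_file = 0;
            println!("open file {file_index}");
        }
        // Stand-in for "write one row group, then add its compressed size".
        written_in_file += size;
    }
    println!("close file {file_index} ({written_in_file} compressed bytes)");
}
```

Because the check happens only after a whole row group has been written, each file overshoots the threshold by at most one row group, which is precise enough for my purposes.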
