alamb opened a new issue, #1718:
URL: https://github.com/apache/arrow-rs/issues/1718

   **Is your feature request related to a problem or challenge? Please describe 
what you are trying to do.**
   Encoding / compression is most often the bottleneck when trying to increase the 
throughput of writing parquet files. Even though the actual writing of bytes 
must be done serially, the encoding could be done in parallel (into memory 
buffers) before the actual write.
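
   The pattern described above can be sketched with plain `std` (no arrow/parquet types): encode each chunk in parallel into its own in-memory buffer, then write the buffers serially in order. The `encode_chunk` function here is a hypothetical placeholder standing in for real parquet column encoding/compression.

   ```rust
   use std::io::Write;
   use std::thread;

   // Placeholder for per-chunk encoding. In the real case this would be
   // parquet encoding + compression of one row group / record batch.
   fn encode_chunk(chunk: &[u8]) -> Vec<u8> {
       chunk.iter().map(|b| b.wrapping_add(1)).collect()
   }

   fn main() {
       let chunks: Vec<Vec<u8>> = vec![vec![1, 2], vec![3, 4], vec![5, 6]];

       // Fan out: encode each chunk on its own thread, into its own buffer.
       let handles: Vec<_> = chunks
           .into_iter()
           .map(|c| thread::spawn(move || encode_chunk(&c)))
           .collect();

       // Fan in: write the encoded buffers serially, preserving chunk order,
       // which is what keeps the output file well-formed.
       let mut out: Vec<u8> = Vec::new();
       for h in handles {
           out.write_all(&h.join().unwrap()).unwrap();
       }
       assert_eq!(out, vec![2, 3, 4, 5, 6, 7]);
       println!("wrote {} bytes", out.len());
   }
   ```

   Only the final write is serialized; the CPU-heavy encoding step runs on all available cores.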
   
   **Describe the solution you'd like**
   I would like a way (either an explicit API or an example) to use 
multiple cores to write Arrow `RecordBatch`es to a parquet file.
   
   Note that trying to parallelize writes today results in corrupted parquet 
files; see https://github.com/apache/arrow-rs/issues/1717
   
   **Describe alternatives you've considered**
   There is a high-level description of parallel decoding (focused on reading 
rather than writing) in @jorgecarleitao's parquet2: 
https://github.com/jorgecarleitao/parquet2#higher-parallelism
   
   **Additional context**
   Mailing list https://lists.apache.org/thread/rbhfwcpd6qfk52rtzm2t6mo3fhvdpc91
   
   
   Also, https://github.com/apache/arrow-rs/issues/1711 is possibly related

