devinjdangelo commented on code in PR #7655:
URL: https://github.com/apache/arrow-datafusion/pull/7655#discussion_r1371675277


##########
datafusion/common/src/config.rs:
##########
@@ -377,12 +377,24 @@ config_namespace! {
         pub bloom_filter_ndv: Option<u64>, default = None
 
         /// Controls whether DataFusion will attempt to speed up writing
-        /// large parquet files by first writing multiple smaller files
-        /// and then stitching them together into a single large file.
-        /// This will result in faster write speeds, but higher memory usage.
-        /// Also currently unsupported are bloom filters and column indexes
-        /// when single_file_parallelism is enabled.
-        pub allow_single_file_parallelism: bool, default = false
+        /// parquet files by serializing them in parallel. Each column
+        /// parquet files by serializing them in parallel. Each column
+        /// in each row group in each output file is serialized in parallel,
+        /// leveraging a maximum possible core count of n_files*n_row_groups*n_columns.
+        pub allow_single_file_parallelism: bool, default = true
+
+        /// If allow_single_file_parallelism=true, this setting allows
+        /// applying backpressure to prevent working on too many row groups in
+        /// parallel in case of limited memory or slow I/O speed causing
+        /// OOM errors. Lowering this number limits memory growth at the cost
+        /// of potentially slower write speeds.
+        pub maximum_parallel_row_group_writers: usize, default = 16
+
+        /// If allow_single_file_parallelism=true, this setting allows
+        /// applying backpressure to prevent too many RecordBatches building
+        /// up in memory in case the parallel writers cannot consume them fast
+        /// enough. Lowering this number limits memory growth at the cost
+        /// of potentially lower write speeds.
+        pub maximum_buffered_record_batches_per_stream: usize, default = 200

Review Comment:
   Lowering the buffer size to 2 would save a few hundred MB of memory, but at the
cost of more than doubling the serialization time. The decision comes down to
which is the better default behavior. IMO, most users can spare the memory for
the performance benefit, and those that can't can always lower the buffer size.
We could instead default to a low buffer size (favoring minimum memory usage over
execution time), and I could update the docs to suggest increasing the buffer for
significant performance gains on systems with many cores. Here are the numbers I
gathered using the script in the description:
   
   ### Parallel Parquet Writer, Varying Row Group Buffer Sizes
   
   | Metric | Buffer Size=2 | Buffer Size=64 | Buffer Size=128 |
   | -- | -- | -- | -- |
   | Execution Time (s) | 62.6 | 35.6 | 25.02 |
   | Peak Memory Usage (MB) | 495.0 | 606.8 | 712.05 |
   
   For comparison, the non-parallelized writer takes 155s with a peak memory usage
of 449.7MB for the same task.
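   
   As a side note, here is a minimal sketch of how an end user could tune these
knobs at runtime if they prefer a different memory/speed tradeoff. It assumes the
new options are exposed under the existing `datafusion.execution.parquet`
namespace (like the surrounding settings in this file) and that `SET` statements
are executed through `SessionContext::sql`; treat the key names as illustrative
rather than final.
   
   ```rust
   // Sketch only: key names assume the new options land under
   // `datafusion.execution.parquet`, alongside settings such as bloom_filter_ndv.
   use datafusion::error::Result;
   use datafusion::prelude::SessionContext;
   
   #[tokio::main]
   async fn main() -> Result<()> {
       let ctx = SessionContext::new();
   
       // Trade some write speed for lower peak memory by shrinking the
       // per-stream RecordBatch buffer and the row group writer parallelism.
       ctx.sql("SET datafusion.execution.parquet.maximum_buffered_record_batches_per_stream = 2")
           .await?;
       ctx.sql("SET datafusion.execution.parquet.maximum_parallel_row_group_writers = 4")
           .await?;
   
       // Or opt out of parallel serialization entirely.
       ctx.sql("SET datafusion.execution.parquet.allow_single_file_parallelism = false")
           .await?;
   
       Ok(())
   }
   ```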



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
