westonpace commented on a change in pull request #10955:
URL: https://github.com/apache/arrow/pull/10955#discussion_r692351686



##########
File path: cpp/src/arrow/dataset/file_base.h
##########
@@ -364,6 +364,28 @@ struct ARROW_DS_EXPORT FileSystemDatasetWriteOptions {
   /// {i} will be replaced by an auto incremented integer.
   std::string basename_template;
 
+  /// If greater than 0 then this will limit the maximum number of files that can be left
+  /// open. If an attempt is made to open too many files then the least recently used file
+  /// will be closed.  If this setting is set too low you may end up fragmenting your data
+  /// into many small files.
+  uint32_t max_open_files = 1024;
+
+  /// If greater than 0 then this will limit how many rows are placed in any single file.
+  uint64_t max_rows_per_file = 0;
+
+  /// If greater than 0 then the dataset writer will create a new file if a request comes
+  /// in and all existing writers for that file are busy and have at least
+  /// min_rows_per_file.  This can be used to increase performance on filesystems like S3
+  /// where the write speed may be slow but many concurrent writes are supported.  This is
+  /// a hint only and the writer may need to create smaller files to satisfy
+  /// max_open_files.
+  uint64_t min_rows_per_file = 0;

Review comment:
   I'm starting to feel a little less certain about this option as it is kind of confusing.  Also, in a lot of cases we will be writing to enough files concurrently that it won't really matter.
   
   If you were to write out one file with one million rows it would be slower than writing out four files with 250,000 rows, because S3 performance really relies on multiple concurrent streams.  From [Performance Design Patterns for Amazon S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/optimizing-performance-design-patterns.html):
   
   > Make one concurrent request for each 85–90 MB/s of desired network 
throughput. To saturate a 10 Gb/s network interface card (NIC), you might use 
about 15 concurrent requests over separate connections. You can scale up the 
concurrent requests over more connections to saturate faster NICs, such as 25 
Gb/s or 100 Gb/s NICs. 
   
   It doesn't take too many CPUs to get a parquet scan running at 500 MB/s, so at ~85 MB/s per connection (500 / 85 ≈ 6) you'd need 5 or 6 concurrent read streams and 5 or 6 concurrent write streams to keep S3 from being the bottleneck.
   
   The reason I am less certain is that the option is confusing, and setting a reasonable max_rows_per_file is probably good enough.  Consider the above example (1 million vs 250k).  If you are writing billions of rows and your max_rows_per_file is 1 million then it probably won't matter; you'll be writing out 4-5 files soon enough anyway.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

