jorisvandenbossche commented on a change in pull request #10955:
URL: https://github.com/apache/arrow/pull/10955#discussion_r692352068
##########
File path: cpp/src/arrow/dataset/file_base.h
##########
@@ -364,6 +364,28 @@ struct ARROW_DS_EXPORT FileSystemDatasetWriteOptions {
/// {i} will be replaced by an auto incremented integer.
std::string basename_template;
+ /// If greater than 0 then this will limit the maximum number of files that can be
+ /// left open. If an attempt is made to open too many files then the least recently
+ /// used file will be closed. If this setting is set too low you may end up
+ /// fragmenting your data into many small files.
+ uint32_t max_open_files = 1024;
+
+ /// If greater than 0 then this will limit how many rows are placed in any single file.
+ uint64_t max_rows_per_file = 0;
+
+ /// If greater than 0 then the dataset writer will create a new file if a request
+ /// comes in and all existing writers for that file are busy and have at least
+ /// min_rows_per_file rows. This can be used to increase performance on filesystems
+ /// like S3 where the write speed may be slow but many concurrent writes are
+ /// supported. This is a hint only and the writer may need to create smaller files
+ /// to satisfy max_open_files.
+ uint64_t min_rows_per_file = 0;
+
+ /// If true then the write will delete all existing files from a partition the first
+ /// time it is written to. This delete only affects partitions that have at least one
+ /// file written to them.
+ bool purge_modified_partition = false;
Review comment:
Ah, sorry, I misread :) It's indeed clearly still deleting data with
this option. Adding an option to error instead sounds like a good idea.
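
For reference, a minimal sketch of how these options might be set once the PR
lands. The four new field names are taken straight from the diff above; the
surrounding setup (filesystem, partitioning, format, scanner) uses the existing
FileSystemDatasetWriteOptions / FileSystemDataset::Write API, and the specific
values and the WriteExample helper are illustrative only, not part of this PR:

    #include <arrow/dataset/api.h>
    #include <arrow/filesystem/api.h>

    namespace ds = arrow::dataset;

    arrow::Status WriteExample(std::shared_ptr<ds::Scanner> scanner,
                               std::shared_ptr<arrow::fs::FileSystem> fs) {
      ds::FileSystemDatasetWriteOptions write_options;
      write_options.filesystem = std::move(fs);
      write_options.base_dir = "bucket/output";
      write_options.basename_template = "part-{i}.parquet";
      write_options.partitioning = ds::Partitioning::Default();
      write_options.file_write_options =
          std::make_shared<ds::ParquetFileFormat>()->DefaultWriteOptions();
      // Fields proposed in this PR (names as they appear in the diff):
      write_options.max_open_files = 512;         // close LRU writer when over 512 open files
      write_options.max_rows_per_file = 1 << 20;  // roll over to a new file past ~1M rows
      write_options.min_rows_per_file = 1 << 16;  // hint: prefer extra files when writers are busy
      write_options.purge_modified_partition = false;  // keep pre-existing files in place
      return ds::FileSystemDataset::Write(write_options, std::move(scanner));
    }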
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]