jorisvandenbossche commented on a change in pull request #10955:
URL: https://github.com/apache/arrow/pull/10955#discussion_r692070088



##########
File path: cpp/src/arrow/dataset/file_base.h
##########
@@ -364,6 +364,28 @@ struct ARROW_DS_EXPORT FileSystemDatasetWriteOptions {
   /// {i} will be replaced by an auto incremented integer.
   std::string basename_template;
 
+  /// If greater than 0 then this will limit the maximum number of files that
+  /// can be left open. If an attempt is made to open too many files then the
+  /// least recently used file will be closed.  If this setting is too low you
+  /// may end up fragmenting your data into many small files.
+  uint32_t max_open_files = 1024;
+
+  /// If greater than 0 then this will limit how many rows are placed in any
+  /// single file.
+  uint64_t max_rows_per_file = 0;
+
+  /// If greater than 0 then the dataset writer will create a new file if a
+  /// request comes in and all existing writers for that file are busy and have
+  /// at least min_rows_per_file rows.  This can be used to increase performance
+  /// on filesystems like S3 where the write speed may be slow but many
+  /// concurrent writes are supported.  This is a hint only and the writer may
+  /// need to create smaller files to satisfy max_open_files.
+  uint64_t min_rows_per_file = 0;
+
+  /// If true then the write will delete all files from a partition when it is
+  /// first written to.  This delete will only affect partitions that have at
+  /// least one file written to them.
+  bool purge_modified_partition = false;

Review comment:
       So this basically allows appending to a dataset (e.g. adding new days, if
   it was partitioned per day), without the risk of overwriting existing data?
   (Seems like a nice option; should it be the default?)
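
   For illustration, a minimal sketch of how a caller might exercise these new
   options (a sketch only: it assumes the fields land as declared in this hunk,
   and the filesystem/partitioning/Parquet setup just follows the existing
   `FileSystemDataset::Write` API; the `day` column is an invented example):

   ```cpp
   #include <memory>

   #include "arrow/dataset/file_base.h"
   #include "arrow/dataset/file_parquet.h"
   #include "arrow/dataset/partition.h"
   #include "arrow/dataset/scanner.h"

   arrow::Status WriteWithLimits(std::shared_ptr<arrow::dataset::Scanner> scanner,
                                 std::shared_ptr<arrow::fs::FileSystem> fs) {
     arrow::dataset::FileSystemDatasetWriteOptions options;
     options.filesystem = std::move(fs);
     options.base_dir = "dataset_root";
     options.basename_template = "part-{i}.parquet";
     options.partitioning = std::make_shared<arrow::dataset::HivePartitioning>(
         arrow::schema({arrow::field("day", arrow::utf8())}));
     options.file_write_options =
         arrow::dataset::ParquetFileFormat().DefaultWriteOptions();
     options.max_open_files = 512;         // LRU-close writers beyond this count
     options.max_rows_per_file = 1 << 20;  // roll to a new file after ~1M rows
     // Proposed flag: clear a partition on its first write instead of appending,
     // so re-running one day's job replaces only that day's files.
     options.purge_modified_partition = true;
     return arrow::dataset::FileSystemDataset::Write(options, std::move(scanner));
   }
   ```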

##########
File path: cpp/src/arrow/dataset/dataset_writer.h
##########
@@ -0,0 +1,89 @@
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+#pragma once
+
+#include <string>
+
+#include "arrow/dataset/file_base.h"
+#include "arrow/record_batch.h"
+#include "arrow/status.h"
+#include "arrow/util/future.h"
+
+namespace arrow {
+namespace dataset {
+
+constexpr uint64_t kDefaultDatasetWriterMaxRowsQueued = 64 * 1024 * 1024;
+
+/// \brief Utility class that manages a set of writers to different paths
+///
+/// Writers may be closed and reopened (and a new file created) based on the
+/// dataset write options (for example, min_rows_per_file or max_open_files)
+///
+/// The dataset writer enforces its own back pressure based on the # of rows (as
+/// opposed to the # of batches, which is how it is typically enforced elsewhere)
+/// and the # of files.
+class DatasetWriter {
+ public:
+  /// \brief Creates a dataset writer
+  /// max_rows_queued represents the max number of rows allowed in the dataset
+  /// writer at any given time.
+  DatasetWriter(FileSystemDatasetWriteOptions write_options,
+                std::shared_ptr<Schema> schema,
+                uint64_t max_rows_queued = kDefaultDatasetWriterMaxRowsQueued);
+
+  ~DatasetWriter();
+
+  /// \brief Writes a batch to the dataset
+  /// \param[in] directory The directory to write to
+  ///
+  /// Note: The written filename will be {directory}/{filename_factory(i)} where
+  /// i is a counter controlled by `max_open_files` and `max_rows_per_file`
+  ///
+  /// If multiple WriteRecordBatch calls arrive with the same `directory` then
+  /// the batches may be written to the same file.

Review comment:
       Could it be an option that this does not happen (each new batch is a new
   file)? As an alternative way to control the files, instead of
   `max_rows_per_file`.
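
   To make the lifecycle concrete, a hedged sketch of driving the proposed
   writer (the `WriteRecordBatch` signature is inferred from the doc comment
   above, and the `Finish()` call is an assumption; neither declaration appears
   in this hunk):

   ```cpp
   #include <memory>
   #include <vector>

   #include "arrow/dataset/dataset_writer.h"

   arrow::Status WriteBatches(
       arrow::dataset::FileSystemDatasetWriteOptions write_options,
       std::shared_ptr<arrow::Schema> schema,
       const std::vector<std::shared_ptr<arrow::RecordBatch>>& batches) {
     arrow::dataset::DatasetWriter writer(std::move(write_options),
                                          std::move(schema));
     for (const auto& batch : batches) {
       // Batches sharing a directory may be appended to the same open file,
       // subject to max_open_files / max_rows_per_file; back pressure applies
       // once the queued row count exceeds max_rows_queued.
       ARROW_RETURN_NOT_OK(
           writer.WriteRecordBatch(batch, /*directory=*/"day=2021-08-18")
               .status());  // assumed Future-returning signature
     }
     return writer.Finish().status();  // assumed finalization hook
   }
   ```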

##########
File path: cpp/src/arrow/dataset/file_base.h
##########
@@ -364,6 +364,28 @@ struct ARROW_DS_EXPORT FileSystemDatasetWriteOptions {
   /// {i} will be replaced by an auto incremented integer.
   std::string basename_template;
 
+  /// If greater than 0 then this will limit the maximum number of files that
+  /// can be left open. If an attempt is made to open too many files then the
+  /// least recently used file will be closed.  If this setting is too low you
+  /// may end up fragmenting your data into many small files.
+  uint32_t max_open_files = 1024;
+
+  /// If greater than 0 then this will limit how many rows are placed in any
+  /// single file.
+  uint64_t max_rows_per_file = 0;
+
+  /// If greater than 0 then the dataset writer will create a new file if a
+  /// request comes in and all existing writers for that file are busy and have
+  /// at least min_rows_per_file rows.  This can be used to increase performance
+  /// on filesystems like S3 where the write speed may be slow but many
+  /// concurrent writes are supported.  This is a hint only and the writer may
+  /// need to create smaller files to satisfy max_open_files.
+  uint64_t min_rows_per_file = 0;

Review comment:
       What's the default behaviour here?
   I don't fully understand how setting this helps increase performance on S3. I
   would expect, based on your description of the characteristics of S3, that it
   helps to write more, smaller files to S3 (so they can be written concurrently).
   Or does it help S3 by setting this to a relatively low number, where the
   default is to always wait on a busy writer to finish so the batch can be added
   to an existing file being written (resulting in fewer, larger files)?
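
   For what it's worth, here is an illustrative reading of the interaction the
   doc comment describes (names and structure are my assumption, not the
   writer's actual internals): with the default of 0 the early-spill branch is
   disabled, so a busy writer is waited on and batches accumulate into fewer,
   larger files.

   ```cpp
   #include <cstdint>

   // Illustrative sketch only, not the real dataset writer internals.
   bool ShouldOpenNewFile(bool all_writers_busy, uint64_t smallest_open_file_rows,
                          uint64_t min_rows_per_file) {
     if (min_rows_per_file == 0) return false;  // default: wait on the busy writer
     if (!all_writers_busy) return false;       // an idle writer takes the batch
     // Only spill to a new file once every open file already holds "enough"
     // rows; otherwise keep filling the existing files.
     return smallest_open_file_rows >= min_rows_per_file;
   }
   ```

   On S3 this would mean extra concurrent uploads are started only after each
   in-flight file reaches min_rows_per_file, while max_open_files still caps the
   total (closing the least recently used file if needed, which is why the doc
   comment calls it a hint).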



