lidavidm commented on code in PR #14151:
URL: https://github.com/apache/arrow/pull/14151#discussion_r984699345


##########
java/dataset/src/main/java/org/apache/arrow/dataset/file/JniWrapper.java:
##########
@@ -45,4 +46,21 @@ private JniWrapper() {
    */
   public native long makeFileSystemDatasetFactory(String uri, int fileFormat);
 
+  /**
+   * Write all record batches in a {@link NativeRecordBatchIterator} into files. This internally
+   * depends on C++ write API: FileSystemDataset::Write.
+   *
+   * @param itr iterator to be used for writing
+   * @param schema serialized schema of output files
+   * @param fileFormat target file format (ID)
+   * @param uri target file uri
+   * @param partitionColumns columns used to partition output files
+   * @param maxPartitions maximum partitions to be included in written files
+   * @param baseNameTemplate file name template used to make partitions. E.g. "dat_{i}", i is current partition
+   *                         ID around all written files.
+   */
+  public native void writeFromScannerToFile(CRecordBatchIterator itr, long schema_address,

Review Comment:
   Same question: why are we manually bridging data to C++, and why do we have CRecordBatchIterator at all, when we could use ArrowArrayStream instead? This should just take an ArrowArrayStream as the data source.
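
   For illustration only, here is a minimal sketch of what the Java side could look like if the native method accepted the address of an exported ArrowArrayStream instead of a CRecordBatchIterator. The native method name `writeFromStreamToFile` and its remaining parameters are hypothetical and not the signature in this PR; `ArrowArrayStream` and `Data.exportArrayStream` are the existing Arrow Java C Data Interface APIs (arrow-c-data module).

   ```java
   import org.apache.arrow.c.ArrowArrayStream;
   import org.apache.arrow.c.Data;
   import org.apache.arrow.memory.BufferAllocator;
   import org.apache.arrow.memory.RootAllocator;
   import org.apache.arrow.vector.ipc.ArrowReader;

   public final class StreamBasedWriteSketch {

     // Hypothetical JNI entry point: takes the raw address of an exported
     // ArrowArrayStream rather than a CRecordBatchIterator. The actual
     // signature in this PR differs.
     private static native void writeFromStreamToFile(long streamAddress, int fileFormat, String uri);

     // Export any ArrowReader through the C stream interface and pass the
     // stream's address across JNI; C++ can rebuild a RecordBatchReader from it.
     public static void write(ArrowReader reader, int fileFormat, String uri) {
       try (BufferAllocator allocator = new RootAllocator();
            ArrowArrayStream stream = ArrowArrayStream.allocateNew(allocator)) {
         Data.exportArrayStream(allocator, reader, stream);
         writeFromStreamToFile(stream.memoryAddress(), fileFormat, uri);
       }
     }
   }
   ```

   On the C++ side, the address can be turned back into a reader with `arrow::ImportRecordBatchReader(reinterpret_cast<ArrowArrayStream*>(streamAddress))` and fed to `FileSystemDataset::Write`, so no custom iterator bridging would be needed.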


