pitrou commented on code in PR #36377:
URL: https://github.com/apache/arrow/pull/36377#discussion_r1268110361


##########
cpp/src/parquet/arrow/writer.cc:
##########
@@ -469,6 +415,30 @@ class FileWriterImpl : public FileWriter {
     return writer_->metadata();
   }
 
+  Status WriteTableUnbuffered(const Table& table, int64_t chunk_size) {
+    auto WriteRowGroup = [&](int64_t offset, int64_t size) {
+      RETURN_NOT_OK(NewRowGroup(size));
+      for (int i = 0; i < table.num_columns(); i++) {
+        RETURN_NOT_OK(WriteColumnChunk(table.column(i), offset, size));
+      }
+      return Status::OK();
+    };
+
+    if (table.num_rows() == 0) {
+      // Append a row group with 0 rows
+      RETURN_NOT_OK_ELSE(WriteRowGroup(0, 0), PARQUET_IGNORE_NOT_OK(Close()));
+      return Status::OK();
+    }
+
+    for (int chunk = 0; chunk * chunk_size < table.num_rows(); chunk++) {
+      int64_t offset = chunk * chunk_size;
+      RETURN_NOT_OK_ELSE(
+          WriteRowGroup(offset, std::min(chunk_size, table.num_rows() - offset)),
+          PARQUET_IGNORE_NOT_OK(Close()));

Review Comment:
   I know you're just moving code around, but do you know why `Close()` is being called here? We shouldn't close the file, since we have not finished writing all the chunks.
   
   Also, I don't understand why errors are silenced using `PARQUET_IGNORE_NOT_OK`. I don't think this is a good idea.
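   
   To illustrate, a minimal sketch of what the loop could look like if errors were simply propagated to the caller and the file left open on failure (this only mirrors the structure of the quoted diff; it is not a claim about what the final fix in this PR should be):
   
   ```cpp
   Status WriteTableUnbuffered(const Table& table, int64_t chunk_size) {
     auto WriteRowGroup = [&](int64_t offset, int64_t size) {
       RETURN_NOT_OK(NewRowGroup(size));
       for (int i = 0; i < table.num_columns(); i++) {
         RETURN_NOT_OK(WriteColumnChunk(table.column(i), offset, size));
       }
       return Status::OK();
     };
   
     if (table.num_rows() == 0) {
       // Append a single row group with 0 rows.
       return WriteRowGroup(0, 0);
     }
   
     for (int chunk = 0; chunk * chunk_size < table.num_rows(); chunk++) {
       int64_t offset = chunk * chunk_size;
       // Propagate any error directly; leave closing the file to the caller
       // instead of calling Close() (and swallowing its status) mid-write.
       RETURN_NOT_OK(
           WriteRowGroup(offset, std::min(chunk_size, table.num_rows() - offset)));
     }
     return Status::OK();
   }
   ```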


