thisisnic commented on a change in pull request #10141:
URL: https://github.com/apache/arrow/pull/10141#discussion_r624049352



##########
File path: r/R/csv.R
##########
@@ -585,3 +613,55 @@ readr_to_csv_convert_options <- function(na,
     include_columns = include_columns
   )
 }
+
+#' Write CSV file to disk
+#'
+#' @param x `data.frame`, [RecordBatch], or [Table]
+#' @param sink A string file path, URI, or [OutputStream], or path in a file
+#' system (`SubTreeFileSystem`)
+#' @param include_header Whether to write an initial header line with column names
+#' @param batch_size Maximum number of rows processed at a time. Default is 1024.
+#'
+#' @return The input `x`, invisibly. Note that if `sink` is an [OutputStream],
+#' the stream will be left open.
+#' @export
+#' @examples
+#' \donttest{
+#' tf <- tempfile()
+#' on.exit(unlink(tf))
+#' write_csv_arrow(mtcars, tf)
+#' }
+#' @include arrow-package.R
+write_csv_arrow <- function(x,
+                            sink,
+                            include_header = TRUE,
+                            batch_size = 1024L) {
+  # Handle and validate options before touching data
+  batch_size <- as.integer(batch_size)
+  assert_that(batch_size > 0)

Review comment:
       I get the error below. Shall I remove the assertion, since invalid values are handled at the C++ level, or leave it in, since `assert_that` gives a cleaner error message?
   
   ```
   Error: Invalid: Negative buffer resize: -40
   /home/nic2/arrow/cpp/src/arrow/buffer.cc:262  buffer->Resize(size)
   /home/nic2/arrow/cpp/src/arrow/csv/writer.cc:337  AllocateResizableBuffer(options.batch_size * schema_->num_fields() * kColumnSizeGuess, pool_)
   /home/nic2/arrow/cpp/src/arrow/csv/writer.cc:315  PrepareForContentsWrite(options, out)
   ```
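
   For comparison, a minimal sketch of what the R-side check reports (assuming the `assertthat` package is attached; the variable and value here are illustrative):

   ```r
   library(assertthat)

   batch_size <- as.integer(-1)  # illustrative invalid value

   # assert_that() names the failed condition in its error message,
   # which is friendlier than the C++ buffer-resize error above:
   msg <- tryCatch(
     assert_that(batch_size > 0),
     error = function(e) conditionMessage(e)
   )
   print(msg)
   ```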




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

