mapleFU commented on code in PR #37896:
URL: https://github.com/apache/arrow/pull/37896#discussion_r1358346261


##########
cpp/src/arrow/record_batch.h:
##########
@@ -350,4 +350,13 @@ class ARROW_EXPORT RecordBatchReader {
      Iterator<std::shared_ptr<RecordBatch>> batches, std::shared_ptr<Schema> schema);
 };
 
+/// \brief Concatenate record batches
+///
+/// \param[in] batches a vector of record batches to be concatenated
+/// \param[in] pool memory to store the result will be allocated from this memory pool
+/// \return the concatenated record batch

Review Comment:
   Nit: can we port some of the comment from `ConcatenateTables`? (Note that, unlike `ConcatenateTables`, there is no `ConcatenateTablesOptions` here, and the input schemas are required to be equal.)
   
   ```c++
   /// \brief Construct a new table from multiple input tables.
   ///
   /// The new table is assembled from existing column chunks without copying,
   /// if schemas are identical. If schemas do not match exactly and
   /// unify_schemas is enabled in options (off by default), an attempt is
   /// made to unify them, and then column chunks are converted to their
   /// respective unified datatype, which will probably incur a copy.
   /// :func:`arrow::PromoteTableToSchema` is used to unify schemas.
   ///
   /// Tables are concatenated in order they are provided in and the order of
   /// rows within tables will be preserved.
   ///
   /// \param[in] tables a std::vector of Tables to be concatenated
   /// \param[in] options specify how to unify schema of input tables
   /// \param[in] memory_pool MemoryPool to be used if null-filled arrays need to
   /// be created or if existing column chunks need to endure type conversion
   /// \return new Table
   ```
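
   For reference, a minimal usage sketch contrasting the two APIs. The `ConcatenateRecordBatches` name and exact signature below are assumed from this PR's doc comment, since the declaration itself is not part of the quoted hunk:

   ```c++
   #include <memory>
   #include <vector>

   #include "arrow/api.h"

   // Existing API: ConcatenateTables can optionally unify differing schemas.
   arrow::Result<std::shared_ptr<arrow::Table>> ConcatTablesExample(
       const std::vector<std::shared_ptr<arrow::Table>>& tables) {
     arrow::ConcatenateTablesOptions options;
     options.unify_schemas = true;  // off by default; unification may copy chunks
     return arrow::ConcatenateTables(tables, options, arrow::default_memory_pool());
   }

   // API sketched in this PR (name/signature assumed): no options struct, and
   // all input batches must share an identical schema.
   arrow::Result<std::shared_ptr<arrow::RecordBatch>> ConcatBatchesExample(
       const std::vector<std::shared_ptr<arrow::RecordBatch>>& batches) {
     return arrow::ConcatenateRecordBatches(batches, arrow::default_memory_pool());
   }
   ```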


