martin-g commented on code in PR #8790:
URL: https://github.com/apache/arrow-rs/pull/8790#discussion_r2510450735


##########
arrow-pyarrow/src/lib.rs:
##########
@@ -484,6 +487,120 @@ impl IntoPyArrow for ArrowArrayStreamReader {
     }
 }
 
+/// This is a convenience wrapper around `Vec<RecordBatch>` that tries to simplify conversion from
+/// and to `pyarrow.Table`.
+///
+/// This could be used in circumstances where you either want to consume a `pyarrow.Table` directly
+/// (although technically, since `pyarrow.Table` implements the ArrayStreamReader PyCapsule
+/// interface, one could also consume a `PyArrowType<ArrowArrayStreamReader>` instead) or, more
+/// importantly, where one wants to export a `pyarrow.Table` from a `Vec<RecordBatch>` from the Rust
+/// side.
+///
+/// ```ignore
+/// #[pyfunction]
+/// fn return_table(...) -> PyResult<PyArrowType<Table>> {
+///     let batches: Vec<RecordBatch>;
+///     let schema: SchemaRef;
+///     PyArrowType(Table::try_new(batches, schema).map_err(|err| err.into_py_err(py))?)
+/// }
+/// ```
+#[derive(Clone)]
+pub struct Table {
+    record_batches: Vec<RecordBatch>,
+    schema: SchemaRef,
+}
+
+impl Table {
+    pub fn try_new(
+        record_batches: Vec<RecordBatch>,
+        schema: SchemaRef,
+    ) -> Result<Self, ArrowError> {
+        /// This function was copied from `pyo3_arrow/utils.rs` for now. I don't understand yet why
+        /// this is required instead of a "normal" `schema == record_batch.schema()` check.
+        ///
+        /// TODO: Either remove this check, replace it with something already existing in `arrow-rs`
+        ///     or move it to a central `utils` location.
+        fn schema_equals(left: &SchemaRef, right: &SchemaRef) -> bool {
+            left.fields
+                .iter()
+                .zip(right.fields.iter())
+                .all(|(left_field, right_field)| {
+                    left_field.name() == right_field.name()
+                        && left_field
+                            .data_type()
+                            .equals_datatype(right_field.data_type())
+                })
+        }
+
+        for record_batch in &record_batches {
+            if !schema_equals(&schema, &record_batch.schema()) {
+                return Err(ArrowError::SchemaError(
+                    //"All record batches must have the same schema.".to_owned(),

Review Comment:
   ```suggestion
   ```



##########
arrow-pyarrow/src/lib.rs:
##########
@@ -44,17 +44,20 @@
 //! | `pyarrow.Array`             | [ArrayData]                                                         |
 //! | `pyarrow.RecordBatch`       | [RecordBatch]                                                       |
 //! | `pyarrow.RecordBatchReader` | [ArrowArrayStreamReader] / `Box<dyn RecordBatchReader + Send>` (1) |
+//! | `pyarrow.Table`             | [Table] (2)                                                         |
 //!
 //! (1) `pyarrow.RecordBatchReader` can be imported as [ArrowArrayStreamReader]. Either
 //! [ArrowArrayStreamReader] or `Box<dyn RecordBatchReader + Send>` can be exported
 //! as `pyarrow.RecordBatchReader`. (`Box<dyn RecordBatchReader + Send>` is typically
 //! easier to create.)
 //!
-//! PyArrow has the notion of chunked arrays and tables, but arrow-rs doesn't
-//! have these same concepts. A chunked table is instead represented with
-//! `Vec<RecordBatch>`. A `pyarrow.Table` can be imported to Rust by calling
-//! [pyarrow.Table.to_reader()](https://arrow.apache.org/docs/python/generated/pyarrow.Table.html#pyarrow.Table.to_reader)
-//! and then importing the reader as a [ArrowArrayStreamReader].
+//! (2) Although arrow-rs offers [Table], a convenience wrapper for [pyarrow.Table](https://arrow.apache.org/docs/python/generated/pyarrow.Table)
+//! that internally holds `Vec<RecordBatch>`, it is meant primarily for use cases where you already
+//! have `Vec<RecordBatch>` on the Rust side and want to export that in bulk as a `pyarrow.Table`.
+//! In general, it is recommended to use streaming approaches instead of dealing with data in bulk.
+//! For example, a `pyarrow.Table` (or any other object that implements the ArrayStream PyCapsule
+//! interface) can be imported to Rust through `PyArrowType<ArrowArrayStreamReader>>` instead of

Review Comment:
   ```suggestion
   //! interface) can be imported to Rust through `PyArrowType<ArrowArrayStreamReader>` instead of
   ```



##########
arrow-pyarrow/src/lib.rs:
##########
@@ -484,6 +487,120 @@ impl IntoPyArrow for ArrowArrayStreamReader {
     }
 }
 
+/// This is a convenience wrapper around `Vec<RecordBatch>` that tries to simplify conversion from
+/// and to `pyarrow.Table`.
+///
+/// This could be used in circumstances where you either want to consume a `pyarrow.Table` directly
+/// (although technically, since `pyarrow.Table` implements the ArrayStreamReader PyCapsule
+/// interface, one could also consume a `PyArrowType<ArrowArrayStreamReader>` instead) or, more
+/// importantly, where one wants to export a `pyarrow.Table` from a `Vec<RecordBatch>` from the Rust
+/// side.
+///
+/// ```ignore
+/// #[pyfunction]
+/// fn return_table(...) -> PyResult<PyArrowType<Table>> {
+///     let batches: Vec<RecordBatch>;
+///     let schema: SchemaRef;
+///     PyArrowType(Table::try_new(batches, schema).map_err(|err| err.into_py_err(py))?)
+/// }
+/// ```
+#[derive(Clone)]
+pub struct Table {
+    record_batches: Vec<RecordBatch>,
+    schema: SchemaRef,
+}
+
+impl Table {
+    pub fn try_new(
+        record_batches: Vec<RecordBatch>,
+        schema: SchemaRef,
+    ) -> Result<Self, ArrowError> {
+        /// This function was copied from `pyo3_arrow/utils.rs` for now. I don't understand yet why
+        /// this is required instead of a "normal" `schema == record_batch.schema()` check.
+        ///
+        /// TODO: Either remove this check, replace it with something already existing in `arrow-rs`
+        ///     or move it to a central `utils` location.
+        fn schema_equals(left: &SchemaRef, right: &SchemaRef) -> bool {
+            left.fields

Review Comment:
   This impl seems incorrect - the zip() operation does not check that the iterators have the same number of items; it only checks that the shorter field list matches a prefix of the longer one. So, if `right` has one field more than `left`, this function will still return `true`.
   
https://play.rust-lang.org/?version=stable&mode=debug&edition=2024&gist=95c113900129b392365cdfb3b4c2b4e6
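To make the failure mode concrete, here is a minimal standalone sketch of the same comparison pattern (plain `&str` slices stand in for the schema's fields, purely for illustration): `zip` silently truncates to the shorter iterator, so `all()` over the zipped pairs only verifies a common prefix, and comparing lengths first closes the hole.

```rust
// Buggy pattern from the review: `zip` stops at the shorter iterator,
// so this only checks that one field list is a prefix of the other.
fn prefix_equals(left: &[&str], right: &[&str]) -> bool {
    left.iter().zip(right.iter()).all(|(l, r)| l == r)
}

// Fixed variant: compare lengths first, then the zipped pairs.
fn fields_equal(left: &[&str], right: &[&str]) -> bool {
    left.len() == right.len() && left.iter().zip(right.iter()).all(|(l, r)| l == r)
}

fn main() {
    let left = ["a", "b"];
    let right = ["a", "b", "c"]; // one extra field on the right

    // The buggy check reports the schemas as equal...
    assert!(prefix_equals(&left, &right));
    // ...while the length-aware check correctly detects the mismatch.
    assert!(!fields_equal(&left, &right));
    assert!(fields_equal(&left, &left));
}
```

The same `left.len() == right.len()` guard would slot directly in front of the `zip(...).all(...)` expression in `schema_equals`.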



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
