jonded94 commented on code in PR #8790:
URL: https://github.com/apache/arrow-rs/pull/8790#discussion_r2510675748
##########
arrow-pyarrow/src/lib.rs:
##########
@@ -484,6 +487,120 @@ impl IntoPyArrow for ArrowArrayStreamReader {
}
}
+/// This is a convenience wrapper around `Vec<RecordBatch>` that tries to simplify conversion from
+/// and to `pyarrow.Table`.
+///
+/// This can be used in circumstances where you either want to consume a `pyarrow.Table` directly
+/// (although technically, since `pyarrow.Table` implements the ArrayStreamReader PyCapsule
+/// interface, one could also consume a `PyArrowType<ArrowArrayStreamReader>` instead) or, more
+/// importantly, where one wants to export a `pyarrow.Table` built from a `Vec<RecordBatch>` on
+/// the Rust side.
+///
+/// ```ignore
+/// #[pyfunction]
+/// fn return_table(py: Python<'_>) -> PyResult<PyArrowType<Table>> {
+///     let batches: Vec<RecordBatch> = /* ... */;
+///     let schema: SchemaRef = /* ... */;
+///     Ok(PyArrowType(Table::try_new(batches, schema).map_err(|err| err.into_py_err(py))?))
+/// }
+/// ```
+#[derive(Clone)]
+pub struct Table {
+ record_batches: Vec<RecordBatch>,
+ schema: SchemaRef,
+}
+
+impl Table {
+ pub fn try_new(
+ record_batches: Vec<RecordBatch>,
+ schema: SchemaRef,
+ ) -> Result<Self, ArrowError> {
+ /// This function was copied from `pyo3_arrow/utils.rs` for now. I don't understand yet
+ /// why this is required instead of a "normal" `schema == record_batch.schema()` check.
+ ///
+ /// TODO: Either remove this check, replace it with something already existing in
+ /// `arrow-rs`, or move it to a central `utils` location.
+ fn schema_equals(left: &SchemaRef, right: &SchemaRef) -> bool {
+ left.fields
Review Comment:
In principle, instead of using this schema check method at all, I'd much
rather have the underlying issue solved by understanding why either the
ArrowStreamReader PyCapsule interface of `pyarrow.Table` or the Rust `Box<dyn
RecordBatchReader>` side seems to swallow up `RecordBatch` metadata.
If that issue were fixed, this function could be dropped again and a normal
`schema == record_batch.schema()` check could be used. This function is only
relevant if it's *expected* that a `RecordBatch` coming from a stream reader
no longer carries its metadata, because then we would have to do this custom
schema equality check.
But in general your comment would also be relevant for @kylebarron, as he is
using this function as-is in his crate `pyo3-arrow`.
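To make the point concrete, here is a minimal, self-contained sketch of what such a metadata-insensitive comparison does. Note that `Field` and `Schema` below are simplified stand-ins invented for illustration (not the `arrow_schema` types, and not the code from this PR): two schemas compare equal as long as field names, types, and nullability match, even when one side has lost its metadata on the way through a stream reader.

```rust
use std::collections::HashMap;

// Hypothetical, simplified stand-ins for schema types, just to
// illustrate the comparison logic discussed above.
#[derive(Debug)]
struct Field {
    name: String,
    data_type: String, // stand-in for a real `DataType`
    nullable: bool,
    metadata: HashMap<String, String>,
}

#[derive(Debug)]
struct Schema {
    fields: Vec<Field>,
}

/// Compare two schemas by field name, type, and nullability only,
/// deliberately ignoring field-level metadata. A batch whose metadata
/// was dropped by a stream reader still compares equal to the original.
fn schema_equals_ignoring_metadata(left: &Schema, right: &Schema) -> bool {
    left.fields.len() == right.fields.len()
        && left.fields.iter().zip(&right.fields).all(|(l, r)| {
            l.name == r.name && l.data_type == r.data_type && l.nullable == r.nullable
        })
}

fn main() {
    let with_meta = Schema {
        fields: vec![Field {
            name: "a".into(),
            data_type: "Int64".into(),
            nullable: false,
            metadata: HashMap::from([("key".into(), "value".into())]),
        }],
    };
    let without_meta = Schema {
        fields: vec![Field {
            name: "a".into(),
            data_type: "Int64".into(),
            nullable: false,
            metadata: HashMap::new(),
        }],
    };
    // Metadata differs, but the metadata-insensitive check still passes,
    // whereas a strict `==` on full schemas (including metadata) would not.
    assert!(schema_equals_ignoring_metadata(&with_meta, &without_meta));
    println!("schemas considered equal despite differing metadata");
}
```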
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]