jorisvandenbossche commented on code in PR #40818:
URL: https://github.com/apache/arrow/pull/40818#discussion_r1555404445
##########
python/pyarrow/table.pxi:
##########
@@ -1344,17 +1344,28 @@ cdef class ChunkedArray(_PandasConvertible):
             A capsule containing a C ArrowArrayStream struct.
         """
         cdef:
+            ChunkedArray chunked
             ArrowArrayStream* c_stream = NULL

         if requested_schema is not None:
-            out_type = DataType._import_from_c_capsule(requested_schema)
-            if self.type != out_type:
-                raise NotImplementedError("Casting to requested_schema")
+            target_type = DataType._import_from_c_capsule(requested_schema)
+
+            if target_type != self.type:
+                try:
+                    chunked = self.cast(target_type, safe=True)
Review Comment:
@paleolimbot added this on-the-fly casting for RecordBatchReader in
https://github.com/apache/arrow/pull/40070. For Table's stream export method, it
seems we convert the table to a RecordBatchReader under the hood, so Table
already supports casting chunk by chunk as well.
We could indeed do something similar for ChunkedArray; I will open an issue for
that.
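
For reference, a minimal sketch of what the chunk-by-chunk idea could look like
for ChunkedArray, assuming we wrap the chunks in a RecordBatchReader that casts
each chunk only when it is read (the helper `lazily_cast_chunks` and the column
name "values" are purely illustrative, not existing pyarrow API):

```python
import pyarrow as pa

def lazily_cast_chunks(chunked, target_type):
    # Illustrative helper: expose a ChunkedArray as a stream of record
    # batches that are cast one chunk at a time, instead of casting the
    # whole ChunkedArray eagerly as in the diff above.
    target_schema = pa.schema([pa.field("values", target_type)])

    def batches():
        for chunk in chunked.chunks:
            # Array.cast runs per chunk, so an invalid value only fails
            # when the consumer actually reads that chunk.
            yield pa.record_batch([chunk.cast(target_type)], schema=target_schema)

    return pa.RecordBatchReader.from_batches(target_schema, batches())

chunked = pa.chunked_array([[1, 2], [3, 4]], type=pa.int32())
reader = lazily_cast_chunks(chunked, pa.int64())
print(reader.read_all().column("values").type)  # int64
```

An actual implementation would presumably do this at the stream export level
rather than materializing record batches like this; the sketch only shows the
lazy, per-chunk behaviour that https://github.com/apache/arrow/pull/40070 added
for RecordBatchReader.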