moriyoshi opened a new issue, #42024:
URL: https://github.com/apache/arrow/issues/42024

   ### Describe the enhancement requested
   
   
[pyarrow.flight.RecordBatchStream](https://arrow.apache.org/docs/python/generated/pyarrow.flight.RecordBatchStream.html)
 can take a `pyarrow.Table` as the backing record store; when it does, it uses 
`TableBatchReader` behind the scenes to produce the chunks.
   
   While `TableBatchReader` has a `set_chunksize` method for specifying the 
maximum chunk size, there is currently no way to take advantage of it through 
the Python wrapper.
   
   My proposal is to have the `RecordBatchStream` class accept a second 
optional argument, `max_chunksize`, just like 
[`pyarrow.flight.MetadataRecordBatchWriter.write_table()`](https://arrow.apache.org/docs/dev/python/generated/pyarrow.flight.MetadataRecordBatchWriter.html#pyarrow.flight.MetadataRecordBatchWriter.write_table),
 which would ultimately invoke `TableBatchReader::set_chunksize()`.
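   For illustration, a sketch of the proposed usage and of the chunking 
semantics involved. The `max_chunksize` keyword on `RecordBatchStream` is 
hypothetical (it is the enhancement being requested), and `_load_table` is a 
made-up helper; the list-based `chunk_rows` below only mimics what 
`TableBatchReader` does with row counts:

```python
# Hypothetical do_get handler once the proposed parameter exists
# (this does NOT work with current pyarrow):
#
#     def do_get(self, context, ticket):
#         table = self._load_table(ticket)  # hypothetical helper
#         return pyarrow.flight.RecordBatchStream(table, max_chunksize=65536)

# The chunking behavior of TableBatchReader::set_chunksize(), illustrated
# with plain Python lists: each emitted batch holds at most max_chunksize
# rows, and the final batch carries the remainder.
def chunk_rows(rows, max_chunksize):
    return [rows[i:i + max_chunksize] for i in range(0, len(rows), max_chunksize)]

print(chunk_rows(list(range(10)), 4))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```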
   
   
   
   ### Component(s)
   
   Python

