[ https://issues.apache.org/jira/browse/ARROW-504?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15834694#comment-15834694 ]
Matthew Rocklin edited comment on ARROW-504 at 1/23/17 2:53 PM:
----------------------------------------------------------------
At the moment I don't have any active use cases for this. We tend to handle
pandas dataframes as atomic blocks of data.
However, generally I agree that streaming chunks in a more granular way is
probably a better way to go. Non-blocking IO quickly becomes blocking IO if
data starts overflowing local buffers. This is the sort of technology that
might influence future design decisions.
From a pure Dask perspective, my ideal serialization interface is Python object
-> iterator of memoryview objects.
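A minimal sketch of that shape, assuming a hypothetical {{serialize}} helper
(the name, the pickle round-trip, and the chunk size are all illustrative; a
real implementation would yield views over the frame's underlying buffers
rather than a pickled copy):

{code:python}
import pickle

import pandas as pd


def serialize(obj, chunksize=2 ** 16):
    # Hypothetical interface: Python object -> iterator of memoryview
    # objects. Pickling stands in for a real columnar serializer here;
    # the point is the shape of the API, not the encoding.
    payload = memoryview(pickle.dumps(obj))
    for start in range(0, len(payload), chunksize):
        yield payload[start:start + chunksize]


chunks = serialize(pd.DataFrame({"x": [1, 2, 3]}))
total_bytes = sum(len(c) for c in chunks)
{code}

An interface like this lets the network layer ship chunks as they are
produced instead of first materializing one large contiguous buffer.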
> [Python] Add adapter to write pandas.DataFrame in user-selected chunk size to
> streaming format
> ----------------------------------------------------------------------------------------------
>
> Key: ARROW-504
> URL: https://issues.apache.org/jira/browse/ARROW-504
> Project: Apache Arrow
> Issue Type: New Feature
> Reporter: Wes McKinney
>
> While we can convert a {{pandas.DataFrame}} to a single (arbitrarily large)
> {{arrow::RecordBatch}}, it is not easy to create multiple small record
> batches -- we could do so in a streaming fashion and immediately write them
> into an {{arrow::io::OutputStream}}.
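For illustration, a minimal sketch of what such an adapter could look like
from Python, written against the pyarrow APIs that exist today
({{Schema.from_pandas}}, {{RecordBatch.from_pandas}}, {{pa.ipc.new_stream}});
these names postdate this issue, and the chunk size here is arbitrary:

{code:python}
import pandas as pd
import pyarrow as pa

df = pd.DataFrame({"x": range(1_000_000), "y": 1.5})

schema = pa.Schema.from_pandas(df, preserve_index=False)
sink = pa.BufferOutputStream()
chunksize = 64 * 1024  # user-selected rows per record batch

# Slice the frame and write each slice as its own record batch in the
# Arrow streaming format, rather than one arbitrarily large batch.
with pa.ipc.new_stream(sink, schema) as writer:
    for start in range(0, len(df), chunksize):
        chunk = df.iloc[start:start + chunksize]
        writer.write_batch(
            pa.RecordBatch.from_pandas(chunk, schema=schema,
                                       preserve_index=False))

buf = sink.getvalue()  # complete stream of many small record batches
{code}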