[
https://issues.apache.org/jira/browse/ARROW-1089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17288574#comment-17288574
]
Weston Pace commented on ARROW-1089:
------------------------------------
Can you help me understand the goal here a bit? Is the goal to receive a
stream of bytes and write it to disk without ever deserializing into an
in-memory structure (other than something like Buffer)? That was my initial
guess reading the description. In that case it isn't so much a
synchronous/asynchronous question. You could solve this problem either way.
I'm also not sure how it is related to Parquet streaming. Parquet streaming
still involves creating a record batch in memory. It seems like this would
only be possible with something like Feather, where the on-wire format mirrors
the on-disk format (barring possible compression).
> [C++][Python] Add API to write an Arrow stream into either the stream or file
> formats on disk
> ---------------------------------------------------------------------------------------------
>
> Key: ARROW-1089
> URL: https://issues.apache.org/jira/browse/ARROW-1089
> Project: Apache Arrow
> Issue Type: New Feature
> Components: C++, Python
> Reporter: Wes McKinney
> Priority: Major
> Labels: dataset
>
> For Arrow streams with unknown size, it would be useful to be able to write
> the data to disk either as a stream or as the file format (for random access)
> with minimal overhead; i.e. we would avoid record batch IPC loading and write
> the raw messages directly to disk
--
This message was sent by Atlassian Jira
(v8.3.4#803005)