[
https://issues.apache.org/jira/browse/ARROW-12970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17376799#comment-17376799
]
Grant Williams commented on ARROW-12970:
----------------------------------------
I think natively returning tuples (or at least having the option to do so)
makes a lot of sense from a user perspective. I would guess the use case for a
row iterator will be most similar to a database cursor or a Result/RowProxy in
SQLAlchemy. Having the data returned in a tuple (or tuple-like) format would
make integration really nice in codebases that mix data sources.
Personally, I think the row accessor makes the most sense on RecordBatches,
since it's already pretty straightforward to slice rows from Tables as tuples,
whereas a conversion like:
{code:python}
from itertools import chain
yield from chain.from_iterable(
    zip(*map(lambda col: col.to_pylist(), batch.columns))
    for batch in record_batches){code}
feels clunky (and inefficient).
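Just to make the shape of the idea concrete, here is a rough sketch of what such
a helper could look like today on top of the existing public API (the name
iter_row_tuples is purely illustrative, not a proposed or existing pyarrow
function); it still materializes each column as a Python list per batch, which
is exactly the overhead a native accessor could avoid:
{code:python}
import pyarrow as pa


def iter_row_tuples(table: pa.Table):
    """Yield each row of a Table as a plain tuple (illustrative sketch only)."""
    for batch in table.to_batches():
        # Materialize every column of the batch as a Python list,
        # then zip the lists together so each element is one row.
        yield from zip(*(col.to_pylist() for col in batch.columns))


# Example:
table = pa.table({"a": [1, 2, 3], "b": ["x", "y", "z"]})
for row in iter_row_tuples(table):
    print(row)  # (1, 'x') / (2, 'y') / (3, 'z')
{code}
A native implementation could presumably build the tuples directly from the
Arrow buffers instead of going through full per-column Python lists.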
> [Python] Efficient "row accessor" for a pyarrow RecordBatch / Table
> -------------------------------------------------------------------
>
> Key: ARROW-12970
> URL: https://issues.apache.org/jira/browse/ARROW-12970
> Project: Apache Arrow
> Issue Type: New Feature
> Components: Python
> Reporter: Luke Higgins
> Priority: Minor
> Fix For: 6.0.0
>
>
> It would be nice to have a row accessor for a Table akin to
> pandas.DataFrame.itertuples.
> I have a lot of code where I am converting a parquet file to pandas just to
> have access to the rows through iterating with itertuples. Having this
> ability in pyarrow natively would be a nice feature and would avoid memory
> copy in the pandas conversion.
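For reference, the pandas-based workaround described in the report boils down to
something like this (the file path and the process() call are placeholders, not
taken from the report):
{code:python}
import pyarrow.parquet as pq

# Placeholder path; the report does not name a specific file.
table = pq.read_table("data.parquet")

# Current workaround: convert the whole Table to pandas (copying data for
# most column types) just to get row-wise iteration via itertuples.
for row in table.to_pandas().itertuples(index=False):
    process(row)  # placeholder for whatever per-row logic consumes the data
{code}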
--
This message was sent by Atlassian Jira
(v8.3.4#803005)