[
https://issues.apache.org/jira/browse/ARROW-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Rok Mihevc updated ARROW-1660:
------------------------------
External issue URL: https://github.com/apache/arrow/issues/17668
> [Python] pandas field values are messed up across rows
> ------------------------------------------------------
>
> Key: ARROW-1660
> URL: https://issues.apache.org/jira/browse/ARROW-1660
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.7.1
> Environment: 4.4.0-72-generic #93-Ubuntu SMP x86_64, python3
> Reporter: MIkhail Osckin
> Assignee: Wes McKinney
> Priority: Major
>
> I have the following Scala case class to store sparse matrix data so that I
> can read it back later using Python:
> {code:scala}
> case class CooVector(
>     id: Int,
>     row_ids: Seq[Int],
>     rowsIdx: Seq[Int],
>     colIdx: Seq[Int],
>     data: Seq[Double])
> {code}
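> For reference, the Arrow schema that the Spark-written Parquet files roughly map to would look like the sketch below (field types are assumed from the Scala declaration; Spark's encoder may use int64 for the integer columns):
> {code:python}
> import pyarrow as pa
>
> # Sketch of the assumed Arrow schema: one scalar column plus four
> # variable-length list columns, mirroring the CooVector case class.
> coo_vector_schema = pa.schema([
>     pa.field("id", pa.int32()),
>     pa.field("row_ids", pa.list_(pa.int32())),
>     pa.field("rowsIdx", pa.list_(pa.int32())),
>     pa.field("colIdx", pa.list_(pa.int32())),
>     pa.field("data", pa.list_(pa.float64())),
> ])
> {code}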
> I save a dataset of this type to multiple Parquet files using Spark and then
> read it with pyarrow.parquet, converting the result to a pandas DataFrame.
> The problem I have is that some values end up in the wrong rows; for example,
> row_ids might end up in the wrong CooVector row. I have no idea what the
> reason is, but it might be related to the fact that the fields are of
> variable sizes. Everything is correct if I read the data back using Spark.
> I also checked the to_pydict method and its result is correct, so the problem
> seems to be somewhere in the to_pandas method.
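> A minimal sketch of the reproduction, assuming the Spark output is a directory of Parquet files at the hypothetical path "coo_vectors/":
> {code:python}
> import pyarrow.parquet as pq
>
> # Read all Parquet files written by Spark into a single Arrow table.
> table = pq.ParquetDataset("coo_vectors/").read()
>
> # to_pydict() returns the values attached to the correct rows ...
> as_dict = table.to_pydict()
>
> # ... while to_pandas() shows list values shuffled across rows.
> df = table.to_pandas()
>
> # Cross-check one column between the two conversions.
> print(as_dict["row_ids"][:5])
> print(df["row_ids"].head())
> {code}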
--
This message was sent by Atlassian Jira
(v8.20.10#820010)