[ https://issues.apache.org/jira/browse/ARROW-1660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

MIkhail Osckin updated ARROW-1660:
----------------------------------
    Description: 
I have the following Scala case class for storing sparse matrix data that is 
later read from Python:

{code:scala}
// Sparse matrix data in coordinate (COO) form
case class CooVector(
    id: Int,
    row_ids: Seq[Int],
    rowsIdx: Seq[Int],
    colIdx: Seq[Int],
    data: Seq[Double])
{code}

I save a dataset of this type to multiple Parquet files using Spark, then read 
it back with pyarrow.parquet and convert the result to a pandas DataFrame.
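
For reference, the read side looks roughly like the sketch below; the directory 
name is hypothetical and this is not the exact script, just its general shape.

{code:python}
# Minimal sketch of the read path, assuming the Spark job wrote its
# part files into a directory such as /data/coo_vectors (hypothetical path).
import pyarrow.parquet as pq

dataset = pq.ParquetDataset('/data/coo_vectors')  # all part-*.parquet files in the directory
table = dataset.read()                            # pyarrow.Table with the CooVector columns
df = table.to_pandas()                            # conversion where the rows get mixed up
{code}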

The problem is that some values end up in the wrong rows; for example, a 
row_ids value might end up in the wrong CooVector row. I have no idea what the 
reason is, but it might be related to the fact that these fields are of 
variable length. Everything is correct if I read the same files with Spark. I 
also checked the to_pydict method and its result is correct, so the problem 
seems to be somewhere in the to_pandas method.
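
Roughly, the check that points at to_pandas looks like this; it is only an 
illustrative sketch, using the column names from the case class above and the 
table from the previous snippet.

{code:python}
# Hypothetical consistency check: compare one list column row by row
# between to_pydict() (values look correct) and to_pandas() (values shifted).
pydict = table.to_pydict()   # dict of column name -> list of Python values
df = table.to_pandas()       # pandas DataFrame built from the same table

for i, expected in enumerate(pydict['row_ids']):
    actual = df['row_ids'].iloc[i]
    if list(actual) != list(expected):
        print('row', i, 'differs:', list(actual), '!=', list(expected))
{code}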


> pandas field values are messed up across rows
> ---------------------------------------------
>
>                 Key: ARROW-1660
>                 URL: https://issues.apache.org/jira/browse/ARROW-1660
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.7.1
>         Environment: 4.4.0-72-generic #93-Ubuntu SMP x86_64, python3
>            Reporter: MIkhail Osckin


