Hans Pirnay created ARROW-9136:
----------------------------------

             Summary: pandas index information gets lost when partition_cols 
are used
                 Key: ARROW-9136
                 URL: https://issues.apache.org/jira/browse/ARROW-9136
             Project: Apache Arrow
          Issue Type: Bug
          Components: Python
    Affects Versions: 0.17.1
            Reporter: Hans Pirnay


I originally reported this as a pandas GitHub issue: 
[https://github.com/pandas-dev/pandas/issues/34790]

 

To reproduce:
{code:python}
import pandas as pd

df = pd.DataFrame({'Data': [1, 2], 'partition': [1, 2]},
                  index=['2000-01-01', '2010-01-02'])

data_path_with_partitions = 'with_partitions.parquet'
df.to_parquet(data_path_with_partitions, partition_cols=['partition'])
df_read_with_partitions = pd.read_parquet(data_path_with_partitions)

# this fails because the index has been turned into an
# extra column __index_level_0__
pd.testing.assert_frame_equal(df, df_read_with_partitions)
{code}
As far as I can tell, the issue is in the pandas integration of 
{{pyarrow.parquet}}: the {{subtable}} generated at 
{{pyarrow/parquet.py:1725}} ends up with a 
{{schema.metadata[b'pandas']}} entry that no longer contains the index 
column info passed in via {{subschema.metadata[b'pandas']}}. The 
metadata is overwritten at {{pyarrow/pandas_compat.py:595}}.
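For comparison, the {{b'pandas'}} metadata on the full (unpartitioned) table does record the index column; it is only the per-partition subtables whose regenerated metadata drops it. A minimal check of the intact metadata (a sketch using the public pyarrow API; the stored index name {{__index_level_0__}} is what pyarrow assigns to an unnamed index):
{code:python}
import json

import pandas as pd
import pyarrow as pa

df = pd.DataFrame({'Data': [1, 2], 'partition': [1, 2]},
                  index=['2000-01-01', '2010-01-02'])

# convert the whole frame: the pandas metadata still lists the index column
table = pa.Table.from_pandas(df)
meta = json.loads(table.schema.metadata[b'pandas'])
print(meta['index_columns'])  # ['__index_level_0__']
{code}
It is this {{index_columns}} entry that is missing from the metadata written for each partition directory.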

 

I tried working around this by creating a {{_common_schema}} file, but 
since the metadata of the individual datasets all carry (incorrect) 
{{b'pandas'}} keys, those take precedence.
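Until this is fixed, the index can be restored by hand after reading, since it survives as an ordinary column. A defensive sketch (the column name {{__index_level_0__}} is assumed; the check keeps the code harmless on versions where the round trip already works):
{code:python}
import os
import tempfile

import pandas as pd

df = pd.DataFrame({'Data': [1, 2], 'partition': [1, 2]},
                  index=['2000-01-01', '2010-01-02'])

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, 'with_partitions.parquet')
    df.to_parquet(path, partition_cols=['partition'])
    df_read = pd.read_parquet(path)

# on affected versions the index comes back as a plain column; restore it
if '__index_level_0__' in df_read.columns:
    df_read = df_read.set_index('__index_level_0__').rename_axis(None)

# partitioned reads may reorder rows, so sort by the restored index
df_read = df_read.sort_index()
{code}
This only recovers the index values, not the full frame equality: the partition column still comes back with a different dtype and column position.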



--
This message was sent by Atlassian Jira
(v8.3.4#803005)