[
https://issues.apache.org/jira/browse/ARROW-4492?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16796744#comment-16796744
]
Wes McKinney commented on ARROW-4492:
-------------------------------------
To do anything about this, we must first address ARROW-4872:
{code}
In [3]: df = dd.read_parquet('/home/wesm/Downloads/slug.pq',
   ...:                      categories=['slug'], engine='pyarrow').compute()
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-3-f941aa5262fc> in <module>
----> 1 df = dd.read_parquet('/home/wesm/Downloads/slug.pq',
      2                      categories=['slug'], engine='pyarrow').compute()

~/miniconda/envs/arrow-3.7/lib/python3.7/site-packages/dask/dataframe/io/parquet.py in read_parquet(path, columns, filters, categories, index, storage_options, engine, infer_divisions)
   1153
   1154     return read(fs, fs_token, paths, columns=columns, filters=filters,
-> 1155                 categories=categories, index=index, infer_divisions=infer_divisions)
   1156
   1157

~/miniconda/envs/arrow-3.7/lib/python3.7/site-packages/dask/dataframe/io/parquet.py in _read_pyarrow(fs, fs_token, paths, columns, filters, categories, index, infer_divisions)
    703     _open = lambda fn: pq.ParquetFile(fs.open(fn, mode='rb'))
    704     for piece in dataset.pieces:
--> 705         pf = piece.get_metadata(_open)
    706         # non_empty_pieces.append(piece)
    707         if pf.num_row_groups > 0:

TypeError: get_metadata() takes 1 positional argument but 2 were given
{code}
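For reference, the categorical read can be exercised directly through pyarrow, bypassing dask, to check whether the failure lies in pyarrow itself. This is a minimal sketch: the attached slug.pq is not available here, so a similar single-column file is synthesized in memory; only the standard {{pyarrow.parquet}} API and the {{Table.to_pandas(categories=...)}} parameter are assumed.
{code}
import io

import pandas as pd
import pyarrow as pa
import pyarrow.parquet as pq

# Synthesize a small Parquet file with a string column, standing in
# for the attached slug.pq (hypothetical data for illustration).
df = pd.DataFrame({'slug': ['a', 'b', 'a', 'c']})
buf = io.BytesIO()
pq.write_table(pa.Table.from_pandas(df), buf)
buf.seek(0)

# Read it back and request pandas category dtype for the column.
table = pq.read_table(buf)
out = table.to_pandas(categories=['slug'])
print(out['slug'].dtype)  # category
{code}
If this direct path succeeds while the dask path fails, the {{get_metadata()}} signature mismatch above (ARROW-4872) is the immediate blocker rather than the categorical conversion itself.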
> [Python] Failure reading Parquet column as pandas Categorical in 0.12
> ---------------------------------------------------------------------
>
> Key: ARROW-4492
> URL: https://issues.apache.org/jira/browse/ARROW-4492
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.12.0
> Reporter: George Sakkis
> Priority: Major
> Labels: Parquet
> Fix For: 0.13.0
>
> Attachments: slug.pq
>
>
> On pyarrow 0.12.0 some (but not all) columns cannot be read as category
> dtype. Attached is an extracted failing sample.
> {noformat}
> import dask.dataframe as dd
> df = dd.read_parquet('slug.pq', categories=['slug'],
>                      engine='pyarrow').compute()
> print(len(df['slug'].dtype.categories))
> {noformat}
> This works on pyarrow 0.11.1 (and fastparquet 0.2.1).
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)