[
https://issues.apache.org/jira/browse/ARROW-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16338057#comment-16338057
]
Robert Dailey commented on ARROW-1938:
--------------------------------------
Let me gather the data I was using for this. Here are the steps I took:
* Read dataset pieces
* Concat resulting DataFrames together
* Convert all object columns to category
* Write concatenated DataFrame to parquet set
I tried converting the columns back to strings, but I still hit the error. To
work around the issue, I took the following steps:
* Write concatenated DataFrame to csv
* Load the csv file into pandas
* Write DataFrame to parquet set
> [Python] Error writing to partitioned Parquet dataset
> -----------------------------------------------------
>
> Key: ARROW-1938
> URL: https://issues.apache.org/jira/browse/ARROW-1938
> Project: Apache Arrow
> Issue Type: Bug
> Components: Python
> Affects Versions: 0.8.0
> Environment: Linux (Ubuntu 16.04)
> Reporter: Robert Dailey
> Assignee: Phillip Cloud
> Priority: Major
> Fix For: 0.9.0
>
> Attachments: pyarrow_dataset_error.png
>
>
> I receive the following error after upgrading to pyarrow 0.8.0 when writing
> to a dataset:
> * ArrowIOError: Column 3 had 187374 while previous column had 10000
> The command was:
> write_table_values = {'row_group_size': 10000}
> pq.write_to_dataset(pa.Table.from_pandas(df, preserve_index=True),
>                     '/logs/parsed/test',
>                     partition_cols=['Product', 'year', 'month', 'day', 'hour'],
>                     **write_table_values)
> I've also tried write_table_values = {'chunk_size': 10000} and received the
> same error.
> This same command works in version 0.7.1. I am trying to troubleshoot the
> problem but wanted to submit a ticket.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)