[ https://issues.apache.org/jira/browse/ARROW-1938?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16297330#comment-16297330 ]

Robert Dailey commented on ARROW-1938:
--------------------------------------

Okay.  On further review, the get/pop change I made had no effect on functionality. 
 What did matter was writing the df to a single parquet file first.  Workaround 
steps:
# write the df to a single parquet file (pq.write_table(table, 
'/logs/parsed/single_file.parquet'))
# read the parquet file back into a table (table = 
pq.read_pandas('/logs/parsed/single_file.parquet'))
# write the table to a partitioned dataset (pq.write_to_dataset(table, 
'/logs/parsed/test_from_parquet', partition_cols=['Product', 'year', 'month', 
'day', 'hour'], **write_table_values))

> Error writing to partitioned dataset
> ------------------------------------
>
>                 Key: ARROW-1938
>                 URL: https://issues.apache.org/jira/browse/ARROW-1938
>             Project: Apache Arrow
>          Issue Type: Bug
>          Components: Python
>    Affects Versions: 0.8.0
>         Environment: Linux (Ubuntu 16.04)
>            Reporter: Robert Dailey
>         Attachments: pyarrow_dataset_error.png
>
>
> I receive the following error after upgrading to pyarrow 0.8.0 when writing 
> to a dataset:
> * ArrowIOError: Column 3 had 187374 while previous column had 10000
> The command was:
> write_table_values = {'row_group_size': 10000}
> pq.write_to_dataset(pa.Table.from_pandas(df, preserve_index=True), 
> '/logs/parsed/test', partition_cols=['Product', 'year', 'month', 'day', 
> 'hour'], **write_table_values)
> I've also tried write_table_values = {'chunk_size': 10000} and received the 
> same error.
> This same command works in version 0.7.1.  I am trying to troubleshoot the 
> problem but wanted to submit a ticket.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
