[ https://issues.apache.org/jira/browse/ARROW-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17259609#comment-17259609 ]

Joris Van den Bossche commented on ARROW-10406:
-----------------------------------------------

I agree with [~npr] that this is a typical use case that should work out of the 
box. If the IPC file format does not support different dictionaries, I would 
expect that we unify the dictionaries before writing. 

(that's also what we do, e.g., on conversion to pandas in {{to_pandas()}}, and 
probably in R as well when converting to a DataFrame with factors?)



> [Format] Support dictionary replacement in the IPC file format
> --------------------------------------------------------------
>
>                 Key: ARROW-10406
>                 URL: https://issues.apache.org/jira/browse/ARROW-10406
>             Project: Apache Arrow
>          Issue Type: Wish
>          Components: Format
>            Reporter: Neal Richardson
>            Priority: Major
>
> I read a big (taxi) csv file and specified that I wanted to dictionary-encode 
> some columns. The resulting Table has ChunkedArrays with 1604 chunks. When I 
> go to write this Table to the IPC file format (write_feather), I get an 
> error: 
> {code}
>   Invalid: Dictionary replacement detected when writing IPC file format. 
> Arrow IPC files only support a single dictionary for a given field accross 
> all batches.
> {code}
> I can write this to Parquet and read it back in, and the roundtrip of the 
> data is correct. We should be able to do this in IPC too.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
