[
https://issues.apache.org/jira/browse/ARROW-10406?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17259231#comment-17259231
]
Neal Richardson edited comment on ARROW-10406 at 1/5/21, 9:50 PM:
------------------------------------------------------------------
I'm not sure I agree with reclassifying this as a Format issue, though I
guess that's one way to solve it. I contend that, if the format doesn't
support this, then the C++ CSV reader has a bug/misfeature: it shouldn't
produce data that we can't write back out.
Here's a trivial way to reproduce it from R, using a tiny CSV and setting a
small block size in the read options so that each parsed block builds its own
dictionary chunk:
{code}
> library(arrow)
> df <- data.frame(chr=c(rep("a", 3), rep("b", 3)), int=1:6)
> write.csv(df, "test.csv", row.names=FALSE)
> system("cat test.csv")
"chr","int"
"a",1
"a",2
"a",3
"b",4
"b",5
"b",6
> tab <- read_csv_arrow("test.csv",
+   read_options = CsvReadOptions$create(block_size = 16L),
+   as_data_frame = FALSE,
+   schema = schema(chr = dictionary(), int = int32()))
> tab
Table
6 rows x 2 columns
$chr <dictionary<values=string, indices=int32>>
$int <int32>
> tab$chr
ChunkedArray
[
-- dictionary:
[]
-- indices:
[],
-- dictionary:
[
"a"
]
-- indices:
[
0,
0,
0
],
-- dictionary:
[
"b"
]
-- indices:
[
0,
0,
0
]
]
> write_feather(tab, tempfile())
Error: Invalid: Dictionary replacement detected when writing IPC file format.
Arrow IPC files only support a single dictionary for a given field across all
batches.
In /Users/enpiar/Documents/ursa/arrow/cpp/src/arrow/ipc/writer.cc, line 983,
code: WriteDictionaries(batch)
In /Users/enpiar/Documents/ursa/arrow/cpp/src/arrow/ipc/writer.cc, line 939,
code: WriteRecordBatch(*batch)
In /Users/enpiar/Documents/ursa/arrow/cpp/src/arrow/ipc/feather.cc, line 804,
code: writer->WriteTable(table, properties.chunksize)
{code}
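In the meantime, here's a sketch of two possible workarounds, continuing the
session above (assumptions on my part: the R bindings' ChunkedArray$cast(),
Table$create(), and write_ipc_stream() behave as in recent releases, and the
IPC stream writer accepts dictionary replacement even though the file writer
does not):
{code}
# Workaround sketch 1: decode the dictionary column to plain strings so the
# writer never has to emit a dictionary. Loses the encoding, keeps the file
# format.
tab2 <- Table$create(
  chr = tab$chr$cast(string()),
  int = tab$int
)
write_feather(tab2, tempfile())

# Workaround sketch 2: write the IPC *stream* format instead; the stream
# format permits a field's dictionary to change between batches.
write_ipc_stream(tab, tempfile())
{code}
Neither is a real fix, of course; the point stands that the file format (or
the CSV reader) should handle this case.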
> [Format] Support dictionary replacement in the IPC file format
> --------------------------------------------------------------
>
> Key: ARROW-10406
> URL: https://issues.apache.org/jira/browse/ARROW-10406
> Project: Apache Arrow
> Issue Type: Wish
> Components: Format
> Reporter: Neal Richardson
> Priority: Major
>
> I read a big (taxi) CSV file and specified that I wanted to dictionary-encode
> some columns. The resulting Table has ChunkedArrays with 1604 chunks. When I
> go to write this Table to the IPC file format (write_feather), I get an
> error:
> {code}
> Invalid: Dictionary replacement detected when writing IPC file format.
> Arrow IPC files only support a single dictionary for a given field across
> all batches.
> {code}
> I can write this to Parquet and read it back in, and the roundtrip of the
> data is correct. We should be able to do this in IPC too.