[
https://issues.apache.org/jira/browse/ARROW-10426?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17223354#comment-17223354
]
Neal Richardson commented on ARROW-10426:
-----------------------------------------
Doing some digging in the source, the error message comes from
https://github.com/apache/arrow/blob/master/cpp/src/parquet/column_writer.cc#L1504-L1508,
and specifically it sounds like it's being thrown from
https://github.com/apache/arrow/blob/master/cpp/src/parquet/column_writer.cc#L1797.
Basically the C++ Parquet writer doesn't support Large* types, only the
regular string/binary.
Is this just not yet implemented, or is this something that Parquet does not
support? [[email protected]][~emkornfield][~wesm]
From the R side, I suspect that ARROW-9293 will make this go away. If the
data.frame -> Table conversion chunked the input, then this large_string
column would probably end up as regular string type and thus wouldn't fail to
write to Parquet.
> [C++] Arrow type large_string cannot be written to Parquet type column
> descriptor
> ---------------------------------------------------------------------------------
>
> Key: ARROW-10426
> URL: https://issues.apache.org/jira/browse/ARROW-10426
> Project: Apache Arrow
> Issue Type: Bug
> Components: C++, R
> Affects Versions: 2.0.0
> Environment: R 4.0.3 on OSX 10.15.7
> Reporter: Gabriel Bassett
> Priority: Minor
> Labels: parquet
>
> When trying to write a dataset in parquet format, arrow errors with the
> message: "Arrow type large_string cannot be written to Parquet type column
> descriptor"
> {code:r}
> arrow::write_dataset(
>   dataframe,
>   "/directory/",
>   format = "parquet",
>   partitioning = c("col1", "col2")
> )
> {code}
> The dataframe in question is very large with one column containing the text
> of message board posts encoded in HTML.
--
This message was sent by Atlassian Jira
(v8.3.4#803005)