[
https://issues.apache.org/jira/browse/HUDI-722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17069995#comment-17069995
]
Alexander Filipchik commented on HUDI-722:
------------------------------------------
Sure. The context: we have a very schema-heavy stream, meaning the schema has
multiple levels of nesting, arrays of structs which themselves contain arrays of
structs. I saw 2500 columns at the Parquet level.
We caught a bunch of issues with Avro conversions on that stream. It works fine
on 0.5, but when I tried to upgrade to 0.6 I got this error. The table type is
MOR and the operation is INSERT.
If you want, we can do a f2f session (Zoom, Hangouts), as it will be easier to
explain or even debug.
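The failure described in the quoted issue below boils down to an indexing
mismatch: the repetition-level array `r` has the same length as `currentLevel`,
so `r[currentLevel]` is one past the end. A minimal sketch of that off-by-one
(all names here are hypothetical stand-ins, not actual parquet-mr code):

```java
public class LevelMismatchDemo {
    // Mirrors the failing read r[currentLevel] inside the
    // getColumnWriter().write(value, r[currentLevel], ...) call:
    // returns true if indexing at currentLevel throws.
    static boolean levelOutOfRange(int maxDepth, int currentLevel) {
        int[] r = new int[maxDepth]; // repetition levels, one slot per nesting level
        try {
            int unused = r[currentLevel];
            return false;
        } catch (ArrayIndexOutOfBoundsException e) {
            return true;
        }
    }

    public static void main(String[] args) {
        // The report says r's size equals currentLevel, so the read must fail:
        System.out.println(levelOutOfRange(3, 3)); // prints "true"
        System.out.println(levelOutOfRange(3, 2)); // prints "false"
    }
}
```

This suggests the writer's notion of nesting depth (derived from one view of the
schema) disagrees with the depth the record was traversed at, which would fit a
deeply nested schema where two conversion paths disagree by one level.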
> IndexOutOfBoundsException in MessageColumnIORecordConsumer.addBinary when
> writing parquet
> -----------------------------------------------------------------------------------------
>
> Key: HUDI-722
> URL: https://issues.apache.org/jira/browse/HUDI-722
> Project: Apache Hudi (incubating)
> Issue Type: Bug
> Components: Writer Core
> Reporter: Alexander Filipchik
> Assignee: lamber-ken
> Priority: Major
> Fix For: 0.6.0
>
>
> Some writes fail with java.lang.IndexOutOfBoundsException : Invalid array
> range: X to X inside the MessageColumnIORecordConsumer.addBinary call.
> Specifically: getColumnWriter().write(value, r[currentLevel],
> currentColumnIO.getDefinitionLevel());
> fails because the size of r is the same as the current level. What could be
> causing it?
>
> It gets executed via ParquetWriter.write(IndexedRecord). Library version:
> 1.10.1. The Avro record is a very complex object (~2.5k columns, highly
> nested, arrays of unions present).
> But what is surprising is that it fails to write a top-level field:
> PrimitiveColumnIO _hoodie_commit_time r:0 d:1 [_hoodie_commit_time], which is
> the first top-level field in the Avro record: {"_hoodie_commit_time": "20200317215711",
> "_hoodie_commit_seqno": "20200317215711_0_650",
--
This message was sent by Atlassian Jira
(v8.3.4#803005)