Alexander Filipchik created HUDI-722:
----------------------------------------
Summary: IndexOutOfBoundsException in
MessageColumnIORecordConsumer.addBinary when writing parquet
Key: HUDI-722
URL: https://issues.apache.org/jira/browse/HUDI-722
Project: Apache Hudi (incubating)
Issue Type: Bug
Components: Writer Core
Reporter: Alexander Filipchik
Fix For: 0.6.0
Some writes fail with java.lang.IndexOutOfBoundsException: Invalid array
range: X to X inside the MessageColumnIORecordConsumer.addBinary call.
Specifically, this line fails:

getColumnWriter().write(value, r[currentLevel], currentColumnIO.getDefinitionLevel());

because the size of r equals currentLevel. What could be causing it?
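The failing access pattern can be sketched with plain arrays. This is a minimal illustration of the off-by-one, not Parquet's actual internals; the variable names mirror the ones in the stack trace but the setup is hypothetical:

```java
// Sketch of the reported failure: the repetition-level array r is indexed
// with currentLevel, so when r.length == currentLevel the access r[currentLevel]
// throws an IndexOutOfBoundsException ("Invalid array range: X to X" style).
public class RepetitionLevelSketch {
    public static void main(String[] args) {
        int currentLevel = 2;            // hypothetical nesting depth
        int[] r = new int[currentLevel]; // r has exactly currentLevel entries
        try {
            // same shape as r[currentLevel] inside addBinary
            int repetitionLevel = r[currentLevel];
            System.out.println(repetitionLevel);
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Failed like addBinary: " + e.getMessage());
        }
    }
}
```

If the writer's notion of the current nesting level ever reaches the length of the repetition-level array, any field written at that level will fail this way, which would match the top-level-field symptom described below.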
It gets executed via ParquetWriter.write(IndexedRecord). Library version:
1.10.1. The Avro record is very complex (~2.5k columns, highly nested).
What is surprising is that it fails to write a top-level field:

PrimitiveColumnIO _hoodie_commit_time r:0 d:1 [_hoodie_commit_time]

which is the first top-level field in the Avro record: {"_hoodie_commit_time": "20200317215711",
"_hoodie_commit_seqno": "20200317215711_0_650",
--
This message was sent by Atlassian Jira
(v8.3.4#803005)