Aitozi opened a new pull request, #7013:
URL: https://github.com/apache/paimon/pull/7013

   … with avro
   
   <!-- Please specify the module before the PR name: [core] ... or [flink] ... -->
   
   ### Purpose
   
   Fix the `ArrayIndexOutOfBoundsException` thrown from `DataFileRecordReader.readBatch` when reading Avro data files:
   
   ```
   Job aborted due to stage failure: Task 0 in stage 32.0 failed 1 times, most recent failure: Lost task 0.0 in stage 32.0 (TID 38) (100.81.154.93 executor driver): java.lang.ArrayIndexOutOfBoundsException: Index 3 out of bounds for length 3
        at org.apache.paimon.io.DataFileRecordReader.readBatch(DataFileRecordReader.java:151)
        at org.apache.paimon.io.DataFileRecordReader.readBatch(DataFileRecordReader.java:45)
        at org.apache.paimon.mergetree.compact.ConcatRecordReader.readBatch(ConcatRecordReader.java:66)
        at org.apache.paimon.spark.PaimonRecordReaderIterator.readBatch(PaimonRecordReaderIterator.scala:95)
        at org.apache.paimon.spark.PaimonRecordReaderIterator.advanceIfNeeded(PaimonRecordReaderIterator.scala:140)
        at org.apache.paimon.spark.PaimonRecordReaderIterator.hasNext(PaimonRecordReaderIterator.scala:66)
        at org.apache.paimon.spark.PaimonPartitionReader.advanceIfNeeded(PaimonPartitionReader.scala:85)
        at org.apache.paimon.spark.PaimonPartitionReader.next(PaimonPartitionReader.scala:66)
        at org.apache.spark.sql.execution.datasources.v2.PartitionIterator.hasNext(DataSourceRDD.scala:120)
   ```
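   For context, a minimal standalone sketch of how this class of failure arises: a reader indexes into a batch by column position, and when the projection expects one more column than the batch actually carries, the read of index 3 against 3 columns throws exactly the exception above. All class and method names here are hypothetical illustrations, not Paimon's actual implementation or the actual fix.
   
   ```java
   // Illustrative only: a column-count mismatch between the projected read
   // schema (4 columns) and the columns physically present in a batch (3)
   // surfaces as ArrayIndexOutOfBoundsException on the missing column.
   public class ColumnCountMismatchDemo {
       // A batch as read from the file: one array per column (3 columns).
       static Object[][] batchColumns = {
           {1, 2}, {"a", "b"}, {true, false}
       };
   
       // Positional field access, as a columnar reader would do.
       static Object readField(int columnIndex, int rowIndex) {
           return batchColumns[columnIndex][rowIndex];
       }
   
       public static void main(String[] args) {
           boolean thrown = false;
           try {
               // Hypothetical projection asks for column 3 of a 3-column batch,
               // mirroring "Index 3 out of bounds for length 3".
               readField(3, 0);
           } catch (ArrayIndexOutOfBoundsException e) {
               thrown = true;
           }
           if (!thrown) {
               throw new AssertionError("expected ArrayIndexOutOfBoundsException");
           }
           System.out.println("reproduced: index 3 out of bounds for length 3");
       }
   }
   ```
   
   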
   <!-- Linking this pull request to the issue -->
   Linked issue: close #xxx
   
   <!-- What is the purpose of the change -->
   
   ### Tests
   
   <!-- List UT and IT cases to verify this change -->
   
   ### API and Format
   
   <!-- Does this change affect API or storage format -->
   
   ### Documentation
   
   <!-- Does this change introduce a new feature -->
   

