JoshuaZhuCN opened a new issue #3647:
URL: https://github.com/apache/hudi/issues/3647


   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)?
   
   - Join the mailing list to engage in conversations and get faster support at [email protected].
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   When using the Spark datasource for an upsert operation, the Parquet file cannot be read, even if Spark's `spark.sql.parquet.writeLegacyFormat` parameter is set to `true`.
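   
   For reference, a minimal sketch of the Spark option referred to above, assuming it is set on the session before writing (`spark` is an active `SparkSession`):
   
   ```scala
   // The Spark option referred to above; per the report, setting it to "true"
   // before writing does not avoid the read error during the upsert.
   spark.conf.set("spark.sql.parquet.writeLegacyFormat", "true")
   ```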
   
   The error message is as follows:
   
    diagnostics: User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 8 in stage 686.0 failed 4 times, most recent failure: Lost task 8.3 in stage 686.0 (TID 30676, calc3.leqee.com, executor 3): org.apache.hudi.exception.HoodieUpsertException: Error upserting bucketType UPDATE for partition :133
        at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpsertPartition(BaseSparkCommitActionExecutor.java:305)
        at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.lambda$execute$ecf5068c$1(BaseSparkCommitActionExecutor.java:156)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$mapPartitionsWithIndex$1.apply(JavaRDDLike.scala:102)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:875)
        at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$25.apply(RDD.scala:875)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
        at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:359)
        at org.apache.spark.rdd.RDD$$anonfun$7.apply(RDD.scala:357)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1182)
        at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1156)
        at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:1091)
        at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1156)
        at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:882)
        at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:357)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:308)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:346)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:310)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
        at org.apache.spark.scheduler.Task.run(Task.scala:123)
        at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
        at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
    Caused by: org.apache.hudi.exception.HoodieException: org.apache.hudi.exception.HoodieException: java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieException: operation has failed
        at org.apache.hudi.table.action.commit.SparkMergeHelper.runMerge(SparkMergeHelper.java:102)
        at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpdateInternal(BaseSparkCommitActionExecutor.java:334)
        at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpdate(BaseSparkCommitActionExecutor.java:325)
        at org.apache.hudi.table.action.deltacommit.AbstractSparkDeltaCommitActionExecutor.handleUpdate(AbstractSparkDeltaCommitActionExecutor.java:80)
        at org.apache.hudi.table.action.commit.BaseSparkCommitActionExecutor.handleUpsertPartition(BaseSparkCommitActionExecutor.java:298)
        ... 30 more
    Caused by: org.apache.hudi.exception.HoodieException: java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieException: operation has failed
        at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.execute(BoundedInMemoryExecutor.java:147)
        at org.apache.hudi.table.action.commit.SparkMergeHelper.runMerge(SparkMergeHelper.java:100)
        ... 34 more
    Caused by: java.util.concurrent.ExecutionException: org.apache.hudi.exception.HoodieException: operation has failed
        at java.util.concurrent.FutureTask.report(FutureTask.java:122)
        at java.util.concurrent.FutureTask.get(FutureTask.java:192)
        at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.execute(BoundedInMemoryExecutor.java:141)
        ... 35 more
    Caused by: org.apache.hudi.exception.HoodieException: operation has failed
        at org.apache.hudi.common.util.queue.BoundedInMemoryQueue.throwExceptionIfFailed(BoundedInMemoryQueue.java:247)
        at org.apache.hudi.common.util.queue.BoundedInMemoryQueue.readNextRecord(BoundedInMemoryQueue.java:226)
        at org.apache.hudi.common.util.queue.BoundedInMemoryQueue.access$100(BoundedInMemoryQueue.java:52)
        at org.apache.hudi.common.util.queue.BoundedInMemoryQueue$QueueIterator.hasNext(BoundedInMemoryQueue.java:277)
        at org.apache.hudi.common.util.queue.BoundedInMemoryQueueConsumer.consume(BoundedInMemoryQueueConsumer.java:36)
        at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.lambda$null$2(BoundedInMemoryExecutor.java:121)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        ... 3 more
    Caused by: org.apache.parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file hdfs://xxxx/c8b3370f-99d3-4ac6-a374-f79b69ffd3ca-6_26-268507-0_20210903185008.parquet
        at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:251)
        at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:132)
        at org.apache.parquet.hadoop.ParquetReader.read(ParquetReader.java:136)
        at org.apache.hudi.common.util.ParquetReaderIterator.hasNext(ParquetReaderIterator.java:49)
        at org.apache.hudi.common.util.queue.IteratorBasedQueueProducer.produce(IteratorBasedQueueProducer.java:45)
        at org.apache.hudi.common.util.queue.BoundedInMemoryExecutor.lambda$null$0(BoundedInMemoryExecutor.java:92)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        ... 4 more
    Caused by: java.lang.UnsupportedOperationException: org.apache.parquet.column.values.dictionary.PlainValuesDictionary$PlainLongDictionary
        at org.apache.parquet.column.Dictionary.decodeToBinary(Dictionary.java:41)
        at org.apache.parquet.avro.AvroConverters$BinaryConverter.setDictionary(AvroConverters.java:75)
        at org.apache.parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:341)
        at org.apache.parquet.column.impl.ColumnReadStoreImpl.newMemColumnReader(ColumnReadStoreImpl.java:80)
        at org.apache.parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:75)
        at org.apache.parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:271)
        at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:147)
        at org.apache.parquet.io.MessageColumnIO$1.visit(MessageColumnIO.java:109)
        at org.apache.parquet.filter2.compat.FilterCompat$NoOpFilter.accept(FilterCompat.java:165)
        at org.apache.parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:109)
        at org.apache.parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:137)
        at org.apache.parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:222)
        ... 11 more
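   
   The last `Caused by` above suggests the reader is asking a long-typed dictionary (`PlainLongDictionary`) to decode binary values, i.e. the decimal column in the bulk-inserted file may have been written with a physical Parquet type that the Avro-based merge reader does not expect. One way to check is to dump the file's physical schema; a minimal sketch, with the file path as a placeholder:
   
   ```scala
   import org.apache.hadoop.conf.Configuration
   import org.apache.hadoop.fs.Path
   import org.apache.parquet.hadoop.ParquetFileReader
   import org.apache.parquet.hadoop.util.HadoopInputFile
   
   // Print the physical schema of one data file to see how the decimal
   // column was encoded (e.g. INT64 vs. FIXED_LEN_BYTE_ARRAY).
   val file   = HadoopInputFile.fromPath(new Path("hdfs://xxxx/<data-file>.parquet"), new Configuration())
   val reader = ParquetFileReader.open(file)
   try println(reader.getFileMetaData.getSchema) finally reader.close()
   ```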
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. Use bulk insert to generate a Hudi table with a field of type decimal.
   2. Upsert to the same Hudi table; this operation fails with the error above (see the sketch after this list).
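   
   A minimal sketch of the two steps (the path, table name, and column names are hypothetical; `spark` is an active `SparkSession`):
   
   ```scala
   import org.apache.spark.sql.SaveMode
   import org.apache.spark.sql.functions.lit
   
   val basePath = "hdfs://xxxx/hudi_decimal_test"  // hypothetical path
   
   // A table with a decimal-typed field, as in step 1.
   val df = spark.range(0, 100).toDF("id")
     .withColumn("amount", lit("123.45").cast("decimal(10,2)"))
     .withColumn("ts", lit(1L))
   
   def write(operation: String, mode: SaveMode): Unit =
     df.write.format("hudi")
       .option("hoodie.table.name", "hudi_decimal_test")
       .option("hoodie.datasource.write.recordkey.field", "id")
       .option("hoodie.datasource.write.precombine.field", "ts")
       .option("hoodie.datasource.write.operation", operation)
       .mode(mode)
       .save(basePath)
   
   write("bulk_insert", SaveMode.Overwrite) // step 1: succeeds
   write("upsert", SaveMode.Append)         // step 2: fails reading the bulk-inserted file
   ```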
   
   **Expected behavior**
   
   The upsert should succeed; the Parquet files produced by the earlier bulk insert should be readable during the merge.
   
   **Environment Description**
   
   * Hudi version : 0.9.0
   
   * Spark version : 2.4.7
   
   * Hive version : ~
   
   * Hadoop version : 3.1.4
   
   * Storage (HDFS/S3/GCS..) : HDFS
   
   * Running on Docker? (yes/no) : no
   
   
   **Additional context**
   
   Add any other context about the problem here.
   
   **Stacktrace**
   
   See the full stacktrace under **Describe the problem you faced** above.
   
   

