ligou525 commented on issue #9596:
URL: https://github.com/apache/hudi/issues/9596#issuecomment-1880439633

   Hi @raghunittala,
   did you find a solution for this problem? I hit the same issue when calling the insertOverwrite API:
   Caused by: org.apache.hudi.exception.HoodieException: Error getting all file groups in pending clustering
           at org.apache.hudi.common.util.ClusteringUtils.getAllFileGroupsInPendingClusteringPlans(ClusteringUtils.java:135)
           at org.apache.hudi.common.table.view.AbstractTableFileSystemView.init(AbstractTableFileSystemView.java:113)
           at org.apache.hudi.common.table.view.HoodieTableFileSystemView.init(HoodieTableFileSystemView.java:108)
           at org.apache.hudi.common.table.view.HoodieTableFileSystemView.<init>(HoodieTableFileSystemView.java:102)
           at org.apache.hudi.common.table.view.HoodieTableFileSystemView.<init>(HoodieTableFileSystemView.java:93)
           at org.apache.hudi.metadata.HoodieMetadataFileSystemView.<init>(HoodieMetadataFileSystemView.java:44)
           at org.apache.hudi.common.table.view.FileSystemViewManager.createInMemoryFileSystemView(FileSystemViewManager.java:166)
           at org.apache.hudi.common.table.view.FileSystemViewManager.lambda$createViewManager$5fcdabfe$1(FileSystemViewManager.java:259)
           at org.apache.hudi.common.table.view.FileSystemViewManager.lambda$getFileSystemView$1(FileSystemViewManager.java:111)
           at java.util.concurrent.ConcurrentHashMap.computeIfAbsent(ConcurrentHashMap.java:1660)
           at org.apache.hudi.common.table.view.FileSystemViewManager.getFileSystemView(FileSystemViewManager.java:110)
           at org.apache.hudi.table.HoodieTable.getSliceView(HoodieTable.java:303)
           at org.apache.hudi.table.action.commit.JavaInsertOverwriteCommitActionExecutor.getAllExistingFileIds(JavaInsertOverwriteCommitActionExecutor.java:77)
           at org.apache.hudi.table.action.commit.JavaInsertOverwriteCommitActionExecutor.lambda$getPartitionToReplacedFileIds$823fa0f9$1(JavaInsertOverwriteCommitActionExecutor.java:71)
           at org.apache.hudi.common.function.FunctionWrapper.lambda$throwingMapToPairWrapper$3(FunctionWrapper.java:68)
           ... 28 common frames omitted
   Caused by: java.lang.ClassCastException: org.apache.avro.generic.GenericData$Record cannot be cast to org.apache.avro.specific.SpecificRecordBase
           at org.apache.hudi.common.table.timeline.TimelineMetadataUtils.deserializeAvroMetadata(TimelineMetadataUtils.java:206)
           at org.apache.hudi.common.table.timeline.TimelineMetadataUtils.deserializeRequestedReplaceMetadata(TimelineMetadataUtils.java:186)
           at org.apache.hudi.common.util.ClusteringUtils.getRequestedReplaceMetadata(ClusteringUtils.java:95)
           at org.apache.hudi.common.util.ClusteringUtils.getClusteringPlan(ClusteringUtils.java:106)
           at org.apache.hudi.common.util.ClusteringUtils.lambda$getAllPendingClusteringPlans$0(ClusteringUtils.java:69)
           at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
           at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
           at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
           at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
           at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
           at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
           at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:499)
           at org.apache.hudi.common.util.ClusteringUtils.getAllFileGroupsInPendingClusteringPlans(ClusteringUtils.java:129)
           ... 42 common frames omitted
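
   For context, this is roughly the shape of my write path: a minimal sketch using the Hudi Java write client (which is what JavaInsertOverwriteCommitActionExecutor in the trace points at). The base path, table name, and the buildRecords helper are placeholders rather than my actual job, and exact constructor/config details can vary by Hudi version:

   import java.util.Collections;
   import java.util.List;
   import org.apache.hadoop.conf.Configuration;
   import org.apache.hudi.client.HoodieJavaWriteClient;
   import org.apache.hudi.client.common.HoodieJavaEngineContext;
   import org.apache.hudi.common.model.HoodieAvroPayload;
   import org.apache.hudi.common.model.HoodieRecord;
   import org.apache.hudi.common.table.timeline.HoodieTimeline;
   import org.apache.hudi.config.HoodieWriteConfig;

   public class InsertOverwriteRepro {
     public static void main(String[] args) {
       // Placeholder path and table name; substitute your own (schema and other
       // write configs omitted for brevity).
       HoodieWriteConfig cfg = HoodieWriteConfig.newBuilder()
           .withPath("file:///tmp/hudi_table")
           .forTable("hudi_table")
           .build();

       try (HoodieJavaWriteClient<HoodieAvroPayload> client =
                new HoodieJavaWriteClient<>(new HoodieJavaEngineContext(new Configuration()), cfg)) {
         // insertOverwrite writes a replacecommit, so start the commit with that action type.
         String instantTime = client.startCommit(HoodieTimeline.REPLACE_COMMIT_ACTION);
         List<HoodieRecord<HoodieAvroPayload>> records = buildRecords();
         // Fails with the trace above whenever the table has a pending clustering
         // plan (replacecommit.requested) on the timeline.
         client.insertOverwrite(records, instantTime);
       }
     }

     // Hypothetical helper; record construction elided. Any batch targeting the
     // table triggers the file-system-view init where the cast error surfaces.
     private static List<HoodieRecord<HoodieAvroPayload>> buildRecords() {
       return Collections.emptyList();
     }
   }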

