MorningGlow commented on issue #9672:
URL: https://github.com/apache/hudi/issues/9672#issuecomment-1713719110
@ad1happy2go I have now switched to version 0.13.1, and I hit an error during the first merge. Could there be a problem with my configuration?
```
org.apache.flink.util.FlinkException: Global failure triggered by OperatorCoordinator for 'stream_write: WF_UNITINFOTRAVEL_HUDI' (operator 3be69e0bbe7ef4739ffdb41eadc976f5).
    at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder$LazyInitializedCoordinatorContext.failJob(OperatorCoordinatorHolder.java:556)
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.lambda$start$0(StreamWriteOperatorCoordinator.java:191)
    at org.apache.hudi.sink.utils.NonThrownExecutor.handleException(NonThrownExecutor.java:142)
    at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:133)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.hudi.exception.HoodieException: Executor executes action [commits the instant 20230911193845109] error
    ... 6 more
Caused by: org.apache.hudi.exception.HoodieException: Failed to update metadata
    at org.apache.hudi.client.HoodieFlinkTableServiceClient.writeTableMetadata(HoodieFlinkTableServiceClient.java:184)
    at org.apache.hudi.client.HoodieFlinkWriteClient.writeTableMetadata(HoodieFlinkWriteClient.java:279)
    at org.apache.hudi.client.BaseHoodieWriteClient.commit(BaseHoodieWriteClient.java:282)
    at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:233)
    at org.apache.hudi.client.HoodieFlinkWriteClient.commit(HoodieFlinkWriteClient.java:111)
    at org.apache.hudi.client.HoodieFlinkWriteClient.commit(HoodieFlinkWriteClient.java:74)
    at org.apache.hudi.client.BaseHoodieWriteClient.commit(BaseHoodieWriteClient.java:199)
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.doCommit(StreamWriteOperatorCoordinator.java:540)
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.commitInstant(StreamWriteOperatorCoordinator.java:516)
    at org.apache.hudi.sink.StreamWriteOperatorCoordinator.lambda$notifyCheckpointComplete$2(StreamWriteOperatorCoordinator.java:246)
    at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:130)
    ... 3 more
Caused by: org.apache.hudi.exception.HoodieUpsertException: Error upserting bucketType UPDATE for partition :files
    at org.apache.hudi.table.action.commit.BaseFlinkCommitActionExecutor.handleUpsertPartition(BaseFlinkCommitActionExecutor.java:203)
    at org.apache.hudi.table.action.commit.BaseFlinkCommitActionExecutor.execute(BaseFlinkCommitActionExecutor.java:107)
    at org.apache.hudi.table.action.commit.delta.FlinkUpsertPreppedDeltaCommitActionExecutor.execute(FlinkUpsertPreppedDeltaCommitActionExecutor.java:52)
    at org.apache.hudi.table.HoodieFlinkMergeOnReadTable.upsertPrepped(HoodieFlinkMergeOnReadTable.java:81)
    at org.apache.hudi.client.HoodieFlinkWriteClient.lambda$upsertPreppedRecords$4(HoodieFlinkWriteClient.java:167)
    at java.util.stream.ReferencePipeline$3$1.accept(ReferencePipeline.java:193)
    at java.util.HashMap$ValueSpliterator.forEachRemaining(HashMap.java:1628)
    at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:482)
    at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:472)
    at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:747)
    at java.util.stream.ReduceOps$ReduceTask.doLeaf(ReduceOps.java:721)
    at java.util.stream.AbstractTask.compute(AbstractTask.java:316)
    at java.util.concurrent.CountedCompleter.exec(CountedCompleter.java:731)
    at java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:289)
    at java.util.concurrent.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1056)
    at java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1692)
    at java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:172)
Caused by: java.lang.ExceptionInInitializerError
    at org.apache.hudi.common.table.log.block.HoodieHFileDataBlock.serializeRecords(HoodieHFileDataBlock.java:142)
    at org.apache.hudi.common.table.log.block.HoodieDataBlock.getContentBytes(HoodieDataBlock.java:115)
    at org.apache.hudi.common.table.log.HoodieLogFormatWriter.appendBlocks(HoodieLogFormatWriter.java:159)
    at org.apache.hudi.io.HoodieAppendHandle.appendDataAndDeleteBlocks(HoodieAppendHandle.java:464)
    at org.apache.hudi.io.HoodieAppendHandle.doAppend(HoodieAppendHandle.java:437)
    at org.apache.hudi.table.action.commit.delta.BaseFlinkDeltaCommitActionExecutor.handleUpdate(BaseFlinkDeltaCommitActionExecutor.java:54)
    at org.apache.hudi.table.action.commit.BaseFlinkCommitActionExecutor.handleUpsertPartition(BaseFlinkCommitActionExecutor.java:195)
    ... 16 more
Caused by: java.lang.RuntimeException: Could not create interface org.apache.hudi.org.apache.hadoop.hbase.regionserver.MetricsRegionServerSourceFactory Is the hadoop compatibility jar on the classpath?
    at org.apache.hudi.org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:75)
    at org.apache.hudi.org.apache.hadoop.hbase.io.MetricsIO.<init>(MetricsIO.java:32)
    at org.apache.hudi.org.apache.hadoop.hbase.io.hfile.HFile.<clinit>(HFile.java:176)
    ... 23 more
Caused by: java.util.NoSuchElementException
    at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:365)
    at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
    at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
    at org.apache.hudi.org.apache.hadoop.hbase.CompatibilitySingletonFactory.getInstance(CompatibilitySingletonFactory.java:61)
    ... 25 more
```
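For context on the innermost failure: the error surfaces while the HFile class used for metadata-table log blocks is being initialized, and HBase's `CompatibilitySingletonFactory` discovers its metrics factory via `java.util.ServiceLoader`. If no provider for `MetricsRegionServerSourceFactory` is on the classpath (it is normally supplied by an HBase hadoop-compatibility jar), calling `next()` on an empty loader throws exactly this `NoSuchElementException`. A minimal sketch of that mechanism, using a hypothetical `CompatFactory` interface as a stand-in:

```java
import java.util.Iterator;
import java.util.NoSuchElementException;
import java.util.ServiceLoader;

// Hypothetical stand-in for MetricsRegionServerSourceFactory; no provider
// is registered under META-INF/services/, which mirrors a classpath that
// is missing the hadoop compatibility jar.
interface CompatFactory {}

public class ServiceLoaderDemo {
    public static void main(String[] args) {
        Iterator<CompatFactory> it = ServiceLoader.load(CompatFactory.class).iterator();
        try {
            // Roughly what CompatibilitySingletonFactory.getInstance attempts:
            CompatFactory factory = it.next();
        } catch (NoSuchElementException e) {
            // ServiceLoader's LazyIterator throws this when no provider exists,
            // matching the bottom of the stack trace above.
            System.out.println("no provider on classpath");
        }
    }
}
```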
<img width="1219" alt="企业微信截图_37195ea1-e328-4c8d-9b80-8446d4cc8d21"
src="https://github.com/apache/hudi/assets/10829744/79092b4c-83a3-49fe-9424-21ded170d629">
<img width="779" alt="企业微信截图_f3335208-7cbe-4678-b2c5-7cebe3501083"
src="https://github.com/apache/hudi/assets/10829744/d0d95fbb-26df-4d8a-be5e-3109299e9976">