waytoharish commented on issue #10859:
URL: https://github.com/apache/hudi/issues/10859#issuecomment-1996583272
Hi @CTTY, thanks for the response. I don't see that table being created anymore after moving those settings into the Flink configuration. However, I am now getting the error below, even though I never set the table type to COPY_ON_WRITE:
org.apache.flink.util.FlinkException: Global failure triggered by OperatorCoordinator for 'stream_write: default_database.customer_sample_hudi_001' (operator e0b36cd61693ea5ca8dfd979374228b1).
	at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder$LazyInitializedCoordinatorContext.failJob(OperatorCoordinatorHolder.java:600) ~[flink-runtime-1.17.0.jar:1.17.0]
	at org.apache.hudi.sink.StreamWriteOperatorCoordinator.lambda$start$0(StreamWriteOperatorCoordinator.java:196) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.sink.utils.NonThrownExecutor.handleException(NonThrownExecutor.java:142) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:133) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
	at java.lang.Thread.run(Thread.java:829) ~[?:?]
Caused by: org.apache.hudi.exception.HoodieException: Executor executes action [commits the instant 20240314112341150] error
	... 6 more
Caused by: org.apache.hudi.exception.HoodieNotSupportedException: Compaction is not supported on a CopyOnWrite table
	at org.apache.hudi.table.HoodieFlinkCopyOnWriteTable.scheduleCompaction(HoodieFlinkCopyOnWriteTable.java:322) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.client.BaseHoodieTableServiceClient.scheduleTableServiceInternal(BaseHoodieTableServiceClient.java:616) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.client.BaseHoodieTableServiceClient.scheduleTableService(BaseHoodieTableServiceClient.java:588) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.client.BaseHoodieWriteClient.scheduleTableService(BaseHoodieWriteClient.java:1228) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.client.BaseHoodieWriteClient.scheduleCompactionAtInstant(BaseHoodieWriteClient.java:986) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.client.BaseHoodieWriteClient.scheduleCompaction(BaseHoodieWriteClient.java:977) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.util.CompactionUtil.scheduleCompaction(CompactionUtil.java:63) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.sink.StreamWriteOperatorCoordinator.scheduleTableServices(StreamWriteOperatorCoordinator.java:460) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.sink.StreamWriteOperatorCoordinator.lambda$notifyCheckpointComplete$2(StreamWriteOperatorCoordinator.java:260) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:130) ~[hudi-flink1.17-bundle-0.14.0.jar:0.14.0]
	... 3 more
Here is my configuration:
options.put(FlinkOptions.HIVE_SYNC_ENABLED.key(), "true");
options.put(FlinkOptions.HIVE_SYNC_MODE.key(), "glue");
options.put(FlinkOptions.HIVE_SYNC_TABLE.key(), "customer_sample_hudi_002");
options.put(FlinkOptions.HIVE_SYNC_DB.key(), "default");
options.put(FlinkOptions.TABLE_TYPE.key(), HoodieTableType.MERGE_ON_READ.name());
options.put(FlinkOptions.PRECOMBINE_FIELD.key(), "ts");
options.put(FlinkOptions.RECORD_KEY_FIELD.key(), "uuid");
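One plausible cause of the mismatch above: Hudi reads the table type from the existing table's .hoodie/hoodie.properties at the base path, so if the table was first created as COPY_ON_WRITE, the MERGE_ON_READ writer option is ignored and the compaction scheduler fails. The sketch below is a minimal, dependency-free illustration of the options map; the string keys are what I believe the FlinkOptions constants resolve to in Hudi 0.14.0 and should be treated as assumptions, not verified values.

```java
import java.util.HashMap;
import java.util.Map;

public class HudiSinkOptions {

    // Builds the writer options shown in the comment, using plain string
    // keys (assumed equivalents of the FlinkOptions constants) so the
    // sketch compiles without the Hudi dependency.
    static Map<String, String> buildOptions() {
        Map<String, String> options = new HashMap<>();
        // Must match the type recorded in .hoodie/hoodie.properties of an
        // existing table; a CoW table here triggers the compaction error.
        options.put("table.type", "MERGE_ON_READ");
        options.put("hive_sync.enabled", "true");
        options.put("hive_sync.mode", "glue");
        options.put("hive_sync.table", "customer_sample_hudi_002");
        options.put("hive_sync.db", "default");
        options.put("precombine.field", "ts");
        options.put("hoodie.datasource.write.recordkey.field", "uuid");
        return options;
    }

    public static void main(String[] args) {
        System.out.println(buildOptions().get("table.type"));
    }
}
```

If the base path already holds a COPY_ON_WRITE table, deleting it (or pointing the job at a fresh path) before writing with these options should let the table be created as MERGE_ON_READ.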