wardlican opened a new issue, #11945: URL: https://github.com/apache/hudi/issues/11945
**Describe the problem you faced**

Hive sync fails when the Flink streaming write commits: `HiveSyncTool` cannot add the new partition to the Hive metastore, the metastore throws `MetaException(message:Invalid partition key & values; keys [], values [2024-09-14, ])`, and the whole job fails.

**To Reproduce**

Steps to reproduce the behavior:

1. Use Flink SQL to create a table with Hive sync enabled:

   ```sql
   CREATE TABLE t_cow7 (
     uuid VARCHAR(20),
     name VARCHAR(10),
     age INT,
     `partition` VARCHAR(20)
   )
   PARTITIONED BY (`partition`)
   WITH (
     'connector' = 'hudi',
     'path' = 'hdfs://xxx/tmp/hudi/t_cow7',
     'hive_sync.conf.dir' = '/usr/local/service/hive/conf',
     'table.type' = 'COPY_ON_WRITE',
     'hive_sync.enable' = 'true',
     'hive_sync.mode' = 'hms',
     'hive_sync.metastore.uris' = 'thrift://xxx:xxx',
     'hive_sync.partition_extractor_class' = 'org.apache.hudi.hive.MultiPartKeysValueExtractor'
   );
   ```

2. Insert data into the table; Hive sync runs when the write commits (a hypothetical example is sketched after the stacktrace).

**Expected behavior**

The partition is added to the Hive metastore without errors when the write commits.

**Environment Description**

* Hudi version : 0.14.1
* Spark version : 3.4.3
* Hive version : 3.1.3
* Hadoop version : 3.2.2
* Storage (HDFS/S3/GCS..) : HDFS
* Running on Docker? (yes/no) : no

**Stacktrace**

```
org.apache.flink.util.FlinkException: Global failure triggered by OperatorCoordinator for 'stream_write: hudi.t_cow6 -> Sink: clean_commits' (operator bc5c7abd7ea74e7bd5602e4e54d789eb).
	at org.apache.flink.runtime.operators.coordination.OperatorCoordinatorHolder$LazyInitializedCoordinatorContext.failJob(OperatorCoordinatorHolder.java:617)
	at org.apache.hudi.sink.StreamWriteOperatorCoordinator.lambda$start$0(StreamWriteOperatorCoordinator.java:196)
	at org.apache.hudi.sink.utils.NonThrownExecutor.handleException(NonThrownExecutor.java:142)
	at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:133)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)
Caused by: org.apache.hudi.exception.HoodieException: Executor executes action [handle end input event for instant 20240914130530380] error
	... 8 more
Caused by: org.apache.hudi.exception.HoodieException: Got runtime exception when hive syncing t_cow6
	at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:171)
	at org.apache.hudi.sink.StreamWriteOperatorCoordinator.doSyncHive(StreamWriteOperatorCoordinator.java:342)
	at org.apache.hudi.sink.StreamWriteOperatorCoordinator.syncHive(StreamWriteOperatorCoordinator.java:332)
	at org.apache.hudi.sink.StreamWriteOperatorCoordinator.handleEndInputEvent(StreamWriteOperatorCoordinator.java:452)
	at org.apache.hudi.sink.StreamWriteOperatorCoordinator.lambda$handleEventFromOperator$3(StreamWriteOperatorCoordinator.java:286)
	at org.apache.hudi.sink.utils.NonThrownExecutor.lambda$wrapAction$0(NonThrownExecutor.java:130)
	... 5 more
Caused by: org.apache.hudi.hive.HoodieHiveSyncException: Failed to sync partitions for table t_cow6
	at org.apache.hudi.hive.HiveSyncTool.syncAllPartitions(HiveSyncTool.java:418)
	at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:283)
	at org.apache.hudi.hive.HiveSyncTool.doSync(HiveSyncTool.java:180)
	at org.apache.hudi.hive.HiveSyncTool.syncHoodieTable(HiveSyncTool.java:168)
	... 10 more
Caused by: org.apache.hudi.hive.HoodieHiveSyncException: hudi.t_cow6 add partition failed
	at org.apache.hudi.hive.ddl.HMSDDLExecutor.addPartitionsToTable(HMSDDLExecutor.java:217)
	at org.apache.hudi.hive.HoodieHiveSyncClient.addPartitionsToTable(HoodieHiveSyncClient.java:115)
	at org.apache.hudi.hive.HiveSyncTool.syncPartitions(HiveSyncTool.java:460)
	at org.apache.hudi.hive.HiveSyncTool.syncAllPartitions(HiveSyncTool.java:414)
	... 13 more
Caused by: MetaException(message:Invalid partition key & values; keys [], values [2024-09-14, ])
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$add_partitions_req_result$add_partitions_req_resultStandardScheme.read(ThriftHiveMetastore.java)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$add_partitions_req_result$add_partitions_req_resultStandardScheme.read(ThriftHiveMetastore.java)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$add_partitions_req_result.read(ThriftHiveMetastore.java)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:86)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.recv_add_partitions_req(ThriftHiveMetastore.java:2488)
	at org.apache.hadoop.hive.metastore.api.ThriftHiveMetastore$Client.add_partitions_req(ThriftHiveMetastore.java:2475)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.add_partitions(HiveMetaStoreClient.java:697)
	at sun.reflect.GeneratedMethodAccessor145.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.invoke(RetryingMetaStoreClient.java:212)
	at com.sun.proxy.$Proxy67.add_partitions(Unknown Source)
	at sun.reflect.GeneratedMethodAccessor145.invoke(Unknown Source)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.hive.metastore.HiveMetaStoreClient$SynchronizedHandler.invoke(HiveMetaStoreClient.java:2775)
	at com.sun.proxy.$Proxy67.add_partitions(Unknown Source)
	at org.apache.hudi.hive.ddl.HMSDDLExecutor.addPartitionsToTable(HMSDDLExecutor.java:212)
	... 16 more
```
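The original report does not spell out step 2, so the following is a minimal sketch of an insert that would trigger the stream write and the Hive sync on commit. The row values are hypothetical; only the partition value `2024-09-14` is taken from the error message above.

```sql
-- Hypothetical trigger for the failing write. Row values are illustrative;
-- only the partition value '2024-09-14' appears in the MetaException above.
INSERT INTO t_cow7 VALUES
  ('id1', 'Danny', 23, '2024-09-14');
```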
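The innermost `MetaException` reports `keys [], values [2024-09-14, ]`: the metastore sees no partition keys on the table, while the sync sends two partition values (the second one empty). A hedged way to check whether the synced Hive table was registered without partition columns, using the table name from the stacktrace:

```sql
-- Run in Hive/Beeline to confirm whether the metastore table carries the
-- expected `partition` column as a partition key.
DESCRIBE FORMATTED t_cow6;
SHOW PARTITIONS t_cow6;
```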
