nbeeee opened a new issue, #8965:
URL: https://github.com/apache/hudi/issues/8965

   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at [email protected].
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   When writing to a Hudi table with multiple writers (optimistic concurrency control) and hoodie.write.lock.provider set to ZookeeperBasedLockProvider, the Spark upsert job fails while acquiring the lock with java.lang.NoSuchMethodError: org.apache.curator.CuratorZookeeperClient.startAdvancedTracer.
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   Run a Spark upsert with optimistic concurrency control and the ZookeeperBasedLockProvider enabled, using the following Hudi config:
   
   ```
   hoodie.bucket.index.num.buckets -> 20
   hoodie.datasource.hive_sync.database -> dwd_hudi
   hoodie.parquet.small.file.limit -> 104857600
   hoodie.simple.index.update.partition.path -> false
   hoodie.datasource.hive_sync.mode -> JDBC
   hoodie.copyonwrite.record.size.estimate -> 100
   hoodie.datasource.write.precombine.field -> time_stamp
   hoodie.datasource.hive_sync.partition_fields -> cdate
   hoodie.datasource.hive_sync.partition_extractor_class -> org.apache.hudi.hive.MultiPartKeysValueExtractor
   hoodie.cleaner.fileversions.retained -> 1
   hoodie.parquet.max.file.size -> 125829120
   hoodie.write.lock.hivemetastore.database -> dwd_hudi
   hoodie.write.lock.client.wait_time_ms_between_retry -> 2000
   hoodie.write.lock.client.num_retries -> 100
   hoodie.write.lock.zookeeper.url -> zknodes
   hoodie.write.lock.zookeeper.base_path -> /hudi_multiwriter
   hoodie.index.bucket.engine -> SIMPLE
   hoodie.bucket.index.hash.field -> group_id
   hoodie.datasource.hive_sync.table -> ods_dts_trade_sale_main_di
   hoodie.index.type -> BUCKET
   hoodie.clean.automatic -> true
   hoodie.datasource.write.operation -> upsert
   hoodie.write.lock.wait_time_ms -> 120000
   hoodie.datasource.hive_sync.enable -> true
   hoodie.datasource.write.recordkey.field -> group_id,company_id,business_id,saleno
   hoodie.table.name -> ods_dts_trade_sale_main_di
   hoodie.datasource.hive_sync.jdbcurl -> jdbc:hive2://header:10000
   hoodie.datasource.write.hive_style_partitioning -> true
   hoodie.cleaner.policy -> KEEP_LATEST_FILE_VERSIONS
   hoodie.combine.before.upsert -> false
   hoodie.write.lock.num_retries -> 100
   hoodie.storage.layout.partitioner.class -> org.apache.hudi.table.action.commit.SparkBucketIndexPartitioner
   hoodie.fail.on.timeline.archiving -> false
   hoodie.cleaner.policy.failed.writes -> LAZY
   hoodie.keep.max.commits -> 10
   hoodie.upsert.shuffle.parallelism -> 50
   hoodie.write.lock.hivemetastore.table -> ods_dts_trade_sale_main_di
   hoodie.write.lock.zookeeper.port -> 2181
   hoodie.cleaner.commits.retained -> 1
   hoodie.write.lock.provider -> org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider
   hoodie.keep.min.commits -> 2
   hoodie.datasource.write.partitionpath.field -> cdate
   hoodie.clean.async -> true
   hoodie.write.lock.wait_time_ms_between_retry -> 2000
   hoodie.write.concurrency.mode -> optimistic_concurrency_control
   ```
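   For reference, a minimal sketch of how a job like this would pass the options above through the Spark datasource write API (the SparkSession, input DataFrame, and base path below are placeholders; only the option keys and values come from the config dump above):
   
   ```scala
   // Minimal sketch of the failing write path, assuming the options above are
   // passed via the Spark datasource API. Input data and base path are
   // placeholders, not taken from the original job.
   import org.apache.spark.sql.{DataFrame, SaveMode, SparkSession}
   
   val spark: SparkSession = SparkSession.builder()
     .appName("hudi-occ-upsert-repro")
     .getOrCreate()
   
   val df: DataFrame = spark.read.parquet("/tmp/source_data") // placeholder source
   
   df.write
     .format("hudi")
     .option("hoodie.table.name", "ods_dts_trade_sale_main_di")
     .option("hoodie.datasource.write.operation", "upsert")
     .option("hoodie.datasource.write.recordkey.field", "group_id,company_id,business_id,saleno")
     .option("hoodie.datasource.write.precombine.field", "time_stamp")
     .option("hoodie.datasource.write.partitionpath.field", "cdate")
     .option("hoodie.index.type", "BUCKET")
     .option("hoodie.bucket.index.hash.field", "group_id")
     .option("hoodie.write.concurrency.mode", "optimistic_concurrency_control")
     .option("hoodie.write.lock.provider",
       "org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider")
     .option("hoodie.write.lock.zookeeper.url", "zknodes")
     .option("hoodie.write.lock.zookeeper.port", "2181")
     .option("hoodie.write.lock.zookeeper.base_path", "/hudi_multiwriter")
     .mode(SaveMode.Append)
     .save("/path/to/ods_dts_trade_sale_main_di") // placeholder base path
   ```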
   
   
   **Expected behavior**
   
   The upsert should acquire the Zookeeper write lock and complete normally, instead of failing with a NoSuchMethodError from Curator.
   
   **Environment Description**
   
   * Hudi version : 0.13.0
   
   * Spark version : 3.1
   
   * Hive version : 3.1
   
   * Hadoop version :
   
   * Storage (HDFS/S3/GCS..) :
   
   * Running on Docker? (yes/no) :
   
   
   **Additional context**
   ```
   23/06/14 15:58:42 INFO ConnectionStateManager: State change: CONNECTED
   23/06/14 15:58:42 ERROR ApplicationMaster: User class threw exception: java.lang.NoSuchMethodError: org.apache.curator.CuratorZookeeperClient.startAdvancedTracer(Ljava/lang/String;)Lorg/apache/curator/drivers/OperationTrace;
   java.lang.NoSuchMethodError: org.apache.curator.CuratorZookeeperClient.startAdvancedTracer(Ljava/lang/String;)Lorg/apache/curator/drivers/OperationTrace;
        at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:716)
        at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:484)
        at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:474)
        at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:454)
        at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44)
        at org.apache.curator.framework.recipes.locks.StandardLockInternalsDriver.createsTheLock(StandardLockInternalsDriver.java:54)
        at org.apache.curator.framework.recipes.locks.LockInternals.attemptLock(LockInternals.java:216)
        at org.apache.curator.framework.recipes.locks.InterProcessMutex.internalLock(InterProcessMutex.java:232)
        at org.apache.curator.framework.recipes.locks.InterProcessMutex.acquire(InterProcessMutex.java:108)
        at org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider.acquireLock(ZookeeperBasedLockProvider.java:144)
        at org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider.tryLock(ZookeeperBasedLockProvider.java:96)
        at org.apache.hudi.client.transaction.lock.LockManager.lock(LockManager.java:78)
        at org.apache.hudi.client.transaction.TransactionManager.beginTransaction(TransactionManager.java:59)
        at org.apache.hudi.client.BaseHoodieWriteClient.doInitTable(BaseHoodieWriteClient.java:1129)
        at org.apache.hudi.client.BaseHoodieWriteClient.initTable(BaseHoodieWriteClient.java:1169)
        at org.apache.hudi.client.BaseHoodieWriteClient.initTable(BaseHoodieWriteClient.java:1198)
        at org.apache.hudi.client.SparkRDDWriteClient.upsert(SparkRDDWriteClient.java:137)
        at org.apache.hudi.DataSourceUtils.doWriteOperation(DataSourceUtils.java:206)
        at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:363)
        at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:150)
        at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:46)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
        at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:90)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
        at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
        at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
        at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
        at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
        at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:132)
        at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:131)
        at org.apache.spark.sql.DataFrameWriter.$anonfun$runCommand$1(DataFrameWriter.scala:989)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
        at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:989)
        at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:438)
        at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:415)
        at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:293)
   ```
   
   
   
   After decompiling the hudi-spark3.1-bundle_2.12-0.13.0.jar package, we found that the CuratorZookeeperClient class it contains does not have a startAdvancedTracer method.
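   A quick way to confirm which Curator is actually winning at runtime (a diagnostic sketch, not part of the original report; the class and method names are the real Curator ones from the error above) is to check, on the same classpath as the failing job, which jar provides CuratorZookeeperClient and whether it declares startAdvancedTracer:
   
   ```scala
   // Diagnostic sketch: run e.g. in spark-shell launched with the same jars as
   // the failing job. It reports where CuratorZookeeperClient is loaded from and
   // whether that version declares the startAdvancedTracer method.
   val clazz = Class.forName("org.apache.curator.CuratorZookeeperClient")
   
   // The jar (or directory) that won class loading for this class.
   val source = Option(clazz.getProtectionDomain.getCodeSource)
     .map(_.getLocation.toString)
     .getOrElse("unknown (no code source)")
   println(s"CuratorZookeeperClient loaded from: $source")
   
   // Does this Curator build expose the method curator-framework is calling?
   val hasTracer = clazz.getMethods.exists(_.getName == "startAdvancedTracer")
   println(s"startAdvancedTracer present: $hasTracer")
   ```
   
   Since a NoSuchMethodError means the class that was resolved at runtime lacks a method the caller was compiled against, if the check shows CuratorZookeeperClient coming from a different (older) Curator jar than the curator-framework in use, for example one provided by the Hadoop/Hive classpath, that version mismatch would explain the failure.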
   
   **Stacktrace**
   
   The full stack trace is included under **Additional context** above.
   
   

