EdwinGuo opened a new issue #1455: [SUPPORT] Hudi upsert runs into exception: java.lang.NoSuchMethodError: java.lang.Math.floorMod(JI)I
URL: https://github.com/apache/incubator-hudi/issues/1455
 
 
   **_Tips before filing an issue_**

   - Have you gone through our [FAQs](https://cwiki.apache.org/confluence/display/HUDI/FAQ)?
     Yes
   - Join the mailing list to engage in conversations and get faster support at [email protected].
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   **Describe the problem you faced**
   Not able to upsert data after upgrading the Hudi jar to 0.5.2; the write fails with: java.lang.NoSuchMethodError: java.lang.Math.floorMod(JI)I
   
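   For context: the descriptor `(JI)I` in the error denotes `Math.floorMod(long, int)`, an overload that only exists on Java 9 and later; Java 8 has only `floorMod(int, int)` and `floorMod(long, long)`, so bytecode compiled against a newer JDK can fail to link on a Java 8 runtime in exactly this way. A minimal sketch of a Java 8-safe equivalent (the helper name is hypothetical, not Hudi code):

   ```scala
   // Hypothetical Java 8-safe stand-in for Math.floorMod(long, int).
   // Widening the Int divisor to Long selects floorMod(long, long), which has
   // existed since Java 8; the result lies in [0, n), so narrowing back to
   // Int cannot overflow.
   def floorModCompat(hash: Long, n: Int): Int =
     Math.floorMod(hash, n.toLong).toInt

   floorModCompat(-7L, 3)  // 2, the same value floorMod(long, int) returns
   ```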
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   From the console, open a spark-shell:

   ```
   spark-shell --master yarn --deploy-mode client --driver-memory 512M \
     --num-executors 1 --executor-memory 12G --executor-cores 5 \
     --jars /usr/lib/spark/jars/httpclient-4.5.9.jar,s3://my-s3-bucket/hudi-spark-bundle_2.11-0.6.0-SNAPSHOT.jar,/usr/share/aws/hmclient/lib/aws-glue-datacatalog-spark-client-*,/usr/share/aws/aws-java-sdk/aws-java-sdk-glue-* \
     --conf "spark.serializer=org.apache.spark.serializer.KryoSerializer"
   ```
   
   ```scala
   val hudi_options = Map(
     "hoodie.table.name" -> "table1",
     "hoodie.datasource.write.recordkey.field" -> "c1,c2,c3,c4",
     "hoodie.datasource.write.partitionpath.field" -> "c5",
     "hoodie.datasource.write.table.name" -> "table1",
     "hoodie.datasource.write.operation" -> "upsert",
     "hoodie.datasource.write.precombine.field" -> "c6",
     "hoodie.datasource.write.keygenerator.class" -> "org.apache.hudi.keygen.ComplexKeyGenerator",
     "hoodie.datasource.hive_sync.jdbcurl" -> "jdbc:hive2://localhost:10000",
     "hoodie.datasource.hive_sync.database" -> "db",
     "hoodie.datasource.hive_sync.enable" -> "true",
     "hoodie.datasource.hive_sync.table" -> "table1",
     "hoodie.datasource.hive_sync.partition_fields" -> "c5",
     "hoodie.datasource.hive_sync.partition_extractor_class" -> "org.apache.hudi.hive.MultiPartKeysValueExtractor",
     "hoodie.upsert.shuffle.parallelism" -> "20",
     "hoodie.insert.shuffle.parallelism" -> "20")

   val df = spark.sparkContext.parallelize(List(
     ("x1", "x2", "x3", "x4", "x5", 100, "x7"),
     ("y1", "y2", "y3", "y4", "y5", 100, "y7")
   )).toDF("c1", "c2", "c3", "c4", "c5", "c6", "c7")

   df.write.format("hudi").options(hudi_options).mode("append").save("s3://my-write-bucket/prefix1/prefix2")
   ```
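   A quick way to confirm the runtime side of the mismatch from the same spark-shell session (a hedged diagnostic sketch; it probes the driver JVM and assumes the executors run the same Java, which is the EMR default):

   ```scala
   // Print the driver's Java version, then reflectively look up
   // Math.floorMod(long, int). On Java 8 the lookup throws
   // NoSuchMethodException, mirroring the NoSuchMethodError raised when
   // JDK 9+-compiled bytecode tries to link against it.
   println(s"java.version = ${System.getProperty("java.version")}")
   try {
     classOf[Math].getMethod("floorMod", classOf[Long], classOf[Int])
     println("Math.floorMod(long, int) present (Java 9+ runtime)")
   } catch {
     case _: NoSuchMethodException =>
       println("Math.floorMod(long, int) missing (Java 8 runtime)")
   }
   ```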
   
   **Expected behavior**

   The data should be upserted to S3 successfully, with no exception.
   
   **Environment Description**

   Running Spark with Hudi on AWS EMR 5.29.0, with a freshly compiled Hudi jar:
   https://github.com/apache/incubator-hudi/commit/41202da7788193da77f1ae4b784127bb93eaae2c

   * Hudi version : 0.5.2 (commit 41202da7788193da77f1ae4b784127bb93eaae2c)
   * Spark version : 2.4.4
   * Hive version : 2.3.6
   * Hadoop version : 2.8.5
   * Storage (HDFS/S3/GCS..) : S3
   * Running on Docker? (yes/no) : no
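   Given this environment, the likely cause is a build/runtime JDK mismatch: on JDK 9+, `javac` resolves a `Math.floorMod(<long>, <int>)` call to the new `(long, int)` overload unless the build pins `--release 8`, and the resulting bytecode cannot link on EMR's Java 8. A sketch for inspecting the shipped bundle from the same shell, reading the class-file version of the failing class (major 52 = Java 8, 53 = Java 9, 55 = Java 11; note that a major of 52 alone does not rule out cross-compilation against a newer JDK's class library):

   ```scala
   // Hypothetical probe: read the class-file version of the failing class
   // directly from the driver classpath.
   import java.io.DataInputStream

   val res    = "/org/apache/hudi/index/bloom/BucketizedBloomCheckPartitioner.class"
   val stream = getClass.getResourceAsStream(res)
   require(stream != null, s"$res not found on the driver classpath")
   val in = new DataInputStream(stream)
   try {
     val magic = in.readInt()            // always 0xCAFEBABE
     val minor = in.readUnsignedShort()
     val major = in.readUnsignedShort()
     println(f"magic=0x$magic%08X minor=$minor major=$major")
   } finally in.close()
   ```

   If the bundle does reference the Java 9+ overload, rebuilding it with a JDK 8 toolchain (or an equivalent `--release 8` javac setting) should restore linkage on the cluster.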
   
   
   **Stacktrace**
   
   ```
   
     at org.apache.hudi.client.HoodieWriteClient.upsert(HoodieWriteClient.java:193)
     at org.apache.hudi.DataSourceUtils.doWriteOperation(DataSourceUtils.java:206)
     at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:144)
     at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:108)
     at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
     at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:86)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:131)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:127)
     at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:156)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:152)
     at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:127)
     at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:83)
     at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:83)
     at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
     at org.apache.spark.sql.DataFrameWriter$$anonfun$runCommand$1.apply(DataFrameWriter.scala:676)
     at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:84)
     at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:165)
     at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:74)
     at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:676)
     at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:285)
     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:271)
     at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:229)
     ... 49 elided
   Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 44 in stage 11.0 failed 4 times, most recent failure: Lost task 44.3 in stage 11.0 (TID 975, ip-10-81-135-85.ec2.internal, executor 6): java.lang.NoSuchMethodError: java.lang.Math.floorMod(JI)I
       at org.apache.hudi.index.bloom.BucketizedBloomCheckPartitioner.getPartition(BucketizedBloomCheckPartitioner.java:148)
       at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151)
       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
       at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
       at org.apache.spark.scheduler.Task.run(Task.scala:123)
       at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
       at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
       at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
       at java.lang.Thread.run(Thread.java:748)

   Driver stacktrace:
     at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:2041)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2029)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:2028)
     at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
     at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
     at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2028)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:966)
     at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:966)
     at scala.Option.foreach(Option.scala:257)
     at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:966)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2262)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2211)
     at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2200)
     at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
     at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:777)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2061)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2082)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2101)
     at org.apache.spark.SparkContext.runJob(SparkContext.scala:2126)
     at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:945)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.RDD.collect(RDD.scala:944)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.PairRDDFunctions$$anonfun$countByKey$1.apply(PairRDDFunctions.scala:370)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
     at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
     at org.apache.spark.rdd.RDD.withScope(RDD.scala:363)
     at org.apache.spark.rdd.PairRDDFunctions.countByKey(PairRDDFunctions.scala:369)
     at org.apache.spark.api.java.JavaPairRDD.countByKey(JavaPairRDD.scala:312)
     at org.apache.hudi.table.WorkloadProfile.buildProfile(WorkloadProfile.java:67)
     at org.apache.hudi.table.WorkloadProfile.<init>(WorkloadProfile.java:59)
     at org.apache.hudi.client.HoodieWriteClient.upsertRecordsInternal(HoodieWriteClient.java:470)
     at org.apache.hudi.client.HoodieWriteClient.upsert(HoodieWriteClient.java:188)
     ... 73 more
   Caused by: java.lang.NoSuchMethodError: java.lang.Math.floorMod(JI)I
     at org.apache.hudi.index.bloom.BucketizedBloomCheckPartitioner.getPartition(BucketizedBloomCheckPartitioner.java:148)
     at org.apache.spark.shuffle.sort.BypassMergeSortShuffleWriter.write(BypassMergeSortShuffleWriter.java:151)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
     at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
     at org.apache.spark.scheduler.Task.run(Task.scala:123)
     at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:408)
     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:414)
     at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
     at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
     at java.lang.Thread.run(Thread.java:748)
   ```
