cbomgit opened a new issue, #10785:
URL: https://github.com/apache/hudi/issues/10785

   **_Tips before filing an issue_**
   
   - Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
   
   - Join the mailing list to engage in conversations and get faster support at [email protected].
   
   - If you have triaged this as a bug, then file an [issue](https://issues.apache.org/jira/projects/HUDI/issues) directly.
   
   **Describe the problem you faced**
   
   I have a job that combs through a large dataset and deletes records. Recently, this job started failing during the cleaning stage.
   
   ![Screenshot 2024-02-29 at 8 55 25 AM](https://github.com/apache/hudi/assets/1713621/866575f2-bd87-4fa9-bc9b-ec169ac21c17)
   
   The only exception I see in the executor logs is the following:
   
   ```
   User class threw exception: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 360.0 failed 4 times, most recent failure: Lost task 0.3 in stage 360.0 (TID 293463) (ip-10-0-118-226.ec2.internal executor 33): com.esotericsoftware.kryo.KryoException: java.util.ConcurrentModificationException
   Serialization trace:
   classes (sun.misc.Launcher$AppClassLoader)
   classloader (java.security.ProtectionDomain)
   cachedPDs (javax.security.auth.SubjectDomainCombiner)
   combiner (java.security.AccessControlContext)
   acc (org.apache.spark.util.MutableURLClassLoader)
   classLoader (org.apache.hadoop.conf.Configuration)
   conf (com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem)
   this$0 (com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem$1)
   at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:101)
   at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:508)
   at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:575)
   at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:79)
   at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:508)
   at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:651)
   at com.esotericsoftware.kryo.serializers.MapSerializer.write(MapSerializer.java:106)
   at com.esotericsoftware.kryo.serializers.MapSerializer.write(MapSerializer.java:39)
   at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:575)
   at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:79)
   at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:508)
   at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:575)
   at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:79)
   at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:508)
   at com.esotericsoftware.kryo.Kryo.writeObjectOrNull(Kryo.java:629)
   at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:86)
   at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:508)
   at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:575)
   at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:79)
   at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:508)
   at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:575)
   at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:79)
   at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:508)
   at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:575)
   at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:79)
   at com.esotericsoftware.kryo.serializers.FieldSerializer.write(FieldSerializer.java:508)
   at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:651)
   at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:361)
   at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:302)
   at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:651)
   at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:361)
   at com.esotericsoftware.kryo.serializers.DefaultArraySerializers$ObjectArraySerializer.write(DefaultArraySerializers.java:302)
   at com.esotericsoftware.kryo.Kryo.writeClassAndObject(Kryo.java:651)
   at org.apache.spark.serializer.KryoSerializerInstance.serialize(KryoSerializer.scala:388)
   at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:551)
   at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   at java.lang.Thread.run(Thread.java:750)
   Caused by: java.util.ConcurrentModificationException
   at java.util.Vector$Itr.checkForComodification(Vector.java:1212)
   at java.util.Vector$Itr.next(Vector.java:1165)
   at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:99)
   at com.esotericsoftware.kryo.serializers.CollectionSerializer.write(CollectionSerializer.java:40)
   at com.esotericsoftware.kryo.Kryo.writeObject(Kryo.java:575)
   at com.esotericsoftware.kryo.serializers.ObjectField.write(ObjectField.java:79)
   ... 37 more
   Driver stacktrace:
   at org.apache.spark.scheduler.DAGScheduler.failJobAndIndependentStages(DAGScheduler.scala:2610)
   at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2(DAGScheduler.scala:2559)
   at org.apache.spark.scheduler.DAGScheduler.$anonfun$abortStage$2$adapted(DAGScheduler.scala:2558)
   at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:62)
   at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:55)
   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:49)
   at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:2558)
   at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1(DAGScheduler.scala:1200)
   at org.apache.spark.scheduler.DAGScheduler.$anonfun$handleTaskSetFailed$1$adapted(DAGScheduler.scala:1200)
   at scala.Option.foreach(Option.scala:407)
   at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:1200)
   at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:2798)
   at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2740)
   at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:2729)
   at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
   at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:978)
   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2215)
   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2236)
   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2255)
   at org.apache.spark.SparkContext.runJob(SparkContext.scala:2280)
   at org.apache.spark.rdd.RDD.$anonfun$collect$1(RDD.scala:1030)
   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
   at org.apache.spark.rdd.RDD.withScope(RDD.scala:414)
   at org.apache.spark.rdd.RDD.collect(RDD.scala:1029)
   at org.apache.spark.api.java.JavaRDDLike.collect(JavaRDDLike.scala:362)
   at org.apache.spark.api.java.JavaRDDLike.collect$(JavaRDDLike.scala:361)
   at org.apache.spark.api.java.AbstractJavaRDDLike.collect(JavaRDDLike.scala:45)
   at org.apache.hudi.client.common.HoodieSparkEngineContext.map(HoodieSparkEngineContext.java:103)
   at org.apache.hudi.metadata.FileSystemBackedTableMetadata.getAllPartitionPaths(FileSystemBackedTableMetadata.java:85)
   at org.apache.hudi.table.action.clean.CleanPlanner.getPartitionPathsForFullCleaning(CleanPlanner.java:214)
   at org.apache.hudi.table.action.clean.CleanPlanner.getPartitionPathsToClean(CleanPlanner.java:135)
   at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.requestClean(CleanPlanActionExecutor.java:100)
   at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.requestClean(CleanPlanActionExecutor.java:141)
   at org.apache.hudi.table.action.clean.CleanPlanActionExecutor.execute(CleanPlanActionExecutor.java:166)
   at org.apache.hudi.table.HoodieSparkCopyOnWriteTable.scheduleCleaning(HoodieSparkCopyOnWriteTable.java:204)
   at org.apache.hudi.client.BaseHoodieWriteClient.scheduleTableServiceInternal(BaseHoodieWriteClient.java:1353)
   at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:864)
   at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:837)
   at org.apache.hudi.client.BaseHoodieWriteClient.clean(BaseHoodieWriteClient.java:891)
   at org.apache.hudi.client.BaseHoodieWriteClient.autoCleanOnCommit(BaseHoodieWriteClient.java:614)
   at org.apache.hudi.client.BaseHoodieWriteClient.postCommit(BaseHoodieWriteClient.java:533)
   at org.apache.hudi.client.BaseHoodieWriteClient.commitStats(BaseHoodieWriteClient.java:236)
   at org.apache.hudi.client.SparkRDDWriteClient.commit(SparkRDDWriteClient.java:122)
   at org.apache.hudi.HoodieSparkSqlWriter$.commitAndPerformPostOperations(HoodieSparkSqlWriter.scala:678)
   at org.apache.hudi.HoodieSparkSqlWriter$.write(HoodieSparkSqlWriter.scala:313)
   at org.apache.hudi.DefaultSource.createRelation(DefaultSource.scala:165)
   at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:45)
   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
   at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
   at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:115)
   at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
   at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
   at org.apache.spark.sql.execution.SQLExecution$.executeQuery$1(SQLExecution.scala:110)
   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:135)
   at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:107)
   at org.apache.spark.sql.execution.SQLExecution$.withTracker(SQLExecution.scala:232)
   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:135)
   at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:253)
   at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:134)
   at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:775)
   at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:68)
   at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:112)
   at org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.applyOrElse(QueryExecution.scala:108)
   at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:519)
   at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:83)
   at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:519)
   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:30)
   at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
   at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
   at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:30)
   at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:495)
   at org.apache.spark.sql.execution.QueryExecution.eagerlyExecuteCommands(QueryExecution.scala:108)
   at org.apache.spark.sql.execution.QueryExecution.commandExecuted$lzycompute(QueryExecution.scala:95)
   at org.apache.spark.sql.execution.QueryExecution.commandExecuted(QueryExecution.scala:93)
   at org.apache.spark.sql.execution.QueryExecution.assertCommandExecuted(QueryExecution.scala:136)
   at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:848)
   at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:382)
   at org.apache.spark.sql.DataFrameWriter.saveInternal(DataFrameWriter.scala:355)
   at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:239)
   ```
   
   Unfortunately, I do not see anything more detailed in the logs, but the driver stacktrace (`CleanPlanner.getPartitionPathsToClean` calling `FileSystemBackedTableMetadata.getAllPartitionPaths`) shows that the failure occurs during the cleaning stage, while the clean planner is listing partition paths.
   
   **To Reproduce**
   
   Steps to reproduce the behavior:
   
   1. Run the job that selects records to delete. The table has many partitions; to avoid S3 throttling, we load paths only for specific partitions, capped at a maximum number.
   2. Select records to delete from the data loaded for the selected partitions.
   3. Commit a delete against the table.
   
   
   These are the deletion options that we use (a sketch of the corresponding write call follows the block):
   
   ```
   Computed delete options:
   hoodie.copyonwrite.record.size.estimate -> 9385
   hoodie.datasource.write.precombine.field -> timestamp
   hoodie.datasource.write.payload.class -> org.apache.hudi.common.model.EmptyHoodieRecordPayload
   hoodie.bloom.index.filter.dynamic.max.entries -> 14998
   hoodie.cleaner.fileversions.retained -> 2
   hoodie.cleaner.parallelism -> 200
   hoodie.delete.shuffle.parallelism -> 200
   hoodie.metadata.enable -> false
   hoodie.clean.automatic -> true
   hoodie.datasource.write.operation -> delete
   hoodie.datasource.write.recordkey.field -> timestamp,impressionId,bidId
   hoodie.datasource.write.table.type -> COPY_ON_WRITE
   hoodie.datasource.write.hive_style_partitioning -> false
   hoodie.datasource.write.partitions.to.delete -> 
   hoodie.cleaner.policy -> KEEP_LATEST_FILE_VERSIONS
   hoodie.datasource.write.reconcile.schema -> true
   hoodie.datasource.write.keygenerator.class -> org.apache.hudi.keygen.ComplexKeyGenerator
   hoodie.upsert.shuffle.parallelism -> 200
   hoodie.datasource.write.partitionpath.field -> hour,region,experimentCode
   hoodie.bloom.index.filter.type -> DYNAMIC_V0
   ```
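   
   For illustration, here is a minimal Scala sketch of the delete commit described in the steps above, issued through the Spark datasource API. The option values mirror the list above; `toDelete` (the DataFrame of records selected for deletion), `basePath`, and `tableName` are hypothetical placeholders, not names from our job:
   
   ```scala
   import org.apache.spark.sql.{DataFrame, SaveMode}
   
   // Hypothetical sketch: commit a Hudi delete for the records in `toDelete`.
   // `basePath` and `tableName` are placeholders; the options mirror this issue.
   def deleteRecords(toDelete: DataFrame, basePath: String, tableName: String): Unit = {
     toDelete.write
       .format("hudi")
       .option("hoodie.table.name", tableName)
       .option("hoodie.datasource.write.operation", "delete")
       .option("hoodie.datasource.write.table.type", "COPY_ON_WRITE")
       .option("hoodie.datasource.write.recordkey.field", "timestamp,impressionId,bidId")
       .option("hoodie.datasource.write.partitionpath.field", "hour,region,experimentCode")
       .option("hoodie.datasource.write.keygenerator.class", "org.apache.hudi.keygen.ComplexKeyGenerator")
       .option("hoodie.datasource.write.precombine.field", "timestamp")
       .option("hoodie.datasource.write.payload.class", "org.apache.hudi.common.model.EmptyHoodieRecordPayload")
       .option("hoodie.clean.automatic", "true")                     // clean runs synchronously after the commit
       .option("hoodie.cleaner.policy", "KEEP_LATEST_FILE_VERSIONS")
       .option("hoodie.cleaner.fileversions.retained", "2")
       .option("hoodie.cleaner.parallelism", "200")
       .option("hoodie.metadata.enable", "false")                    // clean planning lists partitions from the filesystem
       .mode(SaveMode.Append)
       .save(basePath)
   }
   ```
   
   With `hoodie.clean.automatic=true` and the metadata table disabled, the clean planner lists all partition paths directly from S3 (the `FileSystemBackedTableMetadata.getAllPartitionPaths` frame in the stacktrace), which is where the Kryo serialization failure surfaces.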
   
   **Expected behavior**
   
   The delete should succeed, and the cleaner should run synchronously without error.
   
   **Environment Description**
   
   * Hudi version : 0.11.0
   
   * Spark version : 3.2.1
   
   * Hive version : 3.1.3
   
   * Hadoop version : 3.2.1
   
   * Storage (HDFS/S3/GCS..) : s3
   
   * Running on Docker? (yes/no) : no. Running on EMR 6.7.1
   
   
   **Additional context**
   
   The job is submitted with the following args (an equivalent `spark-submit` command line follows the block):
   
   ```
   "executionArgs": [
       "--deploy-mode", "cluster",
       "--conf", "spark.driver.cores=5",
       "--conf", "spark.executor.cores=5",
       "--conf", "spark.driver.memory=40g",
       "--conf", "spark.executor.memory=40g",
       "--conf", "spark.executor.instances=95",
       "--conf", "spark.default.parallelism=950",
       "--conf", "spark.sql.shuffle.partitions=950",
       "--conf", "spark.yarn.maxAppAttempts=1",
       "--conf", "spark.hadoop.fs.s3a.acl.default=BucketOwnerFullControl",
       "--jars", "/usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar"
   ],
   ```
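   
   For readability, the same submission expressed as a single `spark-submit` command (the entry class and application jar are not included in the issue, so placeholders are shown):
   
   ```
   spark-submit \
     --deploy-mode cluster \
     --conf spark.driver.cores=5 \
     --conf spark.executor.cores=5 \
     --conf spark.driver.memory=40g \
     --conf spark.executor.memory=40g \
     --conf spark.executor.instances=95 \
     --conf spark.default.parallelism=950 \
     --conf spark.sql.shuffle.partitions=950 \
     --conf spark.yarn.maxAppAttempts=1 \
     --conf spark.hadoop.fs.s3a.acl.default=BucketOwnerFullControl \
     --jars /usr/lib/hudi/hudi-spark-bundle.jar,/usr/lib/spark/external/lib/spark-avro.jar \
     --class <main-class> <application-jar>
   ```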
   **Stacktrace**
   
   See the full stacktrace in the problem description above.
   
   

