jerqi commented on PR #2422: URL: https://github.com/apache/uniffle/pull/2422#issuecomment-2789413251
> The [docker test](https://github.com/apache/uniffle/actions/runs/14351459509/job/40231036642?pr=2422) failed after #1919.
>
> Error msg:
>
> ```
> 25/04/09 07:45:41 ERROR FileFormatWriter: Aborting job a6def39c-4709-4752-944f-dd8113d24d4a.
> org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 7.0 failed 4 times, most recent failure: Lost task 0.3 in stage 7.0 (TID 117) (172.18.0.11 executor 3): org.apache.spark.SparkException: [TASK_WRITE_FAILED] Task failed while writing rows to file:/shared/result.csv.
>     at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:774)
>     at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:420)
>     at org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:100)
>     at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:890)
>     at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:890)
>     at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
>     at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
>     at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
>     at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:93)
>     at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
>     at org.apache.spark.scheduler.Task.run(Task.scala:141)
>     at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$4(Executor.scala:620)
>     at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally(SparkErrorUtils.scala:64)
>     at org.apache.spark.util.SparkErrorUtils.tryWithSafeFinally$(SparkErrorUtils.scala:61)
>     at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:94)
>     at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:623)
>     at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
>     at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>     at java.base/java.lang.Thread.run(Unknown Source)
> Caused by: java.lang.reflect.InvocationTargetException
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>     at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>     at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>     at java.base/java.lang.reflect.Method.invoke(Unknown Source)
>     at org.apache.uniffle.shaded.io.netty.util.internal.CleanerJava9.freeDirectBuffer(CleanerJava9.java:88)
>     at org.apache.uniffle.shaded.io.netty.util.internal.PlatformDependent.freeDirectBuffer(PlatformDependent.java:521)
>     at org.apache.uniffle.common.util.RssUtils.releaseByteBuffer(RssUtils.java:425)
>     at org.apache.uniffle.client.impl.ShuffleReadClientImpl.close(ShuffleReadClientImpl.java:335)
>     at org.apache.spark.shuffle.reader.RssShuffleDataIterator.cleanup(RssShuffleDataIterator.java:218)
>     at org.apache.spark.shuffle.reader.RssShuffleReader$MultiPartitionIterator.lambda$new$0(RssShuffleReader.java:293)
>     at org.apache.spark.shuffle.FunctionUtils$1.apply(FunctionUtils.java:33)
>     at scala.Function0.apply$mcV$sp(Function0.scala:39)
>     at org.apache.spark.util.CompletionIterator$$anon$1.completion(CompletionIterator.scala:47)
>     at org.apache.spark.util.CompletionIterator.hasNext(CompletionIterator.scala:36)
>     at org.apache.spark.shuffle.reader.RssShuffleReader$MultiPartitionIterator.hasNext(RssShuffleReader.java:316)
>     at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:37)
>     at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:460)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage5.sort_addToSorter_0$(Unknown Source)
>     at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage5.processNext(Unknown Source)
>     at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
>     at org.apache.spark.sql.execution.WholeStageCodegenEvaluatorFactory$WholeStageCodegenPartitionEvaluator$$anon$1.hasNext(WholeStageCodegenEvaluatorFactory.scala:43)
>     at org.apache.spark.sql.execution.datasources.FileFormatDataWriter.writeWithIterator(FileFormatDataWriter.scala:91)
>     at org.apache.spark.sql.execution.datasources.FileFormatWriter$.$anonfun$executeTask$1(FileFormatWriter.scala:403)
>     at org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1397)
>     at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:410)
>     ... 17 more
> Caused by: java.lang.IllegalArgumentException: duplicate or slice
>     at jdk.unsupported/sun.misc.Unsafe.invokeCleaner(Unknown Source)
>     ... 42 more
> ```
>
> I don't know if it is related to #2082.

It should be related to that issue; it reports a similar error. Since the RPC framework changed from gRPC to Netty, the JDK 11 issue is now triggered.
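For context on the `duplicate or slice` error: on JDK 9+, `sun.misc.Unsafe.invokeCleaner` (the reflective path Netty's `CleanerJava9.freeDirectBuffer` takes) rejects any `ByteBuffer` that is a view of another buffer, so passing a sliced or duplicated direct buffer down `RssUtils.releaseByteBuffer` would fail exactly as in the trace above. A minimal standalone sketch that reproduces the exception (the class name `InvokeCleanerRepro` is mine; this is an illustration, not code from the Uniffle repository):

```java
import java.lang.reflect.Field;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.nio.ByteBuffer;

// Standalone repro of the "duplicate or slice" failure on JDK 9+.
// It mirrors the reflective Unsafe.invokeCleaner call that Netty's
// CleanerJava9 uses to free direct memory.
public class InvokeCleanerRepro {
  public static void main(String[] args) throws Exception {
    Class<?> unsafeClass = Class.forName("sun.misc.Unsafe");
    Field theUnsafe = unsafeClass.getDeclaredField("theUnsafe");
    theUnsafe.setAccessible(true);
    Object unsafe = theUnsafe.get(null);
    Method invokeCleaner = unsafeClass.getMethod("invokeCleaner", ByteBuffer.class);

    ByteBuffer original = ByteBuffer.allocateDirect(64);
    ByteBuffer view = original.slice(); // derived view; duplicate() behaves the same

    try {
      // invokeCleaner refuses views, because freeing a view would also
      // invalidate the buffer it was derived from.
      invokeCleaner.invoke(unsafe, view);
    } catch (InvocationTargetException e) {
      // Prints: java.lang.IllegalArgumentException: duplicate or slice
      System.out.println(e.getCause());
    }

    // Freeing the original allocation succeeds.
    invokeCleaner.invoke(unsafe, original);
  }
}
```

If that is the failure mode here, the release path would need to free the originally allocated buffer rather than a slice or duplicate handed back by the Netty transport; the actual root cause in #2082 may of course differ.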
