zhli1142015 opened a new issue, #6039:
URL: https://github.com/apache/incubator-gluten/issues/6039

   ### Backend
   
   VL (Velox)
   
   ### Bug description
   
   The CI run below failed in the `Gluten - do not remove non-v1writes sort and project` and `Gluten - SPARK-35106` tests. Per the log, an earlier task hit `CommitDeniedException` ("the driver did not authorize commit"), the aborted job could not delete its staging files under `spark-warehouse/t`, and the subsequent tests then fail with `LOCATION_ALREADY_EXISTS` when recreating the managed table `t`:

https://github.com/apache/incubator-gluten/actions/runs/9458630743/job/26056337772?pr=6036
   
   ```
   2024-06-11T04:37:04.9492156Z - Gluten - Cleanup staging files if job is 
failed
   2024-06-11T04:37:04.9492686Z 04:37:04.923 ERROR org.apache.spark.util.Utils: 
Aborting task
   2024-06-11T04:37:04.9493622Z 
org.apache.spark.executor.CommitDeniedException: 
attempt_202406102137047101565704628604290_0470_m_000001_501: Not committed 
because the driver did not authorize commit
   2024-06-11T04:37:04.9494717Z         at 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:85)
   2024-06-11T04:37:04.9495693Z         at 
org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:279)
   2024-06-11T04:37:04.9496825Z         at 
org.apache.spark.sql.execution.SparkWriteFilesCommitProtocol.$anonfun$commitTask$1(SparkWriteFilesCommitProtocol.scala:91)
   2024-06-11T04:37:04.9497672Z         at 
org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:640)
   2024-06-11T04:37:04.9498493Z         at 
org.apache.spark.sql.execution.SparkWriteFilesCommitProtocol.commitTask(SparkWriteFilesCommitProtocol.scala:91)
   2024-06-11T04:37:04.9499943Z         at 
org.apache.spark.sql.execution.VeloxColumnarWriteFilesRDD.$anonfun$compute$2(VeloxColumnarWriteFilesExec.scala:224)
   2024-06-11T04:37:04.9501403Z         at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
   2024-06-11T04:37:04.9504139Z E20240611 04:37:04.924280 80034 
Exceptions.h:67] Line: 
/__w/incubator-gluten/incubator-gluten/ep/build-velox/build/velox_ep/velox/exec/Task.cpp:1858,
 Function:terminate, Expression:  Cancelled, Source: RUNTIME, ErrorCode: 
INVALID_STATE
   2024-06-11T04:37:04.9506250Z         at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1563)
   2024-06-11T04:37:04.9507675Z         at 
org.apache.spark.sql.execution.VeloxColumnarWriteFilesRDD.compute(VeloxColumnarWriteFilesExec.scala:207)
   2024-06-11T04:37:04.9509012Z         at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
   2024-06-11T04:37:04.9509815Z         at 
org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
   2024-06-11T04:37:04.9510616Z         at 
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
   2024-06-11T04:37:04.9511769Z         at 
org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
   2024-06-11T04:37:04.9512773Z         at 
org.apache.spark.scheduler.Task.run(Task.scala:139)
   2024-06-11T04:37:04.9513767Z         at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
   2024-06-11T04:37:04.9514907Z         at 
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
   2024-06-11T04:37:04.9515532Z         at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
   2024-06-11T04:37:04.9516361Z         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   2024-06-11T04:37:04.9517199Z         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   2024-06-11T04:37:04.9518127Z         at java.lang.Thread.run(Thread.java:750)
   2024-06-11T04:37:04.9519022Z 04:37:04.923 ERROR 
org.apache.spark.util.TaskResources: Task 501 failed by error: 
   2024-06-11T04:37:04.9520675Z 
org.apache.spark.executor.CommitDeniedException: 
attempt_202406102137047101565704628604290_0470_m_000001_501: Not committed 
because the driver did not authorize commit
   2024-06-11T04:37:04.9522066Z         at 
org.apache.spark.mapred.SparkHadoopMapRedUtil$.commitTask(SparkHadoopMapRedUtil.scala:85)
   2024-06-11T04:37:04.9523384Z         at 
org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitTask(HadoopMapReduceCommitProtocol.scala:279)
   2024-06-11T04:37:04.9524546Z         at 
org.apache.spark.sql.execution.SparkWriteFilesCommitProtocol.$anonfun$commitTask$1(SparkWriteFilesCommitProtocol.scala:91)
   2024-06-11T04:37:04.9525412Z         at 
org.apache.spark.util.Utils$.timeTakenMs(Utils.scala:640)
   2024-06-11T04:37:04.9526240Z         at 
org.apache.spark.sql.execution.SparkWriteFilesCommitProtocol.commitTask(SparkWriteFilesCommitProtocol.scala:91)
   2024-06-11T04:37:04.9527344Z         at 
org.apache.spark.sql.execution.VeloxColumnarWriteFilesRDD.$anonfun$compute$2(VeloxColumnarWriteFilesExec.scala:224)
   2024-06-11T04:37:04.9528331Z         at 
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
   2024-06-11T04:37:04.9529054Z         at 
org.apache.spark.util.Utils$.tryWithSafeFinallyAndFailureCallbacks(Utils.scala:1563)
   2024-06-11T04:37:04.9529990Z         at 
org.apache.spark.sql.execution.VeloxColumnarWriteFilesRDD.compute(VeloxColumnarWriteFilesExec.scala:207)
   2024-06-11T04:37:04.9530823Z         at 
org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
   2024-06-11T04:37:04.9531371Z         at 
org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
   2024-06-11T04:37:04.9531929Z         at 
org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
   2024-06-11T04:37:04.9532731Z         at 
org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
   2024-06-11T04:37:04.9533377Z         at 
org.apache.spark.scheduler.Task.run(Task.scala:139)
   2024-06-11T04:37:04.9533984Z         at 
org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:554)
   2024-06-11T04:37:04.9534647Z         at 
org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1529)
   2024-06-11T04:37:04.9535274Z         at 
org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:557)
   2024-06-11T04:37:04.9535973Z         at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
   2024-06-11T04:37:04.9536718Z         at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
   2024-06-11T04:37:04.9537340Z         at java.lang.Thread.run(Thread.java:750)
   2024-06-11T04:37:04.9539152Z 04:37:04.924 WARN 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter: Could not delete 
file:/__w/incubator-gluten/incubator-gluten/gluten-ut/spark34/target/scala-2.12/test-classes/unit-tests-working-home/spark-warehouse/t/_temporary/0/_temporary/attempt_202406102137047101565704628604290_0470_m_000001_501
   2024-06-11T04:37:04.9541145Z 04:37:04.924 ERROR 
org.apache.spark.sql.execution.VeloxColumnarWriteFilesRDD: Job 
job_202406102137047101565704628604290_0470 aborted.
   2024-06-11T04:37:04.9542886Z 04:37:04.926 WARN 
org.apache.spark.scheduler.TaskSetManager: Lost task 1.0 in stage 470.0 (TID 
501) (f5363349458a executor driver): TaskKilled (Stage cancelled)
   2024-06-11T04:37:04.9544839Z E20240611 04:37:04.951053 80034 
Exceptions.h:67] Line: 
/__w/incubator-gluten/incubator-gluten/ep/build-velox/build/velox_ep/velox/exec/Task.cpp:1858,
 Function:terminate, Expression:  Cancelled, Source: RUNTIME, ErrorCode: 
INVALID_STATE
   2024-06-11T04:37:05.0164142Z - Gluten - remove v1writes sort and project
   2024-06-11T04:37:05.0504694Z E20240611 04:37:05.049955 80034 
Exceptions.h:67] Line: 
/__w/incubator-gluten/incubator-gluten/ep/build-velox/build/velox_ep/velox/exec/Task.cpp:1858,
 Function:terminate, Expression:  Cancelled, Source: RUNTIME, ErrorCode: 
INVALID_STATE
   2024-06-11T04:37:05.0970976Z E20240611 04:37:05.096495 80034 
Exceptions.h:67] Line: 
/__w/incubator-gluten/incubator-gluten/ep/build-velox/build/velox_ep/velox/exec/Task.cpp:1858,
 Function:terminate, Expression:  Cancelled, Source: RUNTIME, ErrorCode: 
INVALID_STATE
   2024-06-11T04:37:05.1447808Z E20240611 04:37:05.144098 80034 
Exceptions.h:67] Line: 
/__w/incubator-gluten/incubator-gluten/ep/build-velox/build/velox_ep/velox/exec/Task.cpp:1858,
 Function:terminate, Expression:  Cancelled, Source: RUNTIME, ErrorCode: 
INVALID_STATE
   2024-06-11T04:37:05.1917733Z E20240611 04:37:05.191251 80034 
Exceptions.h:67] Line: 
/__w/incubator-gluten/incubator-gluten/ep/build-velox/build/velox_ep/velox/exec/Task.cpp:1858,
 Function:terminate, Expression:  Cancelled, Source: RUNTIME, ErrorCode: 
INVALID_STATE
   2024-06-11T04:37:05.2387176Z E20240611 04:37:05.238001 80034 
Exceptions.h:67] Line: 
/__w/incubator-gluten/incubator-gluten/ep/build-velox/build/velox_ep/velox/exec/Task.cpp:1858,
 Function:terminate, Expression:  Cancelled, Source: RUNTIME, ErrorCode: 
INVALID_STATE
   2024-06-11T04:37:05.2871713Z E20240611 04:37:05.286500 80034 
Exceptions.h:67] Line: 
/__w/incubator-gluten/incubator-gluten/ep/build-velox/build/velox_ep/velox/exec/Task.cpp:1858,
 Function:terminate, Expression:  Cancelled, Source: RUNTIME, ErrorCode: 
INVALID_STATE
   2024-06-11T04:37:05.3002947Z - Gluten - remove v1writes sort
   2024-06-11T04:37:05.3147811Z - Gluten - do not remove non-v1writes sort and 
project *** FAILED ***
   2024-06-11T04:37:05.3150209Z   org.apache.spark.SparkRuntimeException: 
[LOCATION_ALREADY_EXISTS] Cannot name the managed table as 
`spark_catalog`.`default`.`t`, as its associated location 
'file:/__w/incubator-gluten/incubator-gluten/gluten-ut/spark34/target/scala-2.12/test-classes/unit-tests-working-home/spark-warehouse/t'
 already exists. Please pick a different table name, or remove the existing 
location first.
   2024-06-11T04:37:05.3152812Z   at 
org.apache.spark.sql.errors.QueryExecutionErrors$.locationAlreadyExists(QueryExecutionErrors.scala:2796)
   2024-06-11T04:37:05.3153887Z   at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.validateTableLocation(SessionCatalog.scala:414)
   2024-06-11T04:37:05.3154888Z   at 
org.apache.spark.sql.catalyst.catalog.SessionCatalog.createTable(SessionCatalog.scala:400)
   2024-06-11T04:37:05.3155902Z   at 
org.apache.spark.sql.execution.command.CreateDataSourceTableCommand.run(createDataSourceTables.scala:120)
   2024-06-11T04:37:05.3157022Z   at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:75)
   2024-06-11T04:37:05.3158035Z   at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:73)
   2024-06-11T04:37:05.3159171Z   at 
org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:84)
   2024-06-11T04:37:05.3160229Z   at 
org.apache.spark.sql.execution.QueryExecution$$anonfun$eagerlyExecuteCommands$1.$anonfun$applyOrElse$1(QueryExecution.scala:98)
   2024-06-11T04:37:05.3161245Z   at 
org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:118)
   2024-06-11T04:37:05.3162115Z   at 
org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:195)
   2024-06-11T04:37:05.3162731Z   ...
   2024-06-11T04:37:05.3225224Z - Gluten - SPARK-35106: Throw exception when 
rename custom partition paths returns false *** FAILED ***
   2024-06-11T04:37:05.3227600Z   org.apache.spark.SparkRuntimeException: 
[LOCATION_ALREADY_EXISTS] Cannot name the managed table as 
`spark_catalog`.`default`.`t`, as its associated location 
'file:/__w/incubator-gluten/incubator-gluten/gluten-ut/spark34/target/scala-2.12/test-classes/unit-tests-working-home/spark-warehouse/t'
 already exists. Please pick a different table name, or remove the existing 
location first.
   ```
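   The `LOCATION_ALREADY_EXISTS` failures appear to cascade from the first failure: `FileOutputCommitter` logs "Could not delete" for the staging attempt directory, so the table location survives the aborted job and later tests cannot recreate table `t`. As a local workaround only (not a fix for the underlying commit denial), the stale warehouse directory can be removed before re-running the suite. The path below is copied from the log and is an assumption about the local checkout layout:

   ```shell
   # Hypothetical cleanup between test runs: delete the leftover managed-table
   # location left behind by the aborted write job, so a re-run can recreate `t`.
   WAREHOUSE="gluten-ut/spark34/target/scala-2.12/test-classes/unit-tests-working-home/spark-warehouse"
   rm -rf "${WAREHOUSE}/t"
   ```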
   
   ### Spark version
   
   None
   
   ### Spark configurations
   
   _No response_
   
   ### System information
   
   _No response_
   
   ### Relevant logs
   
   _No response_


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

