jlfsdtc opened a new pull request, #9401:
URL: https://github.com/apache/incubator-gluten/pull/9401

   ## What changes were proposed in this pull request?
   
   Fix the package build error in the `backends-clickhouse` module: the CI build warns about mixed Scala versions (2.13.8 required, but `flink-scala_2.12:1.16.2` pulls in 2.12.7) and then fails to compile with `object mapreduce is not a member of package org.apache.hadoop` errors. The exception log is as follows:
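   For context: the `object mapreduce is not a member of package org.apache.hadoop` errors in the log below mean the Hadoop MapReduce API classes (`Job`, `TaskAttemptContext`, `TaskID`, ...) are missing from the module's compile classpath. As a hedged illustration only (this is an assumption about the kind of fix involved, not necessarily what this PR changes; `${hadoop.version}` is a placeholder property), such classes are normally guaranteed by a POM entry like:

   ```xml
   <!-- Hypothetical sketch, not necessarily this PR's actual change:
        put the Hadoop client artifact (which transitively provides the
        org.apache.hadoop.mapreduce API) on the compile classpath. -->
   <dependency>
     <groupId>org.apache.hadoop</groupId>
     <artifactId>hadoop-client</artifactId>
     <version>${hadoop.version}</version>
     <scope>provided</scope>
   </dependency>
   ```

   Running `mvn dependency:tree -Dincludes=org.apache.hadoop` on the failing module shows which Hadoop artifacts actually resolve; the `flink-scala_2.12` warning at the top of the log additionally suggests a Scala 2.12 dependency chain is leaking into a Scala 2.13 build.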
   
   
   ```
   [2025-04-23T01:21:51.758Z] [INFO] --- scala-maven-plugin:4.8.0:compile 
(scala-compile-first) @ backends-clickhouse ---
   [2025-04-23T01:21:51.758Z] [WARNING]  Expected all dependencies to require 
Scala version: 2.13.8
   [2025-04-23T01:21:51.758Z] [WARNING]  
org.apache.gluten:backends-clickhouse:1.5.0-SNAPSHOT requires scala version: 
2.13.8
   [2025-04-23T01:21:51.758Z] [WARNING]  
org.apache.flink:flink-scala_2.12:1.16.2 requires scala version: 2.12.7
   [2025-04-23T01:21:51.758Z] [WARNING] Multiple versions of scala libraries 
detected!
   [2025-04-23T01:21:51.758Z] [INFO] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/java:-1:
 info: compiling
   [2025-04-23T01:21:51.758Z] [INFO] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/target/generated-sources/protobuf/java:-1:
 info: compiling
   [2025-04-23T01:21:51.758Z] [INFO] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-celeborn/main/scala:-1:
 info: compiling
   [2025-04-23T01:21:51.758Z] [INFO] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta/main/scala:-1:
 info: compiling
   [2025-04-23T01:21:51.758Z] [INFO] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala:-1:
 info: compiling
   [2025-04-23T01:21:51.758Z] [INFO] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/java:-1:
 info: compiling
   [2025-04-23T01:21:51.758Z] [INFO] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-iceberg/main/scala:-1:
 info: compiling
   [2025-04-23T01:21:51.758Z] [INFO] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/target/generated-sources/antlr4:-1:
 info: compiling
   [2025-04-23T01:21:51.758Z] [INFO] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala:-1:
 info: compiling
   [2025-04-23T01:21:51.758Z] [INFO] Compiling 222 source files to 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/target/scala-2.13/classes
 at 1745371311701
   [2025-04-23T01:21:51.758Z] [INFO] compiler plugin: 
BasicArtifact(org.wartremover,wartremover_2.13,3.1.6,null)
   [2025-04-23T01:21:53.644Z] [WARNING] warning: -target is deprecated: Use 
-release instead to compile against the correct platform API.
   [2025-04-23T01:21:53.644Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this warning: msg=<part of the message>, cat=deprecation
   [2025-04-23T01:21:55.010Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:42:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:55.010Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:55.010Z] [ERROR] import org.apache.hadoop.mapreduce._
   [2025-04-23T01:21:55.010Z] [ERROR]                          ^
   [2025-04-23T01:21:56.375Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:42:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:56.375Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:56.375Z] [ERROR] import 
org.apache.hadoop.mapreduce.{TaskAttemptContext, TaskAttemptID, TaskID, 
TaskType}
   [2025-04-23T01:21:56.375Z] [ERROR]                          ^
   [2025-04-23T01:21:56.375Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:43:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:56.375Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:56.375Z] [ERROR] import 
org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
   [2025-04-23T01:21:56.375Z] [ERROR]                          ^
   [2025-04-23T01:21:56.375Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:78:
 error: Symbol 'type org.apache.hadoop.mapred.JobID' is missing from the 
classpath.
   [2025-04-23T01:21:56.375Z] [ERROR] This symbol is required by 'method 
org.apache.spark.internal.io.SparkHadoopWriterUtils.createJobID'.
   [2025-04-23T01:21:56.375Z] [ERROR] Make sure that type JobID is in your 
classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:56.375Z] [ERROR] A full rebuild may help if 
'SparkHadoopWriterUtils.class' was compiled against an incompatible version of 
org.apache.hadoop.mapred.
   [2025-04-23T01:21:56.375Z] [ERROR]     val jobId = 
SparkHadoopWriterUtils.createJobID(new Date(description.jobIdInstant), 
sparkStageId)
   [2025-04-23T01:21:56.375Z] [ERROR]                 ^
   [2025-04-23T01:21:56.375Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:79:
 error: not found: type TaskID
   [2025-04-23T01:21:56.375Z] [ERROR]     val taskId = new TaskID(jobId, 
TaskType.MAP, sparkPartitionId)
   [2025-04-23T01:21:56.375Z] [ERROR]                      ^
   [2025-04-23T01:21:56.375Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:79:
 error: not found: value TaskType
   [2025-04-23T01:21:56.375Z] [ERROR]     val taskId = new TaskID(jobId, 
TaskType.MAP, sparkPartitionId)
   [2025-04-23T01:21:56.375Z] [ERROR]                                    ^
   [2025-04-23T01:21:56.375Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:80:
 error: not found: type TaskAttemptID
   [2025-04-23T01:21:56.375Z] [ERROR]     val taskAttemptId = new 
TaskAttemptID(taskId, sparkAttemptNumber)
   [2025-04-23T01:21:56.375Z] [ERROR]                             ^
   [2025-04-23T01:21:56.376Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:83:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:56.376Z] [ERROR]     val taskAttemptContext: 
TaskAttemptContext = {
   [2025-04-23T01:21:56.376Z] [ERROR]                             ^
   [2025-04-23T01:21:56.376Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:92:
 error: not found: type TaskAttemptContextImpl
   [2025-04-23T01:21:56.376Z] [ERROR]       new 
TaskAttemptContextImpl(hadoopConf, taskAttemptId)
   [2025-04-23T01:21:56.376Z] [ERROR]           ^
   [2025-04-23T01:21:56.376Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:42:
 error: Unused import
   [2025-04-23T01:21:56.376Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.delta.commands
   [2025-04-23T01:21:56.376Z] [INFO] import 
org.apache.hadoop.mapreduce.{TaskAttemptContext, TaskAttemptID, TaskID, 
TaskType}
   [2025-04-23T01:21:56.376Z] [INFO]                                     ^
   [2025-04-23T01:21:56.376Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:42:
 error: Unused import
   [2025-04-23T01:21:56.376Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.delta.commands
   [2025-04-23T01:21:56.376Z] [INFO] import 
org.apache.hadoop.mapreduce.{TaskAttemptContext, TaskAttemptID, TaskID, 
TaskType}
   [2025-04-23T01:21:56.376Z] [INFO]                                            
             ^
   [2025-04-23T01:21:56.376Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:42:
 error: Unused import
   [2025-04-23T01:21:56.376Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.delta.commands
   [2025-04-23T01:21:56.376Z] [INFO] import 
org.apache.hadoop.mapreduce.{TaskAttemptContext, TaskAttemptID, TaskID, 
TaskType}
   [2025-04-23T01:21:56.376Z] [INFO]                                            
                            ^
   [2025-04-23T01:21:56.376Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:42:
 error: Unused import
   [2025-04-23T01:21:56.376Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.delta.commands
   [2025-04-23T01:21:56.376Z] [INFO] import 
org.apache.hadoop.mapreduce.{TaskAttemptContext, TaskAttemptID, TaskID, 
TaskType}
   [2025-04-23T01:21:56.376Z] [INFO]                                            
                                    ^
   [2025-04-23T01:21:56.376Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/commands/OptimizeTableCommandOverwrites.scala:43:
 error: Unused import
   [2025-04-23T01:21:56.376Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.delta.commands
   [2025-04-23T01:21:56.376Z] [INFO] import 
org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
   [2025-04-23T01:21:56.376Z] [INFO]                                         ^
   [2025-04-23T01:21:56.633Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/files/MergeTreeDelayedCommitProtocol.scala:19:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:56.633Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:56.633Z] [ERROR] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:56.633Z] [ERROR]                          ^
   [2025-04-23T01:21:56.633Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/files/MergeTreeDelayedCommitProtocol.scala:42:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:56.633Z] [ERROR]       taskContext: TaskAttemptContext,
   [2025-04-23T01:21:56.633Z] [ERROR]                    ^
   [2025-04-23T01:21:56.633Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/files/MergeTreeDelayedCommitProtocol.scala:44:
 error: Symbol 'type org.apache.hadoop.mapreduce.TaskAttemptContext' is missing 
from the classpath.
   [2025-04-23T01:21:56.633Z] [ERROR] This symbol is required by 'value 
org.apache.spark.sql.delta.files.DelayedCommitProtocol.taskContext'.
   [2025-04-23T01:21:56.633Z] [ERROR] Make sure that type TaskAttemptContext is 
in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:56.633Z] [ERROR] A full rebuild may help if 
'DelayedCommitProtocol.class' was compiled against an incompatible version of 
org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:56.633Z] [ERROR]       partitionValues: Map[String, 
String]): String = {
   [2025-04-23T01:21:56.633Z] [ERROR]                                    ^
   [2025-04-23T01:21:56.633Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/files/MergeTreeDelayedCommitProtocol.scala:70:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:56.633Z] [ERROR]       taskContext: TaskAttemptContext,
   [2025-04-23T01:21:56.633Z] [ERROR]                    ^
   [2025-04-23T01:21:56.633Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/delta/files/MergeTreeDelayedCommitProtocol.scala:19:
 error: Unused import
   [2025-04-23T01:21:56.633Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.delta.files
   [2025-04-23T01:21:56.633Z] [INFO] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:56.633Z] [INFO]                                    ^
   [2025-04-23T01:21:56.633Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:33:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:56.634Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:56.634Z] [ERROR] import org.apache.hadoop.mapreduce._
   [2025-04-23T01:21:56.634Z] [ERROR]                          ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:93:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:56.634Z] [ERROR]   def apply(taskContext: 
TaskAttemptContext, description: WriteJobDescription): FileNameSpec = {
   [2025-04-23T01:21:56.634Z] [ERROR]                          ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:64:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:56.634Z] [ERROR]   lazy val (taskAttemptContext: 
TaskAttemptContext, jobId: String) = {
   [2025-04-23T01:21:56.634Z] [ERROR]                                 ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/FileDeltaColumnarWrite.scala:175:
 error: Symbol 'type org.apache.hadoop.mapreduce.TaskAttemptContext' is missing 
from the classpath.
   [2025-04-23T01:21:56.634Z] [ERROR] This symbol is required by 'value 
org.apache.spark.internal.io.FileCommitProtocol.taskContext'.
   [2025-04-23T01:21:56.634Z] [ERROR] Make sure that type TaskAttemptContext is 
in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:56.634Z] [ERROR] A full rebuild may help if 
'FileCommitProtocol.class' was compiled against an incompatible version of 
org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:56.634Z] [ERROR]           
committer.commitTask(taskAttemptContext)
   [2025-04-23T01:21:56.634Z] [ERROR]           ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatDataWriter.scala:33:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:56.634Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:56.634Z] [ERROR] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:56.634Z] [ERROR]                          ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatDataWriter.scala:40:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:56.634Z] [ERROR]     taskAttemptContext: 
TaskAttemptContext,
   [2025-04-23T01:21:56.634Z] [ERROR]                         ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatDataWriter.scala:43:
 error: Symbol 'type org.apache.hadoop.mapreduce.TaskAttemptContext' is missing 
from the classpath.
   [2025-04-23T01:21:56.634Z] [ERROR] This symbol is required by 'value 
org.apache.spark.sql.execution.datasources.FileFormatDataWriter.taskAttemptContext'.
   [2025-04-23T01:21:56.634Z] [ERROR] Make sure that type TaskAttemptContext is 
in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:56.634Z] [ERROR] A full rebuild may help if 
'FileFormatDataWriter.class' was compiled against an incompatible version of 
org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:56.634Z] [ERROR]   extends 
FileFormatDataWriter(description, taskAttemptContext, committer, customMetrics) 
{
   [2025-04-23T01:21:56.634Z] [ERROR]           ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatDataWriter.scala:53:
 error: Symbol 'type org.apache.hadoop.mapreduce.TaskAttemptContext' is missing 
from the classpath.
   [2025-04-23T01:21:56.634Z] [ERROR] This symbol is required by 'value 
org.apache.spark.sql.execution.datasources.OutputWriterFactory.context'.
   [2025-04-23T01:21:56.634Z] [ERROR] Make sure that type TaskAttemptContext is 
in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:56.634Z] [ERROR] A full rebuild may help if 
'OutputWriterFactory.class' was compiled against an incompatible version of 
org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:56.634Z] [ERROR]     val ext = 
description.outputWriterFactory.getFileExtension(taskAttemptContext)
   [2025-04-23T01:21:56.634Z] [ERROR]               ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatDataWriter.scala:93:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:56.634Z] [ERROR]     taskAttemptContext: 
TaskAttemptContext,
   [2025-04-23T01:21:56.634Z] [ERROR]                         ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatDataWriter.scala:262:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:56.634Z] [ERROR]     taskAttemptContext: 
TaskAttemptContext,
   [2025-04-23T01:21:56.634Z] [ERROR]                         ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatDataWriter.scala:352:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:56.634Z] [ERROR]     taskAttemptContext: 
TaskAttemptContext,
   [2025-04-23T01:21:56.634Z] [ERROR]                         ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatDataWriter.scala:33:
 error: Unused import
   [2025-04-23T01:21:56.634Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution.datasources.v1.clickhouse
   [2025-04-23T01:21:56.634Z] [INFO] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:56.634Z] [INFO]                                    ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:43:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:56.634Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:56.634Z] [ERROR] import 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
   [2025-04-23T01:21:56.634Z] [ERROR]                          ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:44:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:56.634Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:56.634Z] [ERROR] import 
org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
   [2025-04-23T01:21:56.634Z] [ERROR]                          ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:77:
 error: not found: value Job
   [2025-04-23T01:21:56.634Z] [ERROR]     val job = Job.getInstance(hadoopConf)
   [2025-04-23T01:21:56.634Z] [ERROR]               ^
   [2025-04-23T01:21:56.634Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:80:
 error: not found: value FileOutputFormat
   [2025-04-23T01:21:56.634Z] [ERROR]     FileOutputFormat.setOutputPath(job, 
new Path(outputSpec.outputPath))
   [2025-04-23T01:21:56.634Z] [ERROR]     ^
   [2025-04-23T01:21:56.891Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:147:
 error: Symbol 'type org.apache.hadoop.mapreduce.Job' is missing from the 
classpath.
   [2025-04-23T01:21:56.891Z] [ERROR] This symbol is required by 'value 
org.apache.spark.sql.execution.datasources.FileFormat.job'.
   [2025-04-23T01:21:56.891Z] [ERROR] Make sure that type Job is in your 
classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:56.891Z] [ERROR] A full rebuild may help if 
'FileFormat.class' was compiled against an incompatible version of 
org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:56.891Z] [ERROR]       
fileFormat.prepareWrite(sparkSession, job, caseInsensitiveOptions, dataSchema)
   [2025-04-23T01:21:56.891Z] [ERROR]       ^
   [2025-04-23T01:21:56.892Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:191:
 error: Symbol 'type org.apache.hadoop.mapreduce.JobContext' is missing from 
the classpath.
   [2025-04-23T01:21:56.892Z] [ERROR] This symbol is required by 'value 
org.apache.spark.internal.io.FileCommitProtocol.jobContext'.
   [2025-04-23T01:21:56.892Z] [ERROR] Make sure that type JobContext is in your 
classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:56.892Z] [ERROR] A full rebuild may help if 
'FileCommitProtocol.class' was compiled against an incompatible version of 
org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:56.892Z] [ERROR]     committer.setupJob(job)
   [2025-04-23T01:21:56.892Z] [ERROR]     ^
   [2025-04-23T01:21:56.892Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:314:
 error: not found: type TaskID
   [2025-04-23T01:21:56.892Z] [ERROR]     val taskId = new TaskID(jobId, 
TaskType.MAP, sparkPartitionId)
   [2025-04-23T01:21:56.892Z] [ERROR]                      ^
   [2025-04-23T01:21:56.892Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:314:
 error: not found: value TaskType
   [2025-04-23T01:21:56.892Z] [ERROR]     val taskId = new TaskID(jobId, 
TaskType.MAP, sparkPartitionId)
   [2025-04-23T01:21:56.892Z] [ERROR]                                    ^
   [2025-04-23T01:21:56.892Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:315:
 error: not found: type TaskAttemptID
   [2025-04-23T01:21:56.892Z] [ERROR]     val taskAttemptId = new 
TaskAttemptID(taskId, sparkAttemptNumber)
   [2025-04-23T01:21:56.892Z] [ERROR]                             ^
   [2025-04-23T01:21:56.892Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:318:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:56.892Z] [ERROR]     val taskAttemptContext: 
TaskAttemptContext = {
   [2025-04-23T01:21:56.892Z] [ERROR]                             ^
   [2025-04-23T01:21:56.892Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:327:
 error: not found: type TaskAttemptContextImpl
   [2025-04-23T01:21:56.892Z] [ERROR]       new 
TaskAttemptContextImpl(hadoopConf, taskAttemptId)
   [2025-04-23T01:21:56.892Z] [ERROR]           ^
   [2025-04-23T01:21:56.892Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:336:
 error: Symbol 'type org.apache.hadoop.mapreduce.TaskAttemptContext' is missing 
from the classpath.
   [2025-04-23T01:21:56.892Z] [ERROR] This symbol is required by 'value 
org.apache.spark.sql.execution.datasources.EmptyDirectoryDataWriter.taskAttemptContext'.
   [2025-04-23T01:21:56.892Z] [ERROR] Make sure that type TaskAttemptContext is 
in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:56.892Z] [ERROR] A full rebuild may help if 
'EmptyDirectoryDataWriter.class' was compiled against an incompatible version 
of org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:56.892Z] [ERROR]         new 
EmptyDirectoryDataWriter(description, taskAttemptContext, committer)
   [2025-04-23T01:21:56.892Z] [ERROR]             ^
   [2025-04-23T01:21:56.892Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:42:
 error: Unused import
   [2025-04-23T01:21:56.892Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution.datasources.v1.clickhouse
   [2025-04-23T01:21:56.892Z] [INFO] import org.apache.hadoop.mapreduce._
   [2025-04-23T01:21:56.892Z] [INFO]                                    ^
   [2025-04-23T01:21:56.892Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:43:
 error: Unused import
   [2025-04-23T01:21:56.892Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution.datasources.v1.clickhouse
   [2025-04-23T01:21:56.892Z] [INFO] import 
org.apache.hadoop.mapreduce.lib.output.FileOutputFormat
   [2025-04-23T01:21:56.892Z] [INFO]                                            
   ^
   [2025-04-23T01:21:56.892Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src-delta-32/main/scala/org/apache/spark/sql/execution/datasources/v1/clickhouse/MergeTreeFileFormatWriter.scala:44:
 error: Unused import
   [2025-04-23T01:21:56.892Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution.datasources.v1.clickhouse
   [2025-04-23T01:21:56.892Z] [INFO] import 
org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
   [2025-04-23T01:21:56.892Z] [INFO]                                         ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/MergeTreeFileFormat.scala:29:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:58.258Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:58.258Z] [ERROR] import org.apache.hadoop.mapreduce.{Job, 
TaskAttemptContext}
   [2025-04-23T01:21:58.258Z] [ERROR]                          ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/MergeTreeFileFormat.scala:40:
 error: not found: type Job
   [2025-04-23T01:21:58.258Z] [ERROR]       job: Job,
   [2025-04-23T01:21:58.258Z] [ERROR]            ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/MergeTreeFileFormat.scala:54:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.258Z] [ERROR]       override def 
getFileExtension(context: TaskAttemptContext): String = ""
   [2025-04-23T01:21:58.258Z] [ERROR]                                           
   ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/MergeTreeFileFormat.scala:59:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.258Z] [ERROR]           context: TaskAttemptContext): 
OutputWriter = {
   [2025-04-23T01:21:58.258Z] [ERROR]                    ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/MergeTreeFileFormat.scala:62:
 error: Symbol 'type org.apache.hadoop.mapreduce.TaskAttemptContext' is missing 
from the classpath.
   [2025-04-23T01:21:58.258Z] [ERROR] This symbol is required by 'value 
org.apache.gluten.execution.datasource.GlutenFormatWriterInjects.context'.
   [2025-04-23T01:21:58.258Z] [ERROR] Make sure that type TaskAttemptContext is 
in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:58.258Z] [ERROR] A full rebuild may help if 
'GlutenFormatWriterInjects.class' was compiled against an incompatible version 
of org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:58.258Z] [ERROR]         GlutenFormatFactory(shortName())
   [2025-04-23T01:21:58.258Z] [ERROR]                             ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/MergeTreeFileFormat.scala:29:
 error: Unused import
   [2025-04-23T01:21:58.258Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.delta
   [2025-04-23T01:21:58.258Z] [INFO] import org.apache.hadoop.mapreduce.{Job, 
TaskAttemptContext}
   [2025-04-23T01:21:58.258Z] [INFO]                                     ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/MergeTreeFileFormat.scala:29:
 error: Unused import
   [2025-04-23T01:21:58.258Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.delta
   [2025-04-23T01:21:58.258Z] [INFO] import org.apache.hadoop.mapreduce.{Job, 
TaskAttemptContext}
   [2025-04-23T01:21:58.258Z] [INFO]                                          ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/files/MergeTreeFileCommitProtocol.scala:30:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:58.258Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:58.258Z] [ERROR] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.258Z] [ERROR]                          ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/files/MergeTreeFileCommitProtocol.scala:40:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.258Z] [ERROR]   override def setupTask(taskContext: 
TaskAttemptContext): Unit = {
   [2025-04-23T01:21:58.258Z] [ERROR]                                       ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/files/MergeTreeFileCommitProtocol.scala:51:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.258Z] [ERROR]       taskContext: TaskAttemptContext,
   [2025-04-23T01:21:58.258Z] [ERROR]                    ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/files/MergeTreeFileCommitProtocol.scala:74:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.258Z] [ERROR]   override def commitTask(taskContext: 
TaskAttemptContext): TaskCommitMessage = {
   [2025-04-23T01:21:58.258Z] [ERROR]                                        ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/files/MergeTreeFileCommitProtocol.scala:89:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.258Z] [ERROR]   override def abortTask(taskContext: 
TaskAttemptContext): Unit = {
   [2025-04-23T01:21:58.258Z] [ERROR]                                       ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/delta/files/MergeTreeFileCommitProtocol.scala:30:
 error: Unused import
   [2025-04-23T01:21:58.258Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.delta.files
   [2025-04-23T01:21:58.258Z] [INFO] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.258Z] [INFO]                                    ^
   [2025-04-23T01:21:58.258Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:34:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:58.258Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:58.259Z] [ERROR] import 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
   [2025-04-23T01:21:58.259Z] [ERROR]                          ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:35:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:58.259Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:58.259Z] [ERROR] import 
org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
   [2025-04-23T01:21:58.259Z] [ERROR]                          ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:66:
 error: not found: type JobID
   [2025-04-23T01:21:58.259Z] [ERROR]     def createJobID(jobTrackerID: String, 
id: Int): JobID = {
   [2025-04-23T01:21:58.259Z] [ERROR]                                           
          ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:70:
 error: not found: type JobID
   [2025-04-23T01:21:58.259Z] [ERROR]       new JobID(jobTrackerID, id)
   [2025-04-23T01:21:58.259Z] [ERROR]           ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:77:
 error: not found: type TaskID
   [2025-04-23T01:21:58.259Z] [ERROR]     val taskId = new TaskID(jobID, 
TaskType.MAP, sparkPartitionId)
   [2025-04-23T01:21:58.259Z] [ERROR]                      ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:77:
 error: not found: value TaskType
   [2025-04-23T01:21:58.259Z] [ERROR]     val taskId = new TaskID(jobID, 
TaskType.MAP, sparkPartitionId)
   [2025-04-23T01:21:58.259Z] [ERROR]                                    ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:78:
 error: not found: type TaskAttemptID
   [2025-04-23T01:21:58.259Z] [ERROR]     val taskAttemptId = new 
TaskAttemptID(taskId, sparkAttemptNumber)
   [2025-04-23T01:21:58.259Z] [ERROR]                             ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:88:
 error: not found: type TaskAttemptContextImpl
   [2025-04-23T01:21:58.259Z] [ERROR]     (new 
TaskAttemptContextImpl(hadoopConf, taskAttemptId), jobID.toString)
   [2025-04-23T01:21:58.259Z] [ERROR]          ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:109:
 error: not found: type OutputCommitter
   [2025-04-23T01:21:58.259Z] [ERROR]   private lazy val committer: 
OutputCommitter = {
   [2025-04-23T01:21:58.259Z] [ERROR]                               ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:112:
 error: not found: type OutputCommitter
   [2025-04-23T01:21:58.259Z] [ERROR]     
field.get(sparkCommitter).asInstanceOf[OutputCommitter]
   [2025-04-23T01:21:58.259Z] [ERROR]                                           
 ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:116:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.259Z] [ERROR]       .getDeclaredMethod("getFilename", 
classOf[TaskAttemptContext], classOf[FileNameSpec])
   [2025-04-23T01:21:58.259Z] [ERROR]                                           
      ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:125:
 error: not found: type FileOutputCommitter
   [2025-04-23T01:21:58.259Z] [ERROR]       case f: FileOutputCommitter =>
   [2025-04-23T01:21:58.259Z] [ERROR]               ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:133:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.259Z] [ERROR]   private def getFilename(taskContext: 
TaskAttemptContext, spec: FileNameSpec): String = {
   [2025-04-23T01:21:58.259Z] [ERROR]                                        ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:138:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.259Z] [ERROR]       taskContext: TaskAttemptContext,
   [2025-04-23T01:21:58.259Z] [ERROR]                    ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:305:
 error: Symbol 'type org.apache.hadoop.mapreduce.TaskAttemptContext' is missing 
from the classpath.
   [2025-04-23T01:21:58.259Z] [ERROR] This symbol is required by 'value 
org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.taskContext'.
   [2025-04-23T01:21:58.259Z] [ERROR] Make sure that type TaskAttemptContext is 
in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:58.259Z] [ERROR] A full rebuild may help if 
'HadoopMapReduceCommitProtocol.class' was compiled against an incompatible 
version of org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:58.259Z] [ERROR]           
committer.commitTask(taskAttemptContext)
   [2025-04-23T01:21:58.259Z] [ERROR]           ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:33:
 error: Unused import
   [2025-04-23T01:21:58.259Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution
   [2025-04-23T01:21:58.259Z] [INFO] import org.apache.hadoop.mapreduce._
   [2025-04-23T01:21:58.259Z] [INFO]                                    ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:34:
 error: Unused import
   [2025-04-23T01:21:58.259Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution
   [2025-04-23T01:21:58.259Z] [INFO] import 
org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter
   [2025-04-23T01:21:58.259Z] [INFO]                                            
   ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWrite.scala:35:
 error: Unused import
   [2025-04-23T01:21:58.259Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution
   [2025-04-23T01:21:58.259Z] [INFO] import 
org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl
   [2025-04-23T01:21:58.259Z] [INFO]                                         ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWriteFilesExec.scala:38:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:58.259Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:58.259Z] [ERROR] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.259Z] [ERROR]                          ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWriteFilesExec.scala:68:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.259Z] [ERROR]       taskAttemptContext: 
TaskAttemptContext,
   [2025-04-23T01:21:58.259Z] [ERROR]                           ^
   [2025-04-23T01:21:58.259Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWriteFilesExec.scala:77:
 error: Symbol 'type org.apache.hadoop.mapreduce.TaskAttemptContext' is missing 
from the classpath.
   [2025-04-23T01:21:58.260Z] [ERROR] This symbol is required by 'value 
org.apache.spark.sql.execution.datasources.SingleDirectoryDataWriter.taskAttemptContext'.
   [2025-04-23T01:21:58.260Z] [ERROR] Make sure that type TaskAttemptContext is 
in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:58.260Z] [ERROR] A full rebuild may help if 
'SingleDirectoryDataWriter.class' was compiled against an incompatible version 
of org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:58.260Z] [ERROR]         new 
SingleDirectoryDataWriter(description, taskAttemptContext, committer)
   [2025-04-23T01:21:58.260Z] [ERROR]             ^
   [2025-04-23T01:21:58.260Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWriteFilesExec.scala:79:
 error: Symbol 'type org.apache.hadoop.mapreduce.TaskAttemptContext' is missing 
from the classpath.
   [2025-04-23T01:21:58.260Z] [ERROR] This symbol is required by 'value 
org.apache.spark.sql.execution.datasources.DynamicPartitionDataSingleWriter.taskAttemptContext'.
   [2025-04-23T01:21:58.260Z] [ERROR] Make sure that type TaskAttemptContext is 
in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
   [2025-04-23T01:21:58.260Z] [ERROR] A full rebuild may help if 
'DynamicPartitionDataSingleWriter.class' was compiled against an incompatible 
version of org.apache.hadoop.mapreduce.
   [2025-04-23T01:21:58.260Z] [ERROR]         new 
DynamicPartitionDataSingleWriter(description, taskAttemptContext, committer)
   [2025-04-23T01:21:58.260Z] [ERROR]             ^
   [2025-04-23T01:21:58.260Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/CHColumnarWriteFilesExec.scala:38:
 error: Unused import
   [2025-04-23T01:21:58.260Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution
   [2025-04-23T01:21:58.260Z] [INFO] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.260Z] [INFO]                                    ^
   [2025-04-23T01:21:58.516Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHFormatWriterInjects.scala:34:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:58.516Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:58.516Z] [ERROR] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.516Z] [ERROR]                          ^
   [2025-04-23T01:21:58.516Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHFormatWriterInjects.scala:55:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.516Z] [ERROR]       context: TaskAttemptContext): 
proto.WriteRel = {
   [2025-04-23T01:21:58.516Z] [ERROR]                ^
   [2025-04-23T01:21:58.516Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHFormatWriterInjects.scala:70:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.516Z] [ERROR]   def createNativeWrite(outputPath: 
String, context: TaskAttemptContext): Write
   [2025-04-23T01:21:58.516Z] [ERROR]                                           
           ^
   [2025-04-23T01:21:58.516Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHFormatWriterInjects.scala:75:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.516Z] [ERROR]       context: TaskAttemptContext,
   [2025-04-23T01:21:58.516Z] [ERROR]                ^
   [2025-04-23T01:21:58.516Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHFormatWriterInjects.scala:34:
 error: Unused import
   [2025-04-23T01:21:58.516Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution.datasources.v1
   [2025-04-23T01:21:58.516Z] [INFO] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.516Z] [INFO]                                    ^
   [2025-04-23T01:21:58.516Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHMergeTreeWriterInjects.scala:42:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:58.516Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:58.516Z] [ERROR] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.516Z] [ERROR]                          ^
   [2025-04-23T01:21:58.516Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHMergeTreeWriterInjects.scala:69:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.516Z] [ERROR]   override def 
createNativeWrite(outputPath: String, context: TaskAttemptContext): Write = {
   [2025-04-23T01:21:58.516Z] [ERROR]                                           
                    ^
   [2025-04-23T01:21:58.516Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHMergeTreeWriterInjects.scala:88:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.516Z] [ERROR]       context: TaskAttemptContext,
   [2025-04-23T01:21:58.516Z] [ERROR]                ^
   [2025-04-23T01:21:58.516Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHMergeTreeWriterInjects.scala:42:
 error: Unused import
   [2025-04-23T01:21:58.516Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution.datasources.v1
   [2025-04-23T01:21:58.516Z] [INFO] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.516Z] [INFO]                                    ^
   [2025-04-23T01:21:58.517Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHOrcWriterInjects.scala:19:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:58.517Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:58.517Z] [ERROR] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.517Z] [ERROR]                          ^
   [2025-04-23T01:21:58.517Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHOrcWriterInjects.scala:33:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.517Z] [ERROR]   override def 
createNativeWrite(outputPath: String, context: TaskAttemptContext): Write = 
Write
   [2025-04-23T01:21:58.517Z] [ERROR]                                           
                    ^
   [2025-04-23T01:21:58.517Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHOrcWriterInjects.scala:19:
 error: Unused import
   [2025-04-23T01:21:58.517Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution.datasources.v1
   [2025-04-23T01:21:58.517Z] [INFO] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.517Z] [INFO]                                    ^
   [2025-04-23T01:21:58.517Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHParquetWriterInjects.scala:23:
 error: object mapreduce is not a member of package org.apache.hadoop
   [2025-04-23T01:21:58.517Z] [ERROR] did you mean mapred?
   [2025-04-23T01:21:58.517Z] [ERROR] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.517Z] [ERROR]                          ^
   [2025-04-23T01:21:58.517Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHParquetWriterInjects.scala:46:
 error: not found: type TaskAttemptContext
   [2025-04-23T01:21:58.517Z] [ERROR]   override def 
createNativeWrite(outputPath: String, context: TaskAttemptContext): Write = 
Write
   [2025-04-23T01:21:58.517Z] [ERROR]                                           
                    ^
   [2025-04-23T01:21:58.517Z] [ERROR] 
/home/jenkins/agent/workspace/gluten/gluten-package/source-code/gluten/backends-clickhouse/src/main/scala/org/apache/spark/sql/execution/datasources/v1/CHParquetWriterInjects.scala:23:
 error: Unused import
   [2025-04-23T01:21:58.517Z] [WARNING] Applicable -Wconf / @nowarn filters for 
this fatal warning: msg=<part of the message>, cat=unused-imports, 
site=org.apache.spark.sql.execution.datasources.v1
   [2025-04-23T01:21:58.517Z] [INFO] import 
org.apache.hadoop.mapreduce.TaskAttemptContext
   [2025-04-23T01:21:58.517Z] [INFO]                                    ^
   [2025-04-23T01:21:58.517Z] [WARNING] 1 warning
   [2025-04-23T01:21:58.517Z] [ERROR] 97 errors
   [2025-04-23T01:21:58.517Z] [ERROR] exception compilation error occurred!!!
   [2025-04-23T01:21:58.517Z] org.apache.commons.exec.ExecuteException: Process 
exited with an error: 1 (Exit value: 1)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.commons.exec.DefaultExecutor.executeInternal 
(DefaultExecutor.java:404)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.commons.exec.DefaultExecutor.execute (DefaultExecutor.java:166)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.commons.exec.DefaultExecutor.execute (DefaultExecutor.java:153)
   [2025-04-23T01:21:58.517Z]     at 
scala_maven_executions.JavaMainCallerByFork.run (JavaMainCallerByFork.java:95)
   [2025-04-23T01:21:58.517Z]     at scala_maven.ScalaCompilerSupport.compile 
(ScalaCompilerSupport.java:173)
   [2025-04-23T01:21:58.517Z]     at scala_maven.ScalaCompilerSupport.doExecute 
(ScalaCompilerSupport.java:86)
   [2025-04-23T01:21:58.517Z]     at scala_maven.ScalaMojoSupport.execute 
(ScalaMojoSupport.java:310)
   [2025-04-23T01:21:58.517Z]     at scala_maven.ScalaCompileMojo.execute 
(ScalaCompileMojo.java:108)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo 
(DefaultBuildPluginManager.java:137)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.maven.lifecycle.internal.MojoExecutor.doExecute2 
(MojoExecutor.java:370)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.maven.lifecycle.internal.MojoExecutor.doExecute 
(MojoExecutor.java:351)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:215)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:171)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.maven.lifecycle.internal.MojoExecutor.execute (MojoExecutor.java:163)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:117)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject 
(LifecycleModuleBuilder.java:81)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.maven.lifecycle.internal.builder.singlethreaded.SingleThreadedBuilder.build
 (SingleThreadedBuilder.java:56)
   [2025-04-23T01:21:58.517Z]     at 
org.apache.maven.lifecycle.internal.LifecycleStarter.execute 
(LifecycleStarter.java:128)
   [2025-04-23T01:21:58.517Z]     at org.apache.maven.DefaultMaven.doExecute 
(DefaultMaven.java:299)
   [2025-04-23T01:21:58.517Z]     at org.apache.maven.DefaultMaven.doExecute 
(DefaultMaven.java:193)
   [2025-04-23T01:21:58.517Z]     at org.apache.maven.DefaultMaven.execute 
(DefaultMaven.java:106)
   [2025-04-23T01:21:58.517Z]     at org.apache.maven.cli.MavenCli.execute 
(MavenCli.java:963)
   [2025-04-23T01:21:58.517Z]     at org.apache.maven.cli.MavenCli.doMain 
(MavenCli.java:296)
   [2025-04-23T01:21:58.517Z]     at org.apache.maven.cli.MavenCli.main 
(MavenCli.java:199)
   [2025-04-23T01:21:58.517Z]     at 
sun.reflect.NativeMethodAccessorImpl.invoke0 (Native Method)
   [2025-04-23T01:21:58.517Z]     at 
sun.reflect.NativeMethodAccessorImpl.invoke (NativeMethodAccessorImpl.java:62)
   [2025-04-23T01:21:58.517Z]     at 
sun.reflect.DelegatingMethodAccessorImpl.invoke 
(DelegatingMethodAccessorImpl.java:43)
   [2025-04-23T01:21:58.517Z]     at java.lang.reflect.Method.invoke 
(Method.java:498)
   [2025-04-23T01:21:58.517Z]     at 
org.codehaus.plexus.classworlds.launcher.Launcher.launchEnhanced 
(Launcher.java:282)
   [2025-04-23T01:21:58.517Z]     at 
org.codehaus.plexus.classworlds.launcher.Launcher.launch (Launcher.java:225)
   [2025-04-23T01:21:58.517Z]     at 
org.codehaus.plexus.classworlds.launcher.Launcher.mainWithExitCode 
(Launcher.java:406)
   [2025-04-23T01:21:58.517Z]     at 
org.codehaus.plexus.classworlds.launcher.Launcher.main (Launcher.java:347)
   [2025-04-23T01:21:58.517Z] [INFO] prepare-compile in 0.0 s
   [2025-04-23T01:21:58.517Z] [INFO] compile in 6.8 s
   [2025-04-23T01:21:58.517Z] [INFO] 
------------------------------------------------------------------------
   [2025-04-23T01:21:58.517Z] [INFO] Reactor Summary for Gluten Parent Pom 
1.5.0-SNAPSHOT:
   [2025-04-23T01:21:58.517Z] [INFO] 
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Parent Pom 
.................................. SUCCESS [  2.087 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Ras 
......................................... SUCCESS [  7.115 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Ras Common 
.................................. SUCCESS [ 19.602 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Shims 
....................................... SUCCESS [  4.186 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Shims Common 
................................ SUCCESS [ 20.049 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Shims for Spark 3.5 
......................... SUCCESS [  8.816 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten UI 
.......................................... SUCCESS [  3.276 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Core 
........................................ SUCCESS [ 16.985 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Substrait 
................................... SUCCESS [ 27.162 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Celeborn 
.................................... SUCCESS [  3.632 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Iceberg 
..................................... SUCCESS [  7.372 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten DeltaLake 
................................... SUCCESS [  7.106 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Package 
..................................... SUCCESS [  7.114 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Ras Planner 
................................. SUCCESS [  0.281 s]
   [2025-04-23T01:21:58.517Z] [INFO] Gluten Backends ClickHouse 
......................... FAILURE [ 19.523 s]
   [2025-04-23T01:21:58.517Z] [INFO] 
------------------------------------------------------------------------
   [2025-04-23T01:21:58.517Z] [INFO] BUILD FAILURE
   [2025-04-23T01:21:58.517Z] [INFO] 
------------------------------------------------------------------------
   [2025-04-23T01:21:58.517Z] [INFO] Total time:  02:34 min
   [2025-04-23T01:21:58.517Z] [INFO] Finished at: 2025-04-23T01:21:58Z
   [2025-04-23T01:21:58.517Z] [INFO] 
------------------------------------------------------------------------
   [2025-04-23T01:21:58.517Z] [ERROR] Failed to execute goal 
net.alchim31.maven:scala-maven-plugin:4.8.0:compile (scala-compile-first) on 
project backends-clickhouse: scala compilation failed -> [Help 1]
   [2025-04-23T01:21:58.517Z] [ERROR] 
   [2025-04-23T01:21:58.517Z] [ERROR] To see the full stack trace of the 
errors, re-run Maven with the -e switch.
   [2025-04-23T01:21:58.517Z] [ERROR] Re-run Maven using the -X switch to 
enable full debug logging.
   [2025-04-23T01:21:58.517Z] [ERROR] 
   [2025-04-23T01:21:58.518Z] [ERROR] For more information about the errors and 
possible solutions, please read the following articles:
   [2025-04-23T01:21:58.518Z] [ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
   [2025-04-23T01:21:58.518Z] [ERROR] 
   [2025-04-23T01:21:58.518Z] [ERROR] After correcting the problems, you can 
resume the build with the command
   [2025-04-23T01:21:58.518Z] [ERROR]   mvn <args> -rf :backends-clickhouse
   script returned exit code 1
   ```
   
   ## How was this patch tested?
   
   Manual packaging succeeded with Spark 3.5 and Scala 2.13.
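
   The build log's "Multiple versions of scala libraries detected!" warning points at `org.apache.flink:flink-scala_2.12:1.16.2` dragging Scala 2.12 artifacts onto a 2.13 classpath. As an illustrative sketch only (the coordinates below are copied from the log's warning; the actual change in this PR may take a different approach), one common way to keep such a transitive Scala runtime off the compile classpath is a Maven exclusion:
   
   ```xml
   <!-- Hypothetical sketch: exclude the transitive Scala 2.12 runtime pulled
        in by flink-scala_2.12 so it cannot shadow the project's Scala 2.13
        libraries. Coordinates are taken from the log's warning, not from the
        actual fix in this PR. -->
   <dependency>
     <groupId>org.apache.flink</groupId>
     <artifactId>flink-scala_2.12</artifactId>
     <version>1.16.2</version>
     <exclusions>
       <exclusion>
         <groupId>org.scala-lang</groupId>
         <artifactId>scala-library</artifactId>
       </exclusion>
       <exclusion>
         <groupId>org.scala-lang</groupId>
         <artifactId>scala-reflect</artifactId>
       </exclusion>
     </exclusions>
   </dependency>
   ```
   
   After such a change, `mvn dependency:tree -pl backends-clickhouse` can be used to confirm that only one `scala-library` version remains on the module's classpath.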
   
   


-- 
This is an automated message from the Apache Git Service.